Imagine you have a projector pointing at a scene, which you’re photographing with a camera aimed from a different point. By modelling the light transport between the two, [okooptics] has shown us how you can capture an image from the projector’s point of view instead of the camera’s, and even synthetically light the scene however you might like.

The concept involves capturing data on how light travels from each projector pixel, through the scene, to each camera pixel. This could be achieved by lighting one pixel of the projector at a time while capturing an image with the camera. However, even for a low-resolution projector of, say, 256×256 pixels, this would require capturing 65536 individual images and take a very long time. Instead, [okooptics] explains how the same task can be achieved by projecting binary coded images, which allow the same data to be captured using just seventeen exposures.
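To get a feel for where the number seventeen comes from, here’s a rough Python sketch, assuming a Gray-code scheme and mostly direct illumination — the pattern layout, names, and threshold are our own illustration, not [okooptics]’s actual code. Eight bit-plane frames encode the projector column, eight encode the row, and one all-white reference frame rounds it out to seventeen, rather than one frame per projector pixel.

```python
# Minimal sketch: Gray-coded structured light for a 256x256 projector.
# 8 column bit-planes + 8 row bit-planes + 1 white reference = 17 frames.
import numpy as np

W = H = 256          # assumed projector resolution
BITS = 8             # log2(256) bit-planes per axis

def gray_code(n):
    """Convert a binary index to its Gray-code equivalent."""
    return n ^ (n >> 1)

def make_patterns():
    """Return the 17 projector frames as uint8 images."""
    cols = gray_code(np.arange(W))
    rows = gray_code(np.arange(H))
    patterns = []
    for b in range(BITS):                              # column bit-planes
        plane = ((cols >> b) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(plane, (H, 1)))
    for b in range(BITS):                              # row bit-planes
        plane = ((rows >> b) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(plane[:, None], (1, W)))
    patterns.append(np.full((H, W), 255, np.uint8))    # white reference frame
    return patterns                                    # len(patterns) == 17

def decode(captures, white, threshold=0.5):
    """Recover, per camera pixel, the (row, col) of the projector pixel
    that lit it, from the 16 captured bit-plane images."""
    col_bits = [(captures[b] > threshold * white) for b in range(BITS)]
    row_bits = [(captures[BITS + b] > threshold * white) for b in range(BITS)]
    col_gray = sum(bit.astype(np.uint16) << b for b, bit in enumerate(col_bits))
    row_gray = sum(bit.astype(np.uint16) << b for b, bit in enumerate(row_bits))

    def gray_to_binary(g):
        # Undo the Gray coding by cascading XOR shifts
        b, shift = g.copy(), 1
        while shift < 16:
            b ^= b >> shift
            shift <<= 1
        return b

    return gray_to_binary(row_gray), gray_to_binary(col_gray)
```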
Once armed with this light transport data, it’s possible to do wild tricks. You can synthetically light the scene, as if the projector were displaying any novel lighting pattern of your choice. You can also construct a simulated photo taken from the projector’s perspective, and even do some rudimentary depth reconstruction. [okooptics] explains this tricky subject well, using visual demonstrations to indicate how it all works.
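To make those tricks concrete, here’s a hedged sketch of what the measured transport data lets you do, written in the transport-matrix form used by the Dual Photography work. Each column of the matrix is the camera image produced by lighting a single projector pixel; the toy resolutions and random stand-in data below are assumptions purely for illustration, since a full-resolution matrix would be enormous.

```python
# Illustration only: relighting and dual photography as matrix products.
import numpy as np

CAM_H, CAM_W = 64, 64        # toy camera resolution
PRJ_H, PRJ_W = 32, 32        # toy projector resolution

# Transport matrix: flattened camera image = T @ flattened projector pattern.
# Stand-in random data here; in practice this is what the coded capture measures.
T = np.random.rand(CAM_H * CAM_W, PRJ_H * PRJ_W).astype(np.float32)

def relight(T, projector_pattern):
    """Synthetic relighting: the camera image under any novel projector
    pattern is a single matrix-vector product, c = T p."""
    return (T @ projector_pattern.reshape(-1)).reshape(CAM_H, CAM_W)

def dual_photograph(T, virtual_camera_illumination):
    """Helmholtz reciprocity lets T run backwards: its transpose maps a
    pattern 'emitted' by the camera to the image seen from the projector's
    point of view, i.e. the dual photograph."""
    return (T.T @ virtual_camera_illumination.reshape(-1)).reshape(PRJ_H, PRJ_W)

# Example: relight with a horizontal gradient, then take the dual photo
# under uniform virtual illumination from the camera's position.
gradient = np.tile(np.linspace(0, 1, PRJ_W, dtype=np.float32), (PRJ_H, 1))
relit = relight(T, gradient)
dual = dual_photograph(T, np.ones((CAM_H, CAM_W), dtype=np.float32))
```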
The work was inspired by the “Dual Photography” paper published at SIGGRAPH back in 2005, a conference that continues to produce outrageously interesting work to this day.
Please also add the tag:
https://hackaday.com/tag/structured-light/
I mean this is literally how an Xbox Kinect (and I think most Face ID sensors) famously works, just in infrared.
Which Kinect? IIRC, the original Kinect uses a fixed constellation of infrared laser dots, and you probably couldn’t get an image from the PoV of this “projector” the way the OP is getting his. I think the new Kinect uses ToF.