Supercon: Alex Hornstein’s Adventures In Hacking The Lightfield

We are all familiar with the idea of a hologram, either from the monochromatic laser holographic images you’ll find on your bank card or from fictional depictions such as Princess Leia’s distress message from Star Wars. And we’ve probably read about how laser holograms are made, with a split beam of coherent light recombined to fall upon a photographic plate. They require no special glasses or headsets and possess both stereoscopic and spatial 3D rendering, in that you can view the 3D Princess Leia, your bank’s logo, or whatever is on your card as a 3D object from multiple angles. So we’re all familiar with the holographic end product, but what we probably aren’t so familiar with is what it represents: the capture of a light field.

In his Hackaday Superconference talk, Alex Hornstein, co-founder and CTO of holographic display startup Looking Glass Factory, introduced us to the idea of the light field, and how its capture is key to understanding the mechanics of a hologram.

Capturing the light field with a row of GoPro cameras.

His first point is an important one: he expands the definition of a hologram from its conventional form, one of those monochromatic laser-interference photographic images, to any technology that captures a light field. This is, he concedes, a contentious redefinition, and to make the case for it he first has to explain what a light field is.

When we take a 2D photograph, we capture all the rays of light that are incident upon something that is a good approximation to a single point: the lens of the camera. The scene before us of course contains countless other rays, incident upon other points or reflected from surfaces invisible from the single vantage point of the 2D camera. It is this complex array of light rays that makes up the light field of the scene, and capturing it in its entirety is key to manipulating the result, whatever technology is used to bring it to the viewer. A light field capture can be used to generate variable-focus 2D images after the fact, as is the case with the Lytro cameras, or it can be used to generate a hologram in the way that he describes.
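To make the idea concrete, here is a toy sketch (not from the talk) of the common two-plane way of thinking about a light field: store a colour for every combination of camera position and pixel direction, and a conventional photograph falls out as a single slice of that array. The array shape, variable names, and the nine-stop rail are assumptions chosen purely for illustration.

```python
# A toy sketch of the light-field idea: instead of recording only the rays
# through a single lens position, store a colour for every combination of
# camera position (s, t) and pixel direction (u, v). Shapes are illustrative.
import numpy as np

S, T = 1, 9          # camera positions: a single horizontal rail of 9 stops
U, V = 480, 640      # pixel resolution of each individual 2D capture

light_field = np.zeros((S, T, U, V, 3), dtype=np.uint8)

# A conventional 2D photograph is just one slice of the field: every ray
# that happened to pass through one camera position.
single_photo = light_field[0, 4]

# Everything the extra dimensions buy you (refocusing, perspective shift,
# holographic display) comes from having the other slices as well.
left_view, right_view = light_field[0, 0], light_field[0, 8]
```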

One possible future use of the technology, a virtual holographic aquarium.

The point of his talk is that complex sorcery isn’t required to capture a light field, something he demonstrates in front of the audience with a volunteer and a standard webcam on a sliding rail. Multiple 2D images are taken at different points along the rail, and these can be combined into a light field. It doesn’t matter that not every component of the light field has been captured, only that there is enough to create the holographic image from the point of view of the display. And since he happens to be head honcho at a holographic display company, he can show us the result: Looking Glass Factory’s display panel uses a lenticular lens to combine the multiple images into a hologram, and is probably one of the most inexpensive ways to practically display this type of image.
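For a sense of how little sorcery is involved, here is a rough sketch of that rail-capture demo in Python with OpenCV: grab one webcam frame at each stop along the slide, then tile the views into a single “quilt” image of the kind lenticular displays typically consume. The number of stops, the tile layout, and the output file name are assumptions rather than Looking Glass specifics.

```python
# A rough sketch of the rail-capture demo: one webcam frame per stop along
# the slide, tiled into a single quilt image. Stop count and layout are
# assumptions for illustration only.
import cv2
import numpy as np

NUM_VIEWS = 45                 # assumed number of stops along the rail
COLS, ROWS = 9, 5              # assumed quilt layout (COLS * ROWS == NUM_VIEWS)

cap = cv2.VideoCapture(0)
views = []
for i in range(NUM_VIEWS):
    input(f"Move the camera to stop {i + 1}/{NUM_VIEWS}, then press Enter")
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("Failed to read from the webcam")
    views.append(frame)
cap.release()

# Tile the views left-to-right, bottom-to-top (a common quilt convention).
h, w, _ = views[0].shape
quilt = np.zeros((ROWS * h, COLS * w, 3), dtype=np.uint8)
for i, view in enumerate(views):
    r, c = ROWS - 1 - (i // COLS), i % COLS
    quilt[r * h:(r + 1) * h, c * w:(c + 1) * w] = view

cv2.imwrite("quilt.png", quilt)
```

A quilt like this can then be handed to whatever display software expects it; the point is simply that the capture side is a webcam and a ruler, not a laser lab.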

Since the arrival of the Lytro cameras a few years ago, the concept of a light field has been in the air, but it has more often than not been wrapped in proprietary marketing woo. This talk breaks through that to deliver a clear explanation of the subject, and is a fascinating watch. Alex leaves us with news of some of the first light-field-derived video content being put online, and with some decidedly science-fiction possible futures for the technology. Even if you aren’t planning to work in this field, you will almost certainly encounter it over the next few years.

Continue reading “Supercon: Alex Hornstein’s Adventures In Hacking The Lightfield”

Google Light Fields Trying To Get The Jump On Magic Leap

Light field technology is a fascinating area of virtual reality research that emulates the way light actually behaves in order to make a virtual scene more convincing. By reproducing light entering the eye from multiple angles, the scene appears much closer to reality. It is rumored to be part of the technology in the forthcoming Magic Leap headset, but it looks like Google is trying to steal some of that thunder: the VR research arm of the search giant has released a VR app called Welcome to Light Fields that uses a similar technique on existing VR headsets, such as those from Oculus and Microsoft.

Continue reading “Google Light Fields Trying To Get The Jump On Magic Leap”

Magic Leap Finally Announced; Remains Mysterious

Yesterday Magic Leap announced that it will ship developer edition hardware in 2018. The company is best known for raising a lot of money. That’s only partially a joke, since the teased hardware has remained very mysterious and never been revealed, yet they have managed to raise nearly $2 billion through four rounds of funding (three of them raising more than $500 million each).

The announcement launched Magic Leap One — subtitled the Creator Edition — with a mailing list sign up for “designers, developers and creatives”. The gist is that the first round of hardware will be offered for sale to people who will write applications and create uses for the Magic Leap One.

We’ve gathered some info about the hardware, but we’ll certainly begin the guessing game on the specifics below. The one mystery that has been solved is how this technology is delivered: as a pair of goggles attaching to a dedicated processing unit. How does it stack up to current offerings?

Continue reading “Magic Leap Finally Announced; Remains Mysterious”

Capturing That (Light Field) Moment

Yes, your eyes do not lie: that is 12 cameras rigged to take a picture at the exact same moment. The idea is that a single camera loses data (namely depth) when it flattens a 3D scene onto a 2D medium. FuturePicture somewhat circumvents this loss by taking several pictures with different focus distances. In short, the camera array allows you to focus on multiple items within a scene. The project’s hardware and software have yet to be released (we do know there’s at least an Arduino involved), but they plan to make it entirely open source so everyone can experiment. Of course, we’ll keep you up to date.
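As a rough illustration of what a stack of differently focused shots buys you, here is a minimal after-the-fact focusing sketch in Python with OpenCV, written against hypothetical file names since the FuturePicture code isn’t released yet: for each pixel, keep the frame in which that pixel was sharpest. The Laplacian sharpness measure and the twelve-file naming scheme are assumptions for illustration, not details from the project.

```python
# A minimal focal-stack sketch: given photos of the same scene taken at
# different focus distances, keep the sharpest pixels from each frame.
# File names and the sharpness measure are assumptions for illustration.
import cv2
import numpy as np

paths = [f"shot_{i}.jpg" for i in range(12)]     # hypothetical 12-camera capture
stack = [cv2.imread(p) for p in paths]

# Per-pixel sharpness: magnitude of the Laplacian of each greyscale frame.
def sharpness(img):
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return np.abs(cv2.Laplacian(grey, cv2.CV_64F))

scores = np.stack([sharpness(img) for img in stack])   # shape (12, H, W)
best = np.argmax(scores, axis=0)                       # sharpest shot per pixel

# Build the composite by choosing, per pixel, the frame that was in focus there.
composite = np.zeros_like(stack[0])
for i, img in enumerate(stack):
    composite[best == i] = img[best == i]

cv2.imwrite("all_in_focus.jpg", composite)
```

Swap the automatic argmax for a user-supplied mask and you get click-to-refocus behaviour instead of an all-in-focus composite.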
[via Make]