Light fields are a subtle but critical element in making 3D video look “real”, and they have little to do with either resolution or field of view. Meta (formerly Facebook) has shown off a prototype VR headset that delivers light field passthrough video to the user for a more realistic view of their surroundings, using a nifty lens and aperture combination to make it happen.
As we move our eyes (or our heads, for that matter) to take in a scene, we see things from slightly different perspectives in the process. These differences are important cues our brains use to interpret the world. But a camera captures a scene as a flat plane from a single point of view, which differs from the way our eyes work in several important ways. A big reason stereoscopic 3D video doesn’t actually look particularly real is that the information it presents lacks these subtleties.
Back in 2012, technology websites were abuzz with news of the Lytro: a camera that was going to revolutionize photography thanks to its innovative light field technology. An array of microlenses in front of the sensor let it capture a 3D image of a scene from one point, allowing the user to extract depth information and to change the focus of an image even after capturing it.
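To make the trick a bit more concrete: the sensor sits behind a grid of microlenses, and the pixels under each microlens see the same scene point from different spots on the main aperture. Here is a minimal sketch, with assumed parameters (a square, axis-aligned microlens grid, which is a simplification of the real sensor layout), of pulling one such “sub-aperture” view out of a raw capture:

```cpp
// A minimal sketch (parameters assumed; this is not Lytro's actual raw
// format) of decomposing a microlens-array capture into sub-aperture views:
// the pixel at offset (u, v) under every microlens, gathered across all
// lenses, forms one view of the scene from one point on the aperture.
#include <cstdint>
#include <vector>

struct Raw { int w, h; std::vector<uint8_t> px; };  // grayscale sensor image

// lensPx: pixels per microlens (assumed square, axis-aligned grid).
// Returns the sub-aperture view for aperture offset (u, v), 0 <= u,v < lensPx.
Raw subApertureView(const Raw& sensor, int lensPx, int u, int v) {
    Raw view{sensor.w / lensPx, sensor.h / lensPx, {}};
    view.px.resize(view.w * view.h);
    for (int j = 0; j < view.h; ++j)
        for (int i = 0; i < view.w; ++i)
            view.px[j * view.w + i] =
                sensor.px[(j * lensPx + v) * sensor.w + (i * lensPx + u)];
    return view;
}
```

Comparing the parallax between these views yields depth, and summing shifted copies of them is essentially what refocusing after the fact amounts to.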
The technology turned out to be a commercial failure, however, and the company faded into obscurity. Lytro cameras can now be had for as little as $20 on the second-hand market, as [ea] found out when he started to investigate light field photography. They still work just as well as they ever did, but since the accompanying PC software is definitely starting to show its age, [ea] decided to reverse-engineer the camera’s firmware so he could write his own application.
[ea] started by examining the camera’s hardware. The main CPU turned out to be a MIPS processor similar to those used in various cheap camera gadgets, next to what looked like an unpopulated socket for a serial port and a set of JTAG test points. The serial port was sending out a bootup sequence and a command prompt, but didn’t seem to respond to any inputs.
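For anyone wanting to poke at similar hardware, a hedged sketch of that first step, just listening for boot messages, looks something like the following. The device path and baud rate are assumptions for illustration, not details from [ea]’s write-up:

```cpp
// Minimal sketch of listening to a board's serial port for boot messages.
// Assumes a USB-serial adapter wired to the suspected TX/GND pads; the
// device path and 115200 baud are guesses to try first, not known values.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);               // raw mode: no line editing or translation
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);

    // Dump whatever the board prints while it boots
    char buf[256];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0) fwrite(buf, 1, n, stdout);
    }
}
```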
Baseball jokes aside, holograms have been a dream for decades, and with devices finally around that support something like them, we have finally started to wonder how to make content for them. [Mike Rigsby] recently entered his stop-motion holographic setup into our sci-fi contest, and we love the idea.
Rather than a three-dimensional model or a flat 2D picture, the Looking Glass light field display works from a series of images captured from quantized viewpoints (hence “light field”). As you move around the display, views are interpolated between the frames it does have, giving a pretty convincing effect. In traditional stop motion animation, you need anywhere between 12 and 24 frames for about one second of animation. Now that every frame requires 48 pictures, that works out to as many as 1,152 pictures for just one second of animation. Two problems quickly appear: how to take photographs from exactly the same position every time, and how to manage the deluge of photos sensibly.

[Mike] started with a wooden stage for his actors. A magnet was mounted to the photo rail carriage so that a stationary sensor could detect when the carriage returned to the same spot. An Arduino controls the rail, reads the sensor, and fires the camera shutter, along the lines of the sketch below. The DSLR he’s using can’t do that many frames per second, but that’s a problem for another sci-fi contest.
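Here is a minimal sketch of that control loop. The pin assignments, step counts, and timing are assumptions for illustration, not [Mike]’s actual firmware: home the carriage against the magnet, then alternate between firing the shutter and advancing the rail for each of the 48 views.

```cpp
// Minimal Arduino sketch (assumed pins and timing) of the capture loop:
// home the rail against the carriage magnet, then step/shoot 48 views.
const int STEP_PIN = 2;      // step pulse to the rail's stepper driver
const int DIR_PIN = 3;       // direction of travel
const int HALL_PIN = 4;      // hall-effect sensor that sees the carriage magnet
const int SHUTTER_PIN = 5;   // drives an optocoupler across the DSLR remote jack

const int VIEWS_PER_FRAME = 48;   // pictures needed for one Looking Glass frame
const int STEPS_PER_VIEW = 200;   // rail travel between adjacent views (assumed)

void stepRail(int steps, bool forward) {
  digitalWrite(DIR_PIN, forward ? HIGH : LOW);
  for (int i = 0; i < steps; i++) {
    digitalWrite(STEP_PIN, HIGH);
    delayMicroseconds(500);
    digitalWrite(STEP_PIN, LOW);
    delayMicroseconds(500);
  }
}

void fireShutter() {
  digitalWrite(SHUTTER_PIN, HIGH);   // close the remote-release circuit
  delay(100);
  digitalWrite(SHUTTER_PIN, LOW);
  delay(1500);                       // give the DSLR time to write the file
}

void setup() {
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
  pinMode(SHUTTER_PIN, OUTPUT);
  pinMode(HALL_PIN, INPUT_PULLUP);   // sensor pulls the pin low at the magnet
  // Home: back the carriage up until the sensor sees the magnet, so every
  // frame starts from the same physical spot.
  while (digitalRead(HALL_PIN) == HIGH) stepRail(1, false);
}

void loop() {
  for (int v = 0; v < VIEWS_PER_FRAME; v++) {
    fireShutter();
    stepRail(STEPS_PER_VIEW, true);
  }
  while (true) {}  // one frame captured; move the actors, then reset
}
```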
Holographic-ish displays are finally here, and they’re getting better. But if a display isn’t your speed, perhaps some laser-powered glasses can be the holographic experience you’re looking for?
3D video content has a significant limitation, one that is not trivial to solve. Video captured by a camera, even one with high resolution and a very wide field of view, still records a scene as a flat plane from a fixed point of view. The limitation this brings will be familiar to anyone who has watched a 3D video (or “360 video”) in VR and moved their head the wrong way. In these videos one is free to look around, but may not change the position of their head in the process. Put another way, pivoting one’s head to look up, down, left, or right is fine. Moving one’s head higher, lower, closer, further, or to the side? None of that works. Natural movements like trying to peek over an object or moving slightly to the side for a better view simply do not work.
We are all familiar with the idea of a hologram, either from the monochromatic laser holographic images you’ll find on your bank card or from fictional depictions such as Princess Leia’s distress message from Star Wars. And we’ve probably read about how laser holograms work, with a split beam of coherent light recombined to fall upon a photographic plate. They require no special glasses or headsets and possess both stereoscopic and spatial 3D rendering, in that you can view both the 3D Princess Leia and your bank’s logo or whatever is on your card as 3D objects from multiple angles. So we’re all familiar with that holographic end product, but what we probably aren’t so familiar with is what it represents: the capture of a light field.
In his Hackaday Superconference talk, co-founder and CTO of holographic display startup Looking Glass Factory Alex Hornstein introduced us to the idea of the light field, and how its capture is key to the understanding of the mechanics of a hologram.
His first point is an important one: he expands the definition of a hologram from its conventional form, one of those monochromatic laser-interference photographic images, into any technology that captures a light field. This is, he concedes, a contentious barrier to overcome. To do that, he first has to explain what a light field is.
When we take a 2D photograph, we capture all the rays of light incident upon something that is a good approximation to a single point: the lens of the camera. The scene before us of course contains countless other rays, incident upon other points or reflected from surfaces invisible from the camera’s single vantage point. It is this complex array of light rays that makes up the light field of the scene, and capturing it in its entirety is key to manipulating the result, no matter the technology used to bring it to the viewer. A light field capture can be used to generate variable-focus 2D images after the fact, as is the case with the Lytro cameras, or it can be used to generate a hologram in the way that he describes.
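For the mathematically inclined, the usual way to formalize this is the two-plane parameterization from the light field literature (the notation is the standard one, not something from the talk): a ray is indexed by where it crosses an aperture plane at $(u, v)$ and an image plane at $(s, t)$, so the whole field is a 4D function $L(u, v, s, t)$. Refocusing after the fact is then a shift-and-sum over the aperture:

$$E_\alpha(s, t) \propto \iint L\!\left(u,\; v,\; u + \frac{s - u}{\alpha},\; v + \frac{t - v}{\alpha}\right) \, du \, dv,$$

where $\alpha$ sets the depth of the new focal plane. Every sample of $L$ is just a pixel from one of the captured 2D views, which is why refocusing is possible at all once the light field is in hand.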
The point of his talk is that complex sorcery isn’t required to capture a light field, something he demonstrates in front of the audience with a volunteer and a standard webcam on a sliding rail. Multiple 2D images are taken at different points along the rail, and these can be combined to form a light field. It doesn’t matter that not every component of the light field has been captured; what matters is that there is enough to create the holographic image from the point of view of the display. And since he happens to be head honcho at a holographic display company, he can show us the result. Looking Glass Factory’s display panel uses a lenticular lens to combine the multiple images into a hologram, along the lines of the interleaving sketched below, and is probably one of the most inexpensive ways to practically display this type of image.
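The interleaving step itself is conceptually simple. Below is a minimal sketch with assumed parameters (a fixed lenticule pitch and slant expressed in output pixels; the real panel’s per-device calibration is more involved): each output pixel works out where it sits under the slanted lenticular lens, and samples the rail-captured view that position corresponds to.

```cpp
// Minimal sketch of lenticular view interleaving (assumed parameters, not
// Looking Glass's actual calibration): each output pixel samples one of N
// rail-captured views, chosen by its position under a slanted lenticule.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Image {
    int w, h;
    std::vector<uint8_t> rgb;  // w * h * 3, row-major
};

// views: N images captured at evenly spaced points along the rail.
// lensPitchPx: lenticule width in output pixels; slopePx: lens slant per row.
Image interleave(const std::vector<Image>& views, int outW, int outH,
                 float lensPitchPx, float slopePx) {
    const int n = static_cast<int>(views.size());
    Image out{outW, outH, std::vector<uint8_t>(outW * outH * 3)};
    for (int y = 0; y < outH; ++y) {
        for (int x = 0; x < outW; ++x) {
            // Fractional position of this pixel under its (slanted) lenticule
            float phase = std::fmod(x + y * slopePx, lensPitchPx) / lensPitchPx;
            int view = std::min(n - 1, static_cast<int>(phase * n));
            const Image& src = views[view];
            // Nearest-neighbor sample from the chosen view
            int sx = x * src.w / outW, sy = y * src.h / outH;
            for (int c = 0; c < 3; ++c)
                out.rgb[(y * outW + x) * 3 + c] =
                    src.rgb[(sy * src.w + sx) * 3 + c];
        }
    }
    return out;
}
```

From any given eye position, the lens then makes only the stripes belonging to one view visible, which is what turns the stack of rail captures back into parallax.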
Since the arrival of the Lytro cameras, the concept of a light field has been in the air, but it has more often than not been surrounded by proprietary marketing woo. This talk breaks through that to deliver a clear explanation of the subject, and is a fascinating watch. Alex leaves us with news of some of the first light-field-derived video content being put online, and with some decidedly science-fiction possible futures for the technology. Even if you aren’t planning to work in this field, you will almost certainly encounter it over the next few years.
Light field technology is a fascinating area of virtual reality research that emulates the way light actually behaves in order to make a virtual scene look more realistic. Because light reaching the eye is reproduced from multiple angles rather than as a single flat projection, the result is much closer to how we see the real world. It is rumored to be part of the technology included in the forthcoming Magic Leap headset, but it looks like Google is trying to steal some of their thunder. The VR research arm of the search giant has released a VR app called Welcome to Light Fields that uses a similar technique on existing VR headsets, such as those from Oculus and Microsoft.
Yesterday Magic Leap announced that it will ship developer edition hardware in 2018. The company is best known for raising a lot of money. That’s only partially a joke: the teased hardware has remained a mystery, never shown publicly, yet the company has managed to raise nearly $2 billion through four rounds of funding (three of them raising more than $500 million each).
The announcement launched the Magic Leap One, subtitled the Creator Edition, with a mailing-list sign-up for “designers, developers and creatives”. The gist is that the first round of hardware will be offered for sale to people who will write applications and create uses for the Magic Leap One.
We’ve gathered some info about the hardware, but we’ll certainly begin the guessing game on the specifics below. The one mystery that has been solved is how this technology is delivered: as a pair of goggles attaching to a dedicated processing unit. How does it stack up to current offerings?