Google Light Fields Trying To Get The Jump On Magic Leap

Light field technology is a fascinating area of virtual reality research that emulates the way light actually behaves in order to make a virtual scene look more realistic. By reproducing the light entering the eye from many different angles, a light field scene preserves effects such as parallax and reflections, so it looks much closer to reality. It is rumored to be part of the technology in the forthcoming Magic Leap headset, but it looks like Google is trying to steal some of their thunder. The VR research arm of the search giant has released a VR app called Welcome to Light Fields that uses a similar technique on existing VR headsets, such as those from Oculus and Microsoft.

The magic sauce is in the way the image is captured: Google uses a semicircular arrangement of 13 GoPro cameras that is rotated to capture about a thousand images. The captured images are then stitched together by Google’s software into the final light field. The forthcoming Magic Leap headset is thought to need special optics to create this effect, but the Google version works on standard VR headsets. According to those who have tried it, the effect works well but has some quirks: it only works on still images at the moment, and any movement in the scene while the rig is rotating ruins the effect. A writer from Technology Review who got to try the Google software also notes that people in the shot don’t work: because they naturally follow the camera with their eyes, they seem to track your gaze as you pan around the VR image, like one of those creepy portraits whose eyes follow you around the room.
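To make the capture-and-stitch idea concrete, here is a minimal, hypothetical sketch (the function, its inputs, and the blending scheme are assumptions, not Google’s actual software): it approximates a novel viewpoint by blending the two captured images taken closest to where the viewer’s eye now is. A real light field renderer resamples individual rays, but the crude version shows why you want many captures packed densely around the viewing volume.

```python
import numpy as np

def novel_view(eye_pos, cam_positions, cam_images):
    """Crude light-field-style view synthesis (illustrative only).

    eye_pos: (3,) viewer position inside the captured volume.
    cam_positions: (N, 3) positions the rotating rig captured from.
    cam_images: (N, H, W, 3) the captured images as float arrays.
    """
    dists = np.linalg.norm(cam_positions - eye_pos, axis=1)
    nearest = np.argsort(dists)[:2]            # the two closest capture points
    d0, d1 = dists[nearest[0]], dists[nearest[1]]
    w0 = d1 / (d0 + d1 + 1e-9)                 # inverse-distance weighting
    return w0 * cam_images[nearest[0]] + (1.0 - w0) * cam_images[nearest[1]]
```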

24 thoughts on “Google Light Fields Trying To Get The Jump On Magic Leap”

  1. So many problems. These light fields are designed to allow a scene to be rendered from multiple viewpoints; they have very little to do with light field displays. The functionality of any light field tech within Magic Leap is completely unknown, and most of the promises are very likely to end up being vaporware. Real light field displays already exist and a lot is known about them, such as NVIDIA’s light field display.

    1. Didn’t you see that picture of the man holding up a rectangle of glass? That man was given 1 billion dollars, therefore that glass is a real functioning light field display, despite it not having any indication of an edge connector, bond wires, or a physical interface for data or power… it’s clearly wireless and uses inductive power.

      money = wisdom and truth in our society

      and $1b is never wrong in a world where logic, experience and skepticism are cheap/free

      1. He’s supposedly holding the “waveguide” optic. It would not have any wires, data, power, etc. going to it. But what it should have visible are its holographic optical elements (HOEs), where a projection engine “injects” an image into the waveguide to be “guided” to the eye. This is done through 2 to 3 HOEs that expand and direct the rays from the projection engine into the eye. They would show up as rainbow-ish partial reflections in the substrate. These are not present in those pictures, most likely because they are a big part of the trade secret / patent process. He is not stupid enough to show off that part of their secret sauce to people who could steal it (in the time frame when he first showed the picture).

        From what I heard (being deep in the AR community), their headset does work and does its job well for a first-generation system. It will be the content that makes or breaks it. Or it could be a big bust, like VR for computer games back in the 1990s, compared to now, when just about every cell phone can become a VR display. Maybe this is the ’90s for AR and in 20 years it will be ready.

        1. The killer app for AR is a 2D virtual-monitor app: if I can replace my 2 monitors with 7 virtual AR desktops with decent resolution all over my office, then a pair of high-res, lightweight AR glasses will sell like hotcakes.

    1. The best I can get from a quick Google search is that traditional displays are flat. Look to one corner and you’re still focusing on the same plane that everything else is at, so nothing else on-screen blurs. This is unlike reality, where if you look at your finger in front of your face, the background blurs as your eyes refocus. If you can simulate that blur you could make VR more realistic. Or you could irritate a whole bunch of people. Who knows.
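As a rough illustration of that focus-blur idea, here is a hypothetical sketch (the function names and the blur model are assumptions, not anything from the article): given a per-pixel depth map and the depth the user is fixating on, each depth band is blurred in proportion to how far it sits from the fixation depth.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_blur(image, depth, fixation_depth, strength=2.0, bands=6):
    """image: (H, W, 3) float array; depth: (H, W) in metres."""
    out = np.zeros_like(image)
    edges = np.linspace(depth.min(), depth.max(), bands + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mid = 0.5 * (lo + hi)
        sigma = strength * abs(mid - fixation_depth)     # more defocus further from fixation
        blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
        mask = ((depth >= lo) & (depth <= hi))[..., None]
        out = np.where(mask, blurred, out)
    return out
```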

    2. You can simulate 3D by taking two photos side by side, but that static viewpoint gets old pretty quickly.
      If you move your head around, you will need another two side-by-side images generated for your noggin’s new position.
      3D games can do this, but the rendering technology doesn’t quite take all those subtle photonic nuances into account, since the CPU/GPU overhead would be too high.

      This tech captures the rays of light from multiple cameras along the vertical axis (up/down) and reconstructs the light for your particular position,
      mostly for looking up and down with 360 degrees of rotation.

      If this were used with the Google Street View car, one could travel down the road (in ~10 meter jumps) and experience a more real sense of being there. But that would take… a long time.
      Until we can do this for lots of lateral movement as well, this tech is really only great for single bubbles in any given space.

      Light field technology is still in its infancy; I can’t wait to see where it goes. Although I have a strong feeling that the light field capture technology of the near future will look ridiculous and require insane amounts of storage and processing capability (rough numbers sketched after this comment).

      In a perfect world, a spherical diffraction grating surrounding a spherical imaging sensor, with each pixel able to record light from multiple directions, would do the trick.
      Kind of like a fly’s eye. But *sigh* we’re quite a way off from anything like that.
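To put a rough number on the storage worry above: only the “about a thousand images” figure comes from the article; the per-image resolution below is an assumption for the sake of the estimate.

```python
# Back-of-envelope storage for one still light-field capture.
images = 1000              # "about a thousand images" per capture (from the article)
megapixels = 12            # assumed per-camera resolution
bytes_per_pixel = 3        # uncompressed 8-bit RGB

raw_bytes = images * megapixels * 1_000_000 * bytes_per_pixel
print(f"~{raw_bytes / 1e9:.0f} GB uncompressed for a single static scene")
# prints: ~36 GB uncompressed for a single static scene
```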

    3. The problem is, the write-up is conflating two different things: a light field CAPTURE system, which is what Google has, letting you record a scene in a way that you can observe in VR without “rendering,” and a light field DISPLAY, which allows you to look through it and see objects as if they were displayed deeper in the scene. The technologies have nothing to do with each other beyond the “lightfield” concept. It’s like conflating a laser and an automated license plate reader because they’re both “photonic devices.”

  2. The last time I was this bored reading about light field technology was when I ‘experienced’ Magic Leap’s first teaser.

    Zero substance leaves an empty feeling in my head… like I’m on a mental diet.

    Maybe it would be useful to mention how this is an improvement over the following setup demoed a while back:
    https://www.youtube.com/watch?v=pyJUg-ja0cg

    …or how it’s different than this tech:
    https://www.youtube.com/watch?v=rEMP3XEgnws

  3. Maybe Google should just buy Lytro;
    they developed special light field cameras.
    The consumer market seemed a little thin for such a thing, but now that VR and AR are pushing into the everyday, I’m sure they could be used to great effect.

    I have a Lytro gen 1 camera. It’s a neat idea, but again, the consumer applications for such a thing are meh at best.

    1. Last I heard (admittedly a while back) was that Lytro was going to focus on lightfield cameras for image capture to be used for VR content rather than the consumer camera market.

    2. Different kind of light field. The demo Google is doing is a volumetric light field: you can move your head around anywhere in the volume and see a 3D image. The Lytro light field is just for one 2D plane, but for that plane it is very high resolution. You can adjust the viewpoint very slightly, pretty much by the size of the lens. The important part is that the light field resolution in the Lytro case is sufficient to choose your focal plane in post (sketched below). I don’t think the volumetric light field Google is demoing has sufficient data for that.

      1. A plenoptic camera like the Lytro inherently captures the light field through its primary lens with minimal gaps. Google’s rig has major gaps and will require a lot of post-processing and estimation to fill them as it is. What Google should have done, imo, is not create an arc of cameras but a half square instead, and then not only rotate it but also change the rig’s vertical position to eliminate the gaps between each camera. If they used an array of plenoptic cameras, the operation could be performed faster, as the vertical position could be dropped by the entire height of the lens on each revolution rather than by the smaller increments a standard camera would require.
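A minimal sketch of the “choose your focal plane in post” trick mentioned above, assuming the light field has already been split into a grid of sub-aperture views (the function and its parameters are hypothetical, not Lytro’s API): shifting each view in proportion to its offset from the central view and averaging synthetically refocuses the image on a chosen depth.

```python
import numpy as np

def refocus(sub_views, offsets, shift_per_unit):
    """Shift-and-add synthetic refocusing (illustrative only).

    sub_views: (N, H, W, 3) sub-aperture images from a plenoptic capture.
    offsets: (N, 2) each view's (du, dv) offset from the central view.
    shift_per_unit: pixels of shift per unit of offset; this selects the focal plane.
    """
    acc = np.zeros_like(sub_views[0], dtype=float)
    for view, (du, dv) in zip(sub_views, offsets):
        # shift rows by the vertical offset, columns by the horizontal offset
        shifted = np.roll(view, (int(round(shift_per_unit * dv)),
                                 int(round(shift_per_unit * du))), axis=(0, 1))
        acc += shifted
    return acc / len(sub_views)
```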

  4. Am I right in thinking that this is all about making 360-degree volumetric movies? In the case of games, the image can be processed in pretty much any arbitrary fashion. It would be interesting, though, if the headset could tell which object the user is looking at in the game and adjust the image to replicate the experience more realistically.
