New Camera Does Realtime Holographic Capture, No Coherent Light Required

Holography is about capturing 3D data from a scene, and being able to reconstruct that scene — preferably in high fidelity. Holography is not a new idea, but engaging in it is not exactly a point-and-shoot affair. One needs coherent light for a start, and it generally only gets touchier from there. But now researchers describe a new kind of holographic camera that can capture a scene better and faster than ever. How much better? The camera goes from scene capture to reconstructed output in under 30 milliseconds, and does it using plain old incoherent light.

The camera and liquid lens are tiny. Together with the computational back end, they can produce a holographic capture of a scene in under 30 milliseconds.

The new camera is a two-part affair: acquisition and calculation. Acquisition consists of a camera with a custom electrically driven liquid-lens design that captures a focal stack of a scene within 15 ms. The back end is a deep-learning neural network (FS-Net) that accepts the camera data and computes a high-fidelity RGB hologram of the scene in about 13 ms. How good are the results? They beat other methods, and reconstructions of the scene from that data look really, really good.
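To make the two-stage structure concrete, here is a minimal sketch of what such a pipeline could look like in Python. The lens and camera APIs, the tensor layout, and the FS-Net interface are all assumptions for illustration, not the researchers’ actual code:

import numpy as np
import torch  # assumed back end; the paper's FS-Net implementation isn't shown here

def capture_focal_stack(camera, lens, drive_voltages):
    # Sweep the electrically driven liquid lens and grab one frame per focus
    # setting; the whole sweep fits inside the ~15 ms acquisition window.
    frames = []
    for v in drive_voltages:                # hypothetical lens API: focus set by voltage
        lens.set_voltage(v)
        frames.append(camera.grab_frame())  # hypothetical camera API
    return np.stack(frames)                 # shape: (num_planes, H, W, 3)

def stack_to_hologram(fs_net, stack):
    # Hand the focal stack to the network, which predicts an RGB hologram
    # (per-channel amplitude and phase) in roughly 13 ms on suitable hardware.
    x = torch.from_numpy(stack).float().permute(0, 3, 1, 2)  # (planes, 3, H, W)
    with torch.no_grad():
        return fs_net(x.unsqueeze(0))       # assumed input convention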

One might wonder what makes this different from, say, a 3D scene captured by a stereoscopic camera or an RGB depth camera (like the now-discontinued Intel RealSense). Those methods capture 2D imagery from a single perspective, combined with depth data to give an understanding of a scene’s physical layout.
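In data terms, that kind of capture is just a color image plus one depth value per pixel, all from a single viewpoint. A quick sketch (the dimensions are arbitrary):

import numpy as np

# RGB-D capture: 2D color from one viewpoint plus a depth map.
rgb   = np.zeros((480, 640, 3), dtype=np.uint8)   # color image
depth = np.zeros((480, 640), dtype=np.float32)    # per-pixel distance, in meters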

Holography, by contrast, captures a scene’s wavefront information, which is to say it captures not just where light is coming from, but how it bends and interferes. This information can be used to optically reconstruct a scene in a way data from other sources cannot, for example by allowing one to shift perspective and refocus after the fact.
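To see why wavefront data permits refocusing, note that a hologram stores a complex field, amplitude and phase per pixel, rather than color and depth. That field can be numerically propagated to any depth with the textbook angular-spectrum method. The sketch below uses placeholder field data and assumed wavelength and pixel pitch, so it illustrates the principle rather than the paper’s specific pipeline:

import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
    # Propagate a complex optical field a distance z (meters) by decomposing
    # it into plane waves, phase-shifting each, and recomposing.
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)         # spatial frequencies, cycles/m
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2      # evanescent components clamped to zero
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Placeholder complex field; a real one would come from the camera pipeline.
field = np.exp(1j * np.random.uniform(0, 2 * np.pi, (512, 512)))
refocused = angular_spectrum_propagate(field, wavelength=532e-9, pixel_pitch=8e-6, z=0.05)
intensity = np.abs(refocused) ** 2                 # what the eye sees at that depth

Shifting perspective is analogous: apply a linear phase ramp (a tilt) to the field before propagating it.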

Being able to capture holographic data in such a way significantly lowers the bar for development and experimentation in holography — something that’s traditionally been tricky to pull off for the home gamer.

10 thoughts on “New Camera Does Realtime Holographic Capture, No Coherent Light Required”

  1. All these camera posts are giving HaD a real Gernsbackian feel and I like it. I remember being amazed by holograms as a child. It’s cool seeing progress like this being made on the technology.

  2. Real-time acquisition and focus stacking is what enables fully sharp macro views at odd angles, as is typical for inspection during reflow. Really looking forward to that, 3D or not.

  3. So it’s a video camera with a lens that does focus stacking in a fraction of a second. Stir in some AI, and ATAMO (And Then A Miracle Occurs)! It produces a “hologram”!

    Aaaand… we’re back to the discussion of what a hologram really is and why this isn’t just called a light field camera.

  4. A while back I saw a video of a demo of a holographic display, not just a camera — this was a real hologram, producing a complete wavefront. It worked by having a big lump of transparent stuff, quartz maybe, surrounded by piezoelectric actuators. By producing the right sonic waveform within the glass it could induce it to bend laser light in just the right way to reproduce the interference patterns in a real hologram, except this could be changed in real time, so producing moving holograms.

    It was connected to a very large computer which was just about capable of doing the real-time computations to render a spinning cube. I wonder what happened to this?

  5. Any other 3D reconstruction technique mentioned here (stereoscopic, time of flight, even old-style Kinect structured-light projection) can also provide shifted perspectives just as well as this can. This camera with a fancy lens also has just as much trouble with occlusions as they do, due to having a single viewpoint. (The exception perhaps being wide-baseline stereoscopic and multiscopic cameras, which literally have multiple viewpoints with guaranteed distance between them.)

    This isn’t even a very good implementation of this technique. There’s a temporal element as the camera effectively rotates through multiple different lenses one by one, meaning that if the camera is moving or the scene is dynamic, reconstruction will suffer. I remember an alternative approach to this holographic reconstruction a while back which used simultaneous capture with multiple cameras at different focal lengths and generated the same wavefront information (and the same ability to change focus later) but without the motion problems.

    1. 15 ms is about 66 fps. Maybe not great for fast-moving objects, but perfectly suitable for slow-moving ones, such as an object on a rotating platform used to generate a full 360-degree scene model, minus interior occlusions. Just because it isn’t perfect doesn’t mean it’s useless.

  6. Who were the researchers? The article says the results are good; maybe it could show them? Some basic information seems to be missing: no links, names, or other details for me to research further.

  7. The capture time of 15 ms got me wondering what sensor they were using. It’s noted as a Sony model in the description of the experimental setup. The exact model isn’t mentioned, but the likely candidates all seem to top out at 60 fps.
    The paper seems to describe capturing only two depth levels to stack, so I think they may be capturing only two frames. They mention that adding more depth levels is possible, but it would slow things down. Not to say you couldn’t achieve it, but the more frames you capture, the more you either lose light available to each image or lower the overall frame rate (rough numbers below).
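For what it’s worth, here is a back-of-the-envelope sketch of that frame-budget tradeoff, assuming a 60 fps sensor ceiling as discussed in the comment above:

sensor_fps = 60                       # assumed sensor ceiling
frame_period_ms = 1000 / sensor_fps   # about 16.7 ms per raw frame

for planes in (2, 3, 4):
    hologram_rate = sensor_fps / planes   # one raw frame per focal plane
    print(f"{planes} planes: ~{hologram_rate:.0f} holograms/s, "
          f"up to {frame_period_ms:.1f} ms exposure per plane")

Squeezing more planes into one frame period instead would keep the hologram rate up but cut the exposure, and therefore the light, available to each plane.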
