Spinning 3D POV Display: A High School Term Project

If you are a fan of sci-fi shows you’ll be used to volumetric 3D displays as something that’s going to be really awesome at some distant point in the future. It’s been about forty years since a virtual 3D [Princess Leia] was projected to Star Wars fans from [R2D2]’s not-quite-a-belly-button, while in the real world it’s still a technology with some way to go. We’ve seen LED cubes, spinning arrays, and lasers projected onto spinning disks, but nothing yet to give us that Wow! signaling that the technology has truly arrived.

We are starting to see these displays move from the high-end research lab into the realm of hackers and makers though, and the project we have for you here is a fantastic example. [Balduin Dettling] has created a spinning LED display using multiple sticks of addressable LEDs mounted on a rotor, and driven by a Teensy 3.1. What makes this all the more remarkable is that he’s a secondary school student at a Gymnasium school in Germany (think British grammar school or American prep school).

There are 480 LEDs in his display, and he addresses them through TLC5927 shift registers. Synchronisation is provided by a Hall-effect sensor and magnet to detect the start of each rotation, and the Teensy adjusts its pixel rate based on that timing. He’s provided extremely comprehensive documentation with code and construction details in the GitHub repository, including a whitepaper in English worth digging into. He also posted the two videos we’ve given you below the break.
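
The GitHub repository has the real firmware, but the basic idea is worth sketching. Something along the lines of the Teensy-style loop below would do the job: a Hall-effect interrupt measures the rotation period, the period is divided into angular slots, and the column bitmap for the current slot is clocked out to the LED drivers. This is purely illustrative and untested, not [Balduin]’s code; the pin numbers, column count, and single 16-bit column are assumptions made for the example, and the TLC5927 is treated here simply as a serial-data/clock/latch shift register.

```cpp
// Illustrative, untested sketch only -- not the firmware from the repository.
// Assumes a Hall-effect sensor on pin 2 and one 16-bit column of LEDs behind a
// generic serial-data/clock/latch driver; the real display's 480 LEDs would
// need a chain of these drivers rather than a single 16-bit column.
#include <Arduino.h>

const int HALL_PIN  = 2;        // hypothetical pin assignments
const int DATA_PIN  = 11;
const int CLOCK_PIN = 13;
const int LATCH_PIN = 10;
const int COLUMNS_PER_REV = 120;          // angular resolution, an assumption

const uint16_t image[COLUMNS_PER_REV] = { 0 };   // column bitmaps for one slice

volatile uint32_t revPeriodUs = 0;        // measured time for one rotation
volatile uint32_t lastPulseUs = 0;        // timestamp of the last magnet pulse

void hallIsr() {
  uint32_t now = micros();
  revPeriodUs = now - lastPulseUs;        // one magnet pulse per revolution
  lastPulseUs = now;
}

void setup() {
  pinMode(HALL_PIN, INPUT_PULLUP);
  pinMode(DATA_PIN, OUTPUT);
  pinMode(CLOCK_PIN, OUTPUT);
  pinMode(LATCH_PIN, OUTPUT);
  attachInterrupt(digitalPinToInterrupt(HALL_PIN), hallIsr, FALLING);
}

void loop() {
  if (revPeriodUs == 0) return;                         // not spinning yet
  uint32_t slotUs = revPeriodUs / COLUMNS_PER_REV;      // time per angular slot
  if (slotUs == 0) return;
  uint32_t slot = ((micros() - lastPulseUs) / slotUs) % COLUMNS_PER_REV;

  uint16_t column = image[slot];                        // bitmap for this angle

  // Clock 16 bits out to the driver, then latch them onto the LED outputs.
  digitalWrite(LATCH_PIN, LOW);
  shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, column >> 8);
  shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, column & 0xFF);
  digitalWrite(LATCH_PIN, HIGH);
}
```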

What were you building in high school? Did it involve circuit design, mechanical fabrication, firmware, and documentation? This is an impressive set of skills for such a young hacker, and it’s the type of education we like to see available to those interested in a career in engineering.

We’ve shown you one or two volumetric 3D displays before, and it would be nice to think that this one might encourage some more to be created. To pique your interest, here’s one with a spinning helix, and another similar to the one here using LED sticks.

25 thoughts on “Spinning 3D POV Display: A High School Term Project”

  1. Well I did design one in high school also…. I’ve been waiting ever since for a few-hundred-gigabit link and a GPU that could actually take advantage of it… Because gigapixels.

    1. yeah, I’ll say!
      To balance something you usually need to spin it – they have that covered. Then you need a rotational position reference – they have that covered too. Then you need some feedback that is proportional to the balance offset – my ears are doing that, so perhaps a microphone. The rest is just a blob of hot melt glue.

      Kewl to see kids doing this though. In my state they just made it compulsory for all kids to do coding, so we will go from some kids doing coding because they enjoy it to coding being universally unpopular because it’s a compulsory activity.

      1. Yup, that sounds about right. :-) I’d add using an airtight container and removing as much air as possible. That would stop a lot of the noise but might introduce thermal issues. You may as well take the air out since this design demands a sturdy enclosure for safety anyway. :-) A bell jar sounds about right but I’d want it made of lexan rather than glass. I’ve been looking for an excuse to get a good vacuum pump. :-)

  2. I’m curious as to what exactly we’re supposed to be using to drive these things once we come up with something at a decent resolution.

    Consider even just the starting-to-become-obsolete “full HD” 1920×1080 resolution. Just dragging that into three dimensions, to a resolution of 1920x1920x1080 (1920x1080x1920?) gives you an absolutely ridiculous 3,981,312,000 pixels. Close enough to 4Gpix to call it so. At 60FPS and 8-bit color (which is becoming quite middling in the days of 144Hz displays and 10-bit color), you’re looking at an insane requirement of ~2.3 TERABITS per second of throughput to drive the thing. Technically, you could do this with something like 25 channels of single-mode fiber, but the hardware requirements just to TRANSMIT that kind of data are nuts, never mind what’s needed to process or, god forbid, generate it on-the-fly.

    Even if you could reliably compress the stream to just 10% of its original size (you can’t), that’s still nuts, and that adds even more hardware on each end.

    1. Correction:

      My math was wrong. Those figures actually add up to roughly 5.7Tbps.

      And, now that I think about it, there’s also the issue that, even at an impossible 99% compression ratio, you’d fill up an entire 1TB hard drive (as if a single drive could even be read at the required 57Gbps, lol) with less than 2.5 minutes of video.
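
For anyone following the arithmetic, the figures in this sub-thread do check out under its assumptions (a 1920x1920x1080 voxel grid, 24 bits per voxel, 60 frames per second). The little program below is only a sanity check of those numbers, nothing from the article or the project:

```cpp
// Back-of-the-envelope check of the figures above; not code from the article.
#include <cstdint>
#include <cstdio>

int main() {
  const uint64_t voxels  = 1920ULL * 1920 * 1080;   // 3,981,312,000 voxels
  const uint64_t rawBps  = voxels * 24 * 60;        // 24 bits/voxel, 60 fps
  const uint64_t onePct  = rawBps / 100;            // the "99% compression" case
  const double   secsPerTB = 8.0e12 / (double)onePct;  // 1 TB = 8e12 bits

  std::printf("raw stream:      %.2f Tbps\n", rawBps / 1e12);   // ~5.73 Tbps
  std::printf("1%% of raw:       %.1f Gbps\n", onePct / 1e9);    // ~57.3 Gbps
  std::printf("1 TB drive lasts %.1f s (~%.1f min)\n", secsPerTB, secsPerTB / 60.0);
  return 0;
}
```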

    2. Actually, there would be lots of compression. Most objects are solid, and there’s also lots of empty space in most scenes. So that means there will be lots of zero pixel values. Worst case is a scene with lots of special effects such as fog and glowing light beams.

      Look at it another way. Think about a typical 3D scene projected onto a 2D screen. What’s missing to make it fully 3D? One is the depth information, and two is the collection of hidden surfaces that are behind what you see. This information is not 1000 times more than the 2D scene. It’s closer to 3x-5x.
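
To make the sparseness argument concrete, even the crudest scheme collapses a mostly-empty voxel frame. The run-length encoder below is an illustrative toy, not a proposed display format:

```cpp
// Toy run-length encoder for a flat voxel buffer; illustrative only.
#include <cstdint>
#include <utility>
#include <vector>

// Encode the buffer as (value, run length) pairs.
std::vector<std::pair<uint32_t, uint32_t>> rleEncode(const std::vector<uint32_t>& voxels) {
  std::vector<std::pair<uint32_t, uint32_t>> runs;
  for (uint32_t v : voxels) {
    if (!runs.empty() && runs.back().first == v) {
      ++runs.back().second;       // extend the current run
    } else {
      runs.push_back({v, 1u});    // start a new run
    }
  }
  return runs;
}
```

A frame that is largely zeros reduces to a handful of (value, run length) pairs, while a pathological frame in which every voxel differs from its neighbour actually grows slightly larger than the raw data, which is exactly the worst case raised in the reply below.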

      1. >worst case is a scene with

        And that’s the problem. This isn’t streaming video from a website. This is streaming video to your DISPLAY. Lossy compression at any point in the data stream is unacceptable, so you HAVE to design around the worst-case scenario. That worst-case scenario is always going to be every single voxel changing state from the last frame. Unless you’re willing to accept a lossy stream from all sources, you need the full 5.7Tbps of throughput.

        Compression doesn’t help with driving the individual elements of the display, either. No matter what you do, they will ultimately and always require the full, uncompressed video stream from whatever processor is managing the inputs and the rest of the device, one way or another.

        I suppose the voxels could somehow be driven asynchronously, which potentially alleviates a lot of that load most of the time, but, again…the worst-case dictates the design requirements, not the average case.

        1. And, geeze, now that I think about it even further, there’s another kind of color information that you’d need that isn’t even applicable to modern displays: Transparency/opacity.

          Required bandwidth for 1920x1920x1080 at 60FPS and 10-bit color/opacity totals 7,644,119,040,000bps.

          Somehow I don’t think true volumetric displays are going to be a thing for a while, between the lack of workable mechanism and insane throughput requirements.

        2. You’re thinking that updating would have to be done the same way a 2D image is, by sending every pixel. That’s not the way you’d update a sparse data set. Even doing something simple like sending out the address for each lit pixel might be more efficient than sending out every pixel.
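
That “address per lit pixel” idea amounts to something like the sketch below (again an illustrative toy, with hypothetical names). It wins whenever only a small fraction of the voxels are lit, and loses badly in the all-lit worst case argued about above:

```cpp
// Toy "send only the lit voxels" encoder: (index, value) pairs instead of a
// dense frame. Illustrative only, not a real display protocol.
#include <cstdint>
#include <vector>

struct VoxelUpdate {
  uint32_t index;   // position in the flattened voxel grid
  uint32_t value;   // colour of that voxel
};

std::vector<VoxelUpdate> sparseFrame(const std::vector<uint32_t>& voxels) {
  std::vector<VoxelUpdate> updates;
  for (uint32_t i = 0; i < voxels.size(); ++i) {
    if (voxels[i] != 0) {
      updates.push_back({i, voxels[i]});   // dark voxels are simply omitted
    }
  }
  return updates;
}
```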

        3. Regarding the “worst-case scenario”: You don’t have to make a display that can handle every possible case. It’s perfectly reasonable to make a display that can handle only a limited number of surfaces. As long as it displays what you want to display, it doesn’t matter that it can’t display a scene with every pixel a different color.

        4. I don’t know why you say compression is unacceptable. Everything is already compressed for the current generation and they’re not complaining. MP3 for audio, H.265 for free-to-air TV, etc.

          I think the solution will be to use a compression scheme that is CPU-hungry on the encoding side but makes things easier for the decoding device.

          Perhaps custom (or standardized) drivers that send HDL information to pre-configure the decoder, which would effectively be a GPU implemented in an FPGA.

          That would keep the high-bandwidth signal path short enough to actually work. Getting the data to the GPU would probably have to be optical.

          Perhaps tomorrow’s engineers will have to ‘blink that LASER LED’. lol

          1. Sure, HDMI isn’t compressed, but try to get HDMI further than halfway across your lounge room, let alone across the internet, without some form of compression along the way.

            Our current internet can’t support real-time HDMI. Our current media storage devices can’t support real-time HDMI.

            The only thing that currently supports real-time HDMI is the two-meter cable that goes between the decompression codec and the display device.

    3. Actually, there is more to consider than what I just said. The reason that volumetric displays like this aren’t the be-all of 3D displays is that they can’t control the direction that light is sent out from each point in space. This is necessary for various reasons: to be able to see the front side of something and not the back, and to be able to see reflections (and specular lighting) correctly.

      A “real” holographic display only needs to be 2D. However, each point on the display surface needs to be able to control the light emitted in all directions. In terms of data, what that means is that each point includes an image’s worth of data for all the light emitted in all directions of a hemisphere. This does imply an increase in the amount of data by orders of magnitude if you want to truly represent the behavior of light for a 3D scene.

      For now, that’s unattainable. You could perhaps compute the light for a certain subset of the directions. (The normal rendering is only for the direction pointed towards a single eye-point.)
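
To put rough numbers on that claim (the angular resolution below is our own assumption, not a figure from this thread), the sketch shows how quickly a light field outgrows both an ordinary 2D panel and the voxel estimates earlier in the comments:

```cpp
// Rough sizing of a light-field display; the 256x256 directions per point
// are an assumed figure for illustration, not something from the thread.
#include <cstdint>
#include <cstdio>

int main() {
  const uint64_t points     = 1920ULL * 1080;   // emitting points on the panel
  const uint64_t directions = 256ULL * 256;     // distinct rays per point (assumed)
  const uint64_t flatBps    = points * 24 * 60;              // ordinary 2D panel
  const uint64_t fieldBps   = points * directions * 24 * 60; // full light field

  std::printf("plain 2D panel: %.2f Gbps\n", flatBps / 1e9);   // ~2.99 Gbps
  std::printf("light field:    %.0f Tbps\n", fieldBps / 1e12); // ~196 Tbps
  return 0;
}
```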

        Actually that is the difference between real holography and what people commonly call “holograms” or “holographic” displays, which are not produced by the holographic process. Unfortunately, 99.9% of hits on Google for hologram are things that have nothing to do with holography, mostly various variants of Pepper’s ghost or some CGI “floating in the air”.

        Holograms are 2D surfaces, but they do record the entire lightfield – both the intensity and phase of the light at every point. That is sufficient to reconstruct the 3D image.

        Explanation of how this works is here:
        https://en.wikipedia.org/wiki/Holography

        And a wonderful example is here:
        https://www.youtube.com/watch?v=RrGR-f1VNHI

        Apparently the company Zebra Imaging has both holographic “printers” – capable of printing out a hologram of a 3D scene onto a plate (starting at $1 million …):
        http://www.zebraimaging.com/products/holographic-printers

        And also a holographic 3D display in development:
        http://www.zebraimaging.com/products/motion-displays

        The amount of data required to drive one of these is staggering, so don’t expect to watch movies on something like this any time soon …
