Christmas POV Display Makes Viewer Do The Work

Hackaday readers have certainly seen more than a few persistence of vision (POV) displays at this point, which usually take the form of a spinning LED array which needs to run up to a certain speed before the message becomes visible. The idea is that the LEDs rapidly blink out a part of the overall image, and when they get spinning fast enough your brain stitches the image together into something legible. It’s a fairly simple effect to pull off, but can look pretty neat if well executed.

But [Andy Doswell] has recently taken an interesting alternate approach to this common technique. Rather than an array of LEDs that spin or rock back and forth in front of the viewer, his version of the display doesn’t move at all. Instead it has the viewer do the work, truly making it the “Chad” of POV displays. As the viewer moves in front of the array, either on foot or in a vehicle, they’ll receive the appropriate Yuletide greeting.

In a blog post, [Andy] gives some high-level details on the build. Made up of an Arduino, eight LEDs, and the appropriate current limiting resistors on a scrap piece of perfboard, the display is stuck on his window frame so anyone passing by the house can see it.

On the software side, the code is really an exercise in minimalism. The majority of the file is taken up by static LED state values stored in an array, and the code simply loops through it, writing to PORTD to set all eight digital pins at once. The simplicity of the code is another advantage of having the meatbag human viewer figure out the appropriate movement speed on their own.
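To give a sense of what that looks like, here's a minimal sketch in the same spirit; this is not [Andy]'s actual code, and the message bitmap, column delay, and pin mapping are all assumptions for illustration:

```cpp
// One byte per display column, one bit per LED, written straight to PORTD
// (digital pins 0-7 on a classic ATmega328 board). Values below are made up.
const uint8_t message[] = {
  0xFF, 0x18, 0x18, 0xFF,  // "H": two full-height strokes and a crossbar
  0x00,                    // blank column between characters
  0x81, 0xFF, 0x81,        // "I": full column with top and bottom caps
  0x00,
};

const uint16_t COLUMN_DELAY_US = 2000;  // guess; tune until a passer-by sees text

void setup() {
  DDRD = 0xFF;  // all eight PORTD pins as outputs
}

void loop() {
  // Step through the columns forever; the viewer's own motion smears the
  // flashing column out into readable characters.
  for (uint8_t i = 0; i < sizeof(message); i++) {
    PORTD = message[i];              // set all eight LEDs in one write
    delayMicroseconds(COLUMN_DELAY_US);
  }
  PORTD = 0;                         // blank gap before the message repeats
  delay(5);
}
```

One thing to keep in mind with this approach: on an Uno or Nano, the low two bits of PORTD double as the hardware serial pins, so LEDs wired there can interfere a bit with sketch uploads.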

This isn’t the only POV display we’ve seen with an interesting “hook” recently, proving there’s still room for innovation with the technology. A POV display that fits into a pen is certainly a solid piece of engineering, and there’s little debate the Dr Strange-style spellcaster is one of the coolest things anyone has ever seen. And don’t forget Dog-POV which estimates speed of travel by persisting different images.

[Thanks to Ian for the tip.]

12 thoughts on “Christmas POV Display Makes Viewer Do The Work”

  1. Let’s do the time warp back to 1990 and remember the Private Eye display. It used this principle along with a tiny mirror to create a clear crisp image of many, many millions of dollars pouring down a drain.

  2. I think I’ve seen a similar principle used in a subway to display adverts. Instead of a single column, there are many columns, each displaying a frame of a video, which, when they pass by the windows, gives the impression of a floating video screen moving along with the train.

    I guess that if the POV display isn’t framed by a passing window, the effect is a lot better on video than in real life, because it would be hard not to unconsciously track the display and keep it in the center of your vision.

  3. How does that work? In normal POV with the LEDs moving, some kind of synchronization is accomplished using a hall sensor or similar to turn the lights on at the right time. But when the POV unit is stationary, how does it know when to present the next column of pixels? Does it require the person to move at a particular speed?

  4. Any idea what timing would be needed to make the message legible only to passing cars at 20 MPH? I am sure there is math involved. I am guessing around 7 times the speed of a walking-pace message, if you figure walking at 3 MPH.
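The back-of-the-envelope scaling here is just the speed ratio: a viewer at 20 MPH crosses the display about 20 / 3 ≈ 6.7 times faster than one walking at 3 MPH, so the per-column delay would need to shrink by roughly that factor. A rough sketch, where the walking-speed delay is a made-up placeholder rather than anything from the project:

```cpp
// Rough scaling only: assumes apparent character width is set purely by
// (viewer speed) x (per-column delay). The 10 ms walking figure is invented.
const float WALK_MPH = 3.0f;
const float CAR_MPH  = 20.0f;
const float WALK_COLUMN_DELAY_MS = 10.0f;   // hypothetical delay that reads well on foot

// A ~6.7x faster viewer needs a delay ~1/6.7 as long, i.e. roughly 1.5 ms per column.
const float CAR_COLUMN_DELAY_MS = WALK_COLUMN_DELAY_MS * (WALK_MPH / CAR_MPH);
```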

  5. I played with this for about 2 hours one night, using an Arduino Nano instead of the Mini which was used in the project, and I could not get it to display anything unless I was shaking my head like a loon. Lost interest after the headaches started.
