Giant POV tube for light painting

[Image: writing in the night with the LightScythe]

When you really want your feelings known, we always say that bigger is better. [Gavin Smith, aka The Mechatronics Guy] must come from the same school of thought, because there’s absolutely no mistaking what he is trying to say with his latest project.

Inspired by this WiFi signal painter we featured a while back, the LightScythe is a 2-meter-long bar built from multi-color LED strips he bought from Adafruit. The light bar is controlled by a Seeeduino microcontroller board, which takes direction from his laptop via a pair of XBee units. Once he generates an image from text with ImageMagick, a Python script maps the colors as closely as possible into the LED strip’s RGB color space. The image is then converted to raw serial data for playback on the Scythe. When he is ready to go, he triggers his camera to take a 10-15 second exposure, during which he walks across the frame, painting his images with the LightScythe.
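
[Gavin]’s scripts aren’t reproduced in the post, but the gist of the pipeline looks something like the Python sketch below. The LED count, serial port, framing byte, and playback pacing are all our guesses for illustration, not his actual protocol:

    # Rough sketch of the text-to-serial pipeline (not the project's actual code).
    # Assumes ImageMagick has already rendered the text to banner.png, e.g.:
    #   convert -background black -fill red -pointsize 64 label:"HACK A DAY" banner.png
    import time
    import serial                     # pyserial
    from PIL import Image

    STRIP_LEDS = 64                   # LEDs on the bar: a guess
    PORT = "/dev/ttyUSB0"             # laptop end of the XBee link: a guess

    img = Image.open("banner.png").convert("RGB")
    # Scale the image so its height matches the number of LEDs on the bar.
    img = img.resize((img.width * STRIP_LEDS // img.height, STRIP_LEDS))

    link = serial.Serial(PORT, 57600)
    for x in range(img.width):
        column = bytearray()          # one vertical image column = one strip frame
        for y in range(STRIP_LEDS):
            r, g, b = img.getpixel((x, y))
            column += bytes((r, g, b))
        link.write(b"\xFF" + column)  # 0xFF start-of-frame marker: invented
        time.sleep(0.02)              # pacing so columns spread out as you walk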

We always enjoy seeing creative derivations of previous projects we have covered, and the LightScythe does it well. He actually built a pair of these that can work in concert or independently, which we imagine can make for some pretty awesome pictures.

Be sure to check out his Flickr photostream for more examples of what the LightScythe can do.

Comments

  1. ehntoo says:

    You may consider this pedantic, but as with the “POV dog” story from a short while back, this is not a persistence of vision device. Your eyes alone would never register anything like this effect.

    C’mon, guys.

  2. TheCreator says:

    I wonder how long it takes him to get that perfect picture. There are no accelerometers to measure the movement like a traditional POV device has. He says:

    “The key to shots is repeatable starting, and being able to pace out steps regularly”

    So I assume he has to master the speed of his walking to get the lights to change color at just the right time for the exposure to come out correctly.

    He could possibly use some accelerometers to detect the footsteps, or even something like a static ping/laser proximity sensor off-frame that can calculate the speed at which you are approaching it.

    Otherwise very cool project, I am a big fan of time lapse photography.

  3. Scanners w/o boxes says:

    OK, so all that’s missing to completely automate this is either a way to spin the 2m rod, or a free-standing “cylon eye” hack with a script tie-in, perhaps using CHDK ( http://en.wikipedia.org/wiki/DIGIC ) and a Canon camera.
    I guess the high-value hack, raising the bar so to speak, would be to get the 2m bar to high enough quality (pre/post-processing, perhaps) that one could “scan the skyline” in behind the object in a photo.

  4. steve says:

    I see no point. While small POVs can be used for a visual effect, this thing is so big that its only use is in photos. And there one could just use Photoshop. Therefore: fail

  5. Scanners w/o boxes says:

    > this thing is so big that its only use is in photos.
    > And there one could just use Photoshop.
    > Therefore: fail

    On the website, it is noted that you can chain these, so in theory one could construct one’s own portable LED display (with enough 2m rods).

  6. xeracy says:

    @steve –

    “this thing is so big that its only use is in photos” — yup, that’s what he says it’s for!

    “And there one could just use Photoshop” — I’m sorry, was that an airplane or just the point that flew over your head? *wooooooossssshhhh*

    “Therefore: fail” — no… just no…

    Let’s see one of your projects…

  7. TheCreator says:

    Do you have some example photoshops, Steve, that offer transparent text or images while also providing dynamic lighting translation onto nearby physical objects?

    I’ll just let you work on that for a couple of hours and we will see how the results play out for you.

  8. birdmun says:

    I wonder how long it is before he is approached by ad agencies.

  9. steve says:

    > transparent text or images while also providing
    > dynamic lighting translation onto nearby physical objects

    Oh yeah, right, let’s do it for the “lighting translation onto nearby physical objects”. lulz

  10. Gav says:

    @TheCreator,

    Good point. My original design was going to have everything contained in the staff, with accelerometers included. Along the way, however, the hardware started getting too big, so I offloaded it into a lasercut wooden box slung over my shoulder. Since that doesn’t move with the staff, I left the sensors out.

    The main issue is remotely starting the text when you’re in a good position. Now the process is pretty easy: set the camera on a 10-second timer, push the button, and get into the shot. When you see the shutter open, start moving and press the ‘go’ button on the scythe control. The trick is walking steadily and predictably so that you end up at the other end of the frame when you ‘run out’ of text. The text seems to look good no matter what speed you walk at; it’s just a matter of fitting it nicely in the camera frame.

    I was considering using IR LEDs and WiiMotes, or similar, to track the angular position of the walker and adjust the scythe output accordingly, but it’s a lot of extra effort without much return.

    Also, I found that in the field there was a lot of radio noise, so we had comms dropouts and the scythe freezing. I’d hate to introduce another dependency on a communications medium :)

  11. xeracy says:

    @steve – really? You are the most short-sighted commenter I’ve ever seen post on HaD… Troll much?

  12. Ryan says:

    As has been said, a photoshop would be nontrivial in the cases where this would be used.

    TL;DR steve got owned.

  13. hammy says:

    Fantastic! I’m studying mechatronic engineering literally 7km from where the main picture was taken!

  14. Gav says:

    @Hammy, you should come by and check us out, then :)

    http://robodino.org/

  15. Mad Max says:

    I wonder if the two ends of the scythe could be “marked”, either with IR LEDs not visible to the camera or with some sort of non-luminous (but distinctively shaped) tips. A webcam attached to the laptop (located right beside the real photo camera) could then look for them, calculate what the scythe should be displaying along the line that joins the identified tips, and send that over to the scythe in real time, making the painted image independent of the actual motion of painting.

    I’m pretty sure the radio link would not prevent this – at the short distances we’re talking about here, a properly implemented RF link has to be able to handle pretty much any street-level interference.
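
    In Python with OpenCV (4.x), the tracking half of that idea might look roughly like the sketch below. The brightness threshold, LED count, and the send_to_scythe() hook are all guesses for illustration:

        # Find the two bright scythe tips in the webcam frame, then sample the
        # target image along the line joining them and ship that to the scythe.
        import cv2
        import numpy as np

        target = cv2.imread("banner.png")          # image we want painted (hypothetical file)
        cap = cv2.VideoCapture(0)                  # webcam beside the still camera

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # IR LEDs tend to show up as the brightest blobs on most webcams.
            _, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if len(contours) < 2:
                continue                           # need both tips in view
            tips = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
            (x0, y0), (x1, y1) = [c.mean(axis=0).ravel() for c in tips]
            # Sample the target image along the tip-to-tip line, one pixel per LED.
            n = 64                                 # LEDs on the bar: a guess
            xs = np.linspace(x0, x1, n) * target.shape[1] / frame.shape[1]
            ys = np.linspace(y0, y1, n) * target.shape[0] / frame.shape[0]
            column = target[ys.astype(int).clip(0, target.shape[0] - 1),
                            xs.astype(int).clip(0, target.shape[1] - 1)]
            # send_to_scythe(column)  # hypothetical radio send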

  16. twopartepoxy says:

    While I think that Steve’s verdict is probably a bit on the harsh side, and the device is quite interesting and the effort should be applauded (of course), I think that most of the examples shown could actually be simulated with Photoshop-type software quite easily, as the reflected light is a ‘straightforward’ reflection onto a uniform planar surface. This is a case of cut/paste/skew etc.; not totally straightforward, but not rocket science. What would be good, though, is to see the light reflected onto something with a bit of shape to it (maybe those examples are there but I didn’t see them), stuff that would be hard to simulate.

  17. MrX says:

    @TheCreator
    Err.. Accelerometers track err.. acceleration. You can’t know walking velocity with just an accelerometer…

  18. MrX says:

    People, think! Accelerometers (alone) are useless in this case. They are only useful in small persistence-of-vision devices that need to be shaken to produce the effect. In that case, the accelerometers are there to detect the shaking frequency! So I believe that is where all of the confusion is coming from.

  19. Franklyn says:

    Well, you don’t need to spin it; you could just swing it like a pendulum. I guess you would definitely need the accelerometers then, though.

  20. Franklyn says:

    Reconstructing the image above in Photoshop would actually be quite trivial. You mirror the text, blur it, and apply it as an overlay on the ground below it to simulate light bounce. It would get slightly more complex from different angles and with more objects around, but it’s definitely doable.
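
    For what it’s worth, that same mirror/blur/overlay trick is only a few lines in Python with Pillow. A minimal sketch (file names, paste positions, and the fade factor below are made up for illustration):

        # Fake the LightScythe ground reflection with Pillow.
        from PIL import Image, ImageFilter, ImageOps

        shot = Image.open("night_scene.png").convert("RGB")    # base photo (hypothetical)
        text = Image.open("text_layer.png").convert("RGBA")    # glowing-text layer (hypothetical)

        # Mirror the text vertically and soften it to imitate the bounce off the ground.
        bounce = ImageOps.flip(text).filter(ImageFilter.GaussianBlur(radius=8))
        bounce.putalpha(bounce.getchannel("A").point(lambda a: a // 3))  # fade the reflection

        shot.paste(text, (100, 50), text)                      # positions are guesses
        shot.paste(bounce, (100, 50 + text.height), bounce)    # reflection directly below
        shot.save("faked_lightscythe.png")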

  21. Franklyn says:

    BUT THAT DOESN’T TAKE AWAY FROM HOW AWESOME THIS IS!

  22. Scanners w/o boxes says:

    > The main issue is remotely starting the text
    > when you’re in a good position. Now the process
    > is pretty easy: set the camera on a 10-second
    > timer, push the button, and get into the shot.

    Hence the prior suggestion of using CHDK.
    Define a script to look for motion in a given area of the photo to trigger the “10 second timer” && use the PTP protocol to start the “rod”. Ideally, one could also set up a grid to look for the “rod” endpoint colors, so that each advance of the rod in the “motion grid” updates the rod. Not as “complex” as the Wii approach, but it does limit the camera to a Canon && it’s a lot easier to program (PTP notwithstanding).
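
    Swapping CHDK for a generic laptop-side trigger, the motion-watch part could be sketched in Python with OpenCV along these lines (the ROI size and thresholds are arbitrary, and trigger_camera()/start_scythe() are hypothetical hooks, not real APIs):

        # Watch the left edge of the webcam frame and fire when the walker enters.
        import cv2

        cap = cv2.VideoCapture(0)
        ok, prev = cap.read()                            # assume the camera opened
        prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            diff = cv2.absdiff(gray, prev)               # frame-to-frame difference
            prev = gray
            roi = diff[:, : diff.shape[1] // 8]          # left-edge strip only
            _, moving = cv2.threshold(roi, 25, 255, cv2.THRESH_BINARY)
            if cv2.countNonZero(moving) > 500:           # enough changed pixels
                print("motion at frame edge")
                # trigger_camera(); start_scythe()       # hypothetical hooks
                break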

  23. Scanners w/o boxes says:

    A more general-use suggestion would be to use PiDiP (ydegoyon.free.fr/pidip.html) in place of CHDK.

  24. facefart says:

    CHDK is great but worthless if he happens to use a Nikon.

  25. reboots says:

    I see comments arguing about minor photography details, but nobody, including the original author, thought to hang this thing off a car or bicycle for an epic POV assault? Come on, people!

  26. TheCreator says:

    @MrX

    I didn’t say that you could get walking velocity from the accelerometers. I said you can use them to detect the footsteps.

    I’m sure it would be a pain in the ass. However, you could calculate the average distance between steps and use this as a “checkpoint”: take your distance so far and compare it to the estimated distance needed to complete the output through the staff. Obviously, other things would come into play, like your distance from the camera.

    The other idea was to use laser/ping proximity sensors. Set one just outside the capture area of the camera and read in the distance at the starting point (left side of frame). As you walk, the proximity sensor updates your distance, and you can set the refresh rate of the staff so the display ends at a given distance from the left side of the captured image.
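
    That distance-to-column mapping is only a few lines. A minimal Python sketch, assuming the rangefinder reports distances as text over serial (the protocol, port, frame distances, and the show_column() hook are all invented for illustration):

        # Map a measured walking distance onto which image column the staff shows.
        import serial  # pyserial

        FRAME_START_M = 1.0    # distance when the walker enters the left edge of frame (guess)
        FRAME_END_M = 8.0      # distance at the right edge of frame (guess)
        N_COLUMNS = 400        # width of the rendered text image (guess)

        sensor = serial.Serial("/dev/ttyUSB1", 9600)   # hypothetical rangefinder link

        while True:
            reading = sensor.readline()                # e.g. b"3.42\n" metres (invented format)
            try:
                dist = float(reading)
            except ValueError:
                continue                               # skip garbled readings
            frac = (dist - FRAME_START_M) / (FRAME_END_M - FRAME_START_M)
            col = int(max(0.0, min(1.0, frac)) * (N_COLUMNS - 1))
            # show_column(col)  # hypothetical call into the playback code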

  27. TxPilot says:

    Very nice work. Seems a little more complex than the one I made about a year and a half ago using an Arduino and the same HL1606 strip.

  28. bigbrother says:

    Actually, the light bar is controlled by a Seeeduino board; note that there are three “e”s in there, not “Seeduino” with only two.

  29. wardy says:

    They should attach these things to all racing cars and have them paint patterns all over the race track…

    At racing car speeds the human eye WOULD register a persistent image, at least briefly. That would be awesome to watch. Advertising might become fun to look at!
