Bottle Rocket POV Video

It’s a holiday weekend, and much like you, we’re taking a bit of time to relax and knock back a few drinks while we mingle with friends and family. Obviously, one of the bigger events this weekend is the fireworks show put on by your city or your drunken neighbors.

Roman candle wars aside, have you ever wondered what the 4th of July looked like from the fireworks’ point of view? We did, and so did [Jeremiah Warren], who put together an awesome video showing what really happens after you light the fuse and run away like a little girl.

The dizzying video was shot using a pair of keychain cameras that he strapped directly to the rockets before launching. It’s pretty entertaining, so be sure to check it out if you have a few minutes to spare.

This probably doesn’t quite fit the criteria to be considered a hack, but with explosions and the crazy point of view video, we had to pass it along.

Continue reading “Bottle Rocket POV Video”

Amazing 3D Telepresence System

It looks like the world of Kinect hacks is about to get a bit more interesting.

While many of the Kinect-based projects we see use one or two units, this 3D telepresence system developed by UNC Chapel Hill student [Andrew Maimone] under the guidance of [Henry Fuchs] has them all beat.

The setup uses up to four Kinect sensors at a single endpoint, capturing images from various angles and running them through a series of GPU-accelerated filters that fill holes and adjust colors to build a mesh image. Once each stream has been cleaned up, the streams are overlaid with one another to form a complete 3D image.
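
For the curious, here’s a rough idea of what a pipeline like that involves. This is just a minimal single-frame sketch in Python – the focal lengths, poses, and helper names below are our own illustrative assumptions, not [Maimone]’s actual GPU code:

```python
# Illustrative sketch of a multi-Kinect merge pipeline (not the paper's code).
import numpy as np

FX = FY = 525.0          # assumed depth-camera focal lengths (pixels)
CX, CY = 319.5, 239.5    # assumed principal point for a 640x480 sensor

def fill_holes(depth):
    """Replace zero (missing) depth samples with a crude global fallback."""
    out = depth.copy()
    holes = out == 0
    if holes.any():
        out[holes] = np.median(out[~holes])
    return out

def depth_to_points(depth):
    """Back-project a depth image into a 3-D point cloud."""
    v, u = np.indices(depth.shape)
    z = depth.astype(np.float32)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.dstack((x, y, z)).reshape(-1, 3)

def merge_views(depth_frames, poses):
    """Transform each camera's cloud into a shared frame and concatenate."""
    clouds = []
    for depth, pose in zip(depth_frames, poses):
        pts = depth_to_points(fill_holes(depth))
        r, t = pose                      # 3x3 rotation, 3-vector translation
        clouds.append(pts @ r.T + t)
    return np.vstack(clouds)

# Four fake sensors with identity poses, just to show the call shape:
frames = [np.random.randint(500, 4000, (480, 640)) for _ in range(4)]
poses = [(np.eye(3), np.zeros(3))] * 4
cloud = merge_views(frames, poses)
print(cloud.shape)   # (4 * 480 * 640, 3)
```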

The result is an awesome real-time 3D rendering of the subject and surrounding room that reminds us of this papercraft costume. The 3D video can be viewed at a remote station which uses a Kinect sensor to track your eye movements, altering the video feed’s perspective accordingly. The telepresence system also offers the ability to add in non-existent objects, making it a great tool for remote technology demonstrations and the like.
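
The perspective trick is easier to picture with a toy example: move a virtual camera along with the tracked viewer’s head so the remote scene parallaxes like a window. The gain and coordinate frames here are assumptions for illustration only:

```python
# Toy head-coupled-perspective sketch; not the actual rendering code.
import numpy as np

def camera_shift(head_pos, rest_pos, gain=1.0):
    """Translate the virtual camera along with the viewer's head motion."""
    return gain * (np.asarray(head_pos) - np.asarray(rest_pos))

# Viewer leans 10 cm to the left of the screen centre:
print(camera_shift([-0.10, 0.0, 0.6], [0.0, 0.0, 0.6]))   # -> [-0.1  0.   0. ]
```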

Check out the video below to see a thorough walkthrough of this 3D telepresence system.

Continue reading “Amazing 3D Telepresence System”

Synkie: The Modular Synth For Video

The folks at [anyma] have been working on an analog video processor called Synkie for a while now, and we’re amazed a project this awesome has passed us by for so long.

Like a Moog or Doepfer synth, the Synkie was developed with modularity in mind. So far, [anyma] has built modules to split and combine the sync and video signals, and modules to invert, add, subtract, mix, filter and amplify those signals. All that processing produces an output that can look like a glitched Atari, an art installation, and a scrambled cable station all at the same time.
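
The Synkie itself is all analog circuitry, but the patching concept translates neatly to software. Here’s a little Python sketch of what chaining modules like invert, amplify, and mix might look like, with each panel modeled as a function on a scanline – purely our illustration, not anything from the project:

```python
# Software analogue of patching Synkie-style modules in series.
import numpy as np

def invert(v):
    return 1.0 - v

def amplify(gain):
    return lambda v: np.clip(gain * v, 0.0, 1.0)

def mix(a, b, bias=0.5):
    return np.clip(bias * a + (1 - bias) * b, 0.0, 1.0)

def chain(signal, *modules):
    """Run a signal through modules in order, like cables between panels."""
    for module in modules:
        signal = module(signal)
    return signal

line = np.linspace(0.0, 1.0, 640)   # one scanline of luma, 0..1
out = mix(chain(line, invert, amplify(1.8)), line)   # glitchy self-mix
print(out[:4])
```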

The Synkie’s output reminds us of the original Doctor Who title sequence, and actually this idea isn’t far off the mark – both use video feedback that will produce anything from a phantasmagoric ‘flying through space’ aesthetic to a fractal Droste effect visualization. We’re impressed with Synkie’s capabilities, but we’re astounded by the [anyma] crew’s ability to control a video signal in real time to get what they want.

Check out a video of the Synkie after the jump. There’s also more footage of the Synkie in action on the Synkie Vimeo channel.

Continue reading “Synkie: The Modular Synth For Video”

Racing Wheel Guided R/C Car With Video Feed

Instructables user [Kaeru no Ojisan] enjoys constructing R/C kit cars and wanted to build one that could be driven using a PC racing wheel he had on hand. Not satisfied with simply guiding it with the racing wheel, he added a web cam to the car so that he can monitor its location from the comfort of his desk chair.

The car is loaded down with all sorts of electronics to get the job done, requiring four separate battery packs to keep them online. An Arduino controls the motor and the steering servos, receiving its commands wirelessly via a Bluetooth add-on. The camera connects to a USB-to-Ethernet converter, which enables the car’s video feed to be transmitted via the onboard wireless router.
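
We can imagine the PC side looking something like the following sketch, which reads the wheel with pygame and pushes two command bytes over a Bluetooth serial port with pyserial. The port name, baud rate, and the 2-byte protocol are our guesses, not [Kaeru no Ojisan]’s actual interface:

```python
# Hypothetical PC-side bridge: racing wheel axes -> two bytes over serial.
import time

import pygame
import serial

pygame.init()
pygame.joystick.init()
wheel = pygame.joystick.Joystick(0)   # the racing wheel
wheel.init()

link = serial.Serial('/dev/rfcomm0', 9600)   # Bluetooth SPP port (assumed)

def to_byte(axis_value):
    """Map a -1.0..1.0 axis reading onto 0..255 for the Arduino."""
    return int((axis_value + 1.0) * 127.5)

while True:
    pygame.event.pump()                      # refresh joystick state
    steering = to_byte(wheel.get_axis(0))    # wheel rotation
    throttle = to_byte(wheel.get_axis(1))    # pedal axis
    link.write(bytes([steering, throttle]))  # 2-byte command frame (assumed)
    time.sleep(0.02)                         # ~50 updates per second
```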

The racing wheel interface seems to work just fine, though we don’t doubt that the whole setup could easily be simplified, reducing both weight and battery count. While [Kaeru no Ojisan] says that the car is still in the concept stage and there are a few bugs to work out, we think it’s a good start.

Stick around to see a quick video of the car in testing.

Continue reading “Racing Wheel Guided R/C Car With Video Feed”

Real-time Digital Puppetry

If it sometimes seems that there is only a finite number of things you can do with your kids, have you ever considered making movies? We don’t mean taking home videos – we’re talking about making actual movies where your kids can orchestrate the action and be the indirect stars of the show.

Maker [Friedrich Kirchner] has been working on MovieSandbox, an open-source real-time animation tool. A couple of years in the making, the project runs on both Windows and Mac computers (with Linux support in the works), making it accessible to just about everyone.

His most recent example of the software’s power is a simple digital puppet show, which is sure to please young and old alike. Using sock puppets fitted with special flex sensors, he is able to control his on-screen cartoon characters by simply moving his puppets’ “mouths”. An Arduino is used to pass the sensor data to his software, while also allowing him to dynamically switch camera angles with a series of buttons.
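
If you wanted to try something similar, the PC end might look like this minimal sketch, which parses hypothetical “flex,button” lines from the Arduino and turns them into a jaw angle and a camera index. The line format, serial port, and calibration values are all assumptions on our part:

```python
# Illustrative receiver for a flex-sensor puppet rig (assumed protocol).
import serial

ARDUINO = serial.Serial('/dev/ttyUSB0', 115200)   # port is an assumption
camera = 0

def mouth_angle(flex_raw, lo=200, hi=800, max_deg=45.0):
    """Map a 10-bit flex-sensor reading onto a 0..45 degree jaw angle."""
    t = min(max((flex_raw - lo) / (hi - lo), 0.0), 1.0)
    return t * max_deg

while True:
    line = ARDUINO.readline().decode(errors='ignore').strip()
    try:
        flex, button = (int(x) for x in line.split(','))
    except ValueError:
        continue                      # skip malformed serial lines
    if button:                        # button press -> next camera angle
        camera = (camera + 1) % 4
    print(f"cam {camera}: jaw {mouth_angle(flex):.1f} deg")
```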

Obviously something like this requires a bit of configuration in advance, but given a bit of time we imagine it would be pretty easy to set up a digital puppet stage that will keep your kids happily occupied for hours on end.

Continue reading to see a quick video of his sock puppet theater in action.

[via Make]

Continue reading “Real-time Digital Puppetry”

Camera Software Learns To Pick You Out Of A Crowd

While the Kinect is great at tracking gross body movements and discerning what part of a person’s skeleton is moving in front of the camera, the device most definitely has its shortcomings. For instance, facial recognition is quite limited, and we’re guessing that it couldn’t easily track an individual’s eyes as they move about a room.

No, for tracking like that, you would need something far more robust. Under the guidance of [Krystian Mikolajczyk] and [Jiri Matas], PhD student [Zdenek Kalal] has been working on a piece of software called TLD, which has some pretty amazing capabilities. The software uses almost any computer-connected camera to simultaneously Track an object, Learn its appearance, and Detect the object whenever it appears in the video stream. The software is so effective, as you can see in the video below, that it has been dubbed “Predator”.

Once he has chosen an object within the camera’s field of vision, the software monitors that object, learning more and more about how it looks under different conditions. The software’s learning abilities allow it to pick out individual facial features, follow moving objects in video, and recognize an individual’s face amid a collection of others.
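
OpenCV’s contrib modules include a reimplementation of TLD, which makes for an easy way to experiment with the idea. A minimal sketch, assuming opencv-contrib-python is installed – note this is OpenCV’s version, not [Kalal]’s original codebase:

```python
# Minimal track-learn-detect demo using OpenCV's contrib TLD tracker.
import cv2

cap = cv2.VideoCapture(0)            # any connected camera, per the article
ok, frame = cap.read()

bbox = cv2.selectROI("pick a target", frame)   # draw a box around the object
tracker = cv2.legacy.TrackerTLD_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)         # track/learn/detect each frame
    if found:
        x, y, w, h = (int(v) for v in box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("TLD", frame)
    if cv2.waitKey(1) == 27:                   # Esc quits
        break
```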

While the software can currently only track one object at a time, we imagine that with some additional development and computing horsepower, this technology will become even more amazing.

Continue reading “Camera Software Learns To Pick You Out Of A Crowd”

Super Refined Kinect Physics Demo

Since the Kinect has become so popular among hackers, [Brad Simpson] over at IDEO Labs finally purchased one for their office and immediately got to tinkering. In about 2-3 hours’ time, he put together a pretty cool physics demo showing off some of the Kinect’s abilities.

Rather than using rough skeleton measurements like most hacks we have seen, he paid careful attention to the software side of things. Starting off using the Kinect’s full resolution (something not everybody does), [Brad] took the data and manipulated it quite a bit before creating the video embedded below. Skeleton data was collected and run through several iterations of a smoothing algorithm to substantially reduce the noise surrounding the resulting outline.
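
We don’t know the exact algorithm [Brad] used, but the flavor of iterative smoothing is easy to demonstrate: blur a person/background mask a few times, re-thresholding after each pass so the outline settles down. Kernel size and pass count below are guesses, not his parameters:

```python
# Sketch of iterative outline smoothing on a depth-derived silhouette mask.
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_outline(mask, passes=4, size=5):
    """Iteratively blur and re-binarise a person/background mask."""
    m = mask.astype(np.float32)
    for _ in range(passes):
        m = uniform_filter(m, size=size)   # one smoothing iteration
        m = (m > 0.5).astype(np.float32)   # snap back to a hard outline
    return m.astype(bool)

noisy = np.random.rand(480, 640) > 0.5     # stand-in for a raw Kinect mask
print(smooth_outline(noisy).mean())
```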

The final product is quite a bit different from the Kinect videos we are used to seeing, and it drastically improves how the user is able to interact with virtual objects added to the environment. As you may have noticed, the blocks that were added to the video rarely, if ever, penetrate the outline of the individual in the movie. This isn’t due to some sort of digital trickery – [Brad] was able to prevent the intersection of different objects via his tweaking of the Kinect data feed.
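
A toy version of that non-penetration rule might look like this: before a block moves, test whether its destination overlaps the person mask, and cancel the move if it does. Purely illustrative, not IDEO’s code:

```python
# Toy non-penetration check: blocks may only move into empty mask pixels.
import numpy as np

def try_move(mask, block, dx, dy):
    """block = (x, y, w, h); return the moved block, or the original on collision."""
    x, y, w, h = block
    nx, ny = x + dx, y + dy
    if mask[ny:ny + h, nx:nx + w].any():   # would overlap the person outline
        return block
    return (nx, ny, w, h)

mask = np.zeros((480, 640), dtype=bool)
mask[:, 300:] = True                       # pretend the person fills the right half
print(try_move(mask, (100, 100, 40, 40), 250, 0))   # blocked, stays put
print(try_move(mask, (100, 100, 40, 40), 50, 0))    # allowed, moves
```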

We’re not sure how much computing power this whole setup requires, but the code is available from their Google Code repository, so we hope to see other projects refined by utilizing the techniques shown off here.

[via KinectHacks]

Continue reading “Super Refined Kinect Physics Demo”