Machined Steadicam, Steadier Than The Rest

No, the picture above is not a store-bought steadicam. Rather, it’s a CNC-machined one by [Matt]. Interestingly, unlike most steadicams we’ve seen before, the gimbal is not the main focus of the design, though an aluminum machined gimbal would make us drool. The central idea is allowing for X and Y axis adjustment to find an oddly weighted, bulky camera’s exact center of gravity. [Matt’s] steadicam is also designed to handle more weight than commercial versions, and (if you already have a CNC) to be much cheaper. There’s no video, but from the quality of craftsmanship we can safely assume it’s as good and level as some of the best.
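Why the X/Y stage matters: the rig only hangs level when the combined center of gravity of the camera and counterweights sits directly beneath the gimbal. A minimal sketch of that 2-D moment balance, with entirely made-up masses and offsets (nothing here comes from [Matt]’s build):

```python
def center_of_gravity(parts):
    """Each part is (mass_kg, x_m, y_m); returns the (x, y) point the
    gimbal must sit over for the rig to hang level."""
    total = sum(m for m, _, _ in parts)
    x = sum(m * px for m, px, _ in parts) / total
    y = sum(m * py for m, _, py in parts) / total
    return x, y

# Hypothetical rig: camera body, lens, battery (numbers invented)
rig = [(1.8, 0.00, 0.00), (0.9, 0.12, 0.01), (0.4, -0.08, -0.02)]
print(center_of_gravity(rig))
```

Sliding the stage is just moving the gimbal pivot to that computed point by hand.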

Augmented Reality UAV Controller

Controlling a long-range unmanned aerial vehicle is much easier if you have an augmented reality system like the one [Fabien Blanc-Paques] built. On board the aircraft you’ll find a sensor suite and camera, both transmitting data back to the operator. As the title of this post indicates, the display the operator sees is augmented with this data, including altitude, speed, and a variety of other super-handy information. For instance, if you get disoriented during a flight there’s an arrow that points back to home. There’s also critical information like how many milliamp-hours have been used so that you can avoid running out of juice, and GPS data that can be used to locate a downed aircraft. Check out some flight video after the break.
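That point-home arrow boils down to computing the initial great-circle bearing from the aircraft’s GPS fix back to the launch point. A sketch of the standard formula (this is our illustration, not [Fabien]’s actual code):

```python
import math

def bearing_home(lat, lon, home_lat, home_lon):
    """Initial great-circle bearing, in degrees clockwise from north,
    from the aircraft's position back to the launch point."""
    phi1, phi2 = math.radians(lat), math.radians(home_lat)
    dlon = math.radians(home_lon - lon)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

# Home due east of the aircraft -> arrow points at 90 degrees:
print(bearing_home(0.0, 0.0, 0.0, 1.0))  # -> 90.0
```

Subtract the aircraft’s current heading from this and you have the angle to draw the arrow at on the overlay.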

Continue reading “Augmented Reality UAV Controller”

Stylin’ HMD

Watch out, these sunglasses are actually a head-mounted display. [Staffan] says he’s wanted dataglasses since ’95, but what’s currently out there makes the user look ridiculous, and we have to agree. While his forum posts are a little lacking in detail, he’s promised us more info soon. For now he lets us know the resolution, well, sort of: it’s either 480×1280 or 480×427×3, you can be the judge. Update: [Staffan] has clarified: “The resolution is 480*1280 true pixels. It is accomplished by spanning the screen across two Kopin CyberDisplay VGA modules.”
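For those judging at home, the two numbers describe the same panel counted two ways, assuming the clarified two-VGA-module layout:

```python
# Two Kopin CyberDisplay VGA microdisplays (640x480 each), spanned side by side:
cols = 2 * 640                 # 1280 columns
rows = 480
print(rows, "x", cols)         # the "480x1280" count, columns as true pixels

# The "480x427x3" reading instead treats each column as a single RGB
# subpixel, grouping them into color triplets:
print(cols / 3)                # roughly 427 color pixels per row
```

So 480×1280 counts every addressable column; 480×427×3 counts full-color pixels times their three subpixels.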

Regardless, [Staffan] is looking for help perfecting the glasses (with what in particular, we’re not sure), but the project looks promising and we hope he keeps up the good work.

Multi-layer Display Uses Water Instead Of Screen

This multi-layer display uses droplets of water as a projection medium. This way, several different projected areas can be seen for a not-quite-3D layering effect. The trick is in syncing up all aspects of the apparatus. There are three manifolds, each with 50 stainless steel needles for water drop production. A solenoid valve actuates the drops, a camera images them mid-air, and a computer syncs the images of the dots with a projector. In the video after the break you can see the SIGGRAPH 2010 presentation that includes a description of the process as well as action shots including a 3-layer version of Tetris.
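The timing problem is concrete: once a solenoid releases a drop, it accelerates under gravity, so the projector has to light the pixel for that drop at exactly the moment it passes the projection height. A back-of-the-envelope sketch of that latency budget (the 20 cm figure is ours, not from the paper):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(drop_height_m):
    """Time for a drop released at rest to fall a given distance --
    the deadline for projecting onto it at that height."""
    return math.sqrt(2 * drop_height_m / G)

# A drop 20 cm below its needle (illustrative number):
print(fall_time(0.20))  # a couple hundred milliseconds
```

The camera in the loop exists to correct this open-loop estimate, since real drops don’t detach at perfectly repeatable times.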

Continue reading “Multi-layer Display Uses Water Instead Of Screen”

Now You See Me, Now You Don’t, Face Detection Scripts

Straight out of Ghost in the Shell, the Laughing Man makes his appearance in these security camera shots. [William Riggins] wrote in to let us know about his team’s Famicam scripts. After taking a screenshot, faces are detected and counted, ‘anonymized’, and the final image is uploaded to Twitter.

The process is rather simple, and sure beats wearing a bunch of white reflective camouflage. All that’s left is detecting specific faces to make anonymous, and of course uploading the script to every camera in the world. Easy, right?

Unwrapping 360 Degree Video

[Golan Levin] found a way to unwrap the 360 degree images he created with his camera. He’s using a Sony Bloggie HD camera which comes with a 360 degree attachment for the lens. This produces a donut shaped image (seen in the upper left) that was not all that palatable to [Golan]. He used Processing and openFrameworks to create a program that lets him unwrap the donut into a flat image, or create a ring of video where the viewer is at the center and can scroll left or right to see the rest of the filmed environment. He released the source so you can adapt the program if you’re using a different 360 video setup.
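The unwrap itself is a polar-to-Cartesian remap: each output column is an angle around the donut, each output row a radius between the inner and outer rings. A nearest-neighbour sketch of that mapping (a sketch of the technique, not [Golan]’s Processing/openFrameworks code):

```python
import numpy as np

def unwrap_donut(img, cx, cy, r_in, r_out, out_w=720):
    """Flatten a donut-shaped mirror image into a panorama by sampling
    along rays from (cx, cy): column = angle, row = radius."""
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    r = np.arange(r_in, r_out)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    # Convert each (radius, angle) output pixel to a source coordinate
    xs = (cx + rr * np.cos(tt)).astype(int).clip(0, img.shape[1] - 1)
    ys = (cy + rr * np.sin(tt)).astype(int).clip(0, img.shape[0] - 1)
    return img[ys, xs]
```

The scrollable-ring view is the same lookup with the angle range offset by the viewer’s scroll position; bilinear interpolation instead of `astype(int)` would smooth the result.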

[Thanks Kyle]

Human Tetris: Object Tracking On An 8-bit Microcontroller

Elaborating on an item previously mentioned among last weekend’s Cornell final projects list, this time with video:

For their ECE final project, [Adam Papamarcos] and [Kerran Flanagan] implemented a real-time video object tracking system centered around an ATmega644 8-bit microcontroller. Their board ingests an NTSC video camera feed, samples frames at a coarse 39×60 pixel resolution (sufficient for simple games), processes the input to recognize objects and then drives a TV output using the OSD display chip from a video camera (this chip also recognizes the horizontal and vertical sync pulses from the input video signal, which the CPU uses to synchronize the digitizing step). Pretty amazing work all around.
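At 39×60, per-frame object tracking can be as simple as thresholding and taking a centroid in integer math, which is about all an 8-bit part has cycles for between sync pulses. A toy sketch of that idea in Python (illustrative only, not the ATmega644 firmware):

```python
def track_brightest(frame, threshold=128):
    """Centroid of above-threshold pixels in a coarse grayscale frame
    (a list of rows); returns (x, y) or None if nothing is bright."""
    xs = ys = count = 0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v >= threshold:
                xs += x
                ys += y
                count += 1
    if count == 0:
        return None
    return xs // count, ys // count  # integer division, MCU-friendly

# Tiny synthetic frame with one bright 2x2 blob:
frame = [[0] * 8 for _ in range(6)]
for y in (2, 3):
    for x in (4, 5):
        frame[y][x] = 200
print(track_brightest(frame))  # -> (4, 2)
```

On the real hardware the accumulation would happen scanline by scanline as the video is digitized, so a full frame never needs to sit in RAM at once.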

Sometimes clever projects online are scant on information…but as this is their final grade, they’ve left no detail to speculation. Along with a great explanation of the system and its specific challenges, there’s complete source code, schematics, a parts list, the whole nine yards. Come on, guys! You’re making the rest of us look bad… Videos after the break…

[G’day Bruce]

Continue reading “Human Tetris: Object Tracking On An 8-bit Microcontroller”