[Dick Tracy]’s Watch Gets Night Vision

After getting his hands on an Android-enabled wristwatch, [Paul] wanted to test the limits of his new hardware. We’ll assume he’s happy with his purchase because his finished build sends data from a Microsoft Kinect to his wristwatch, making it a night vision spy watch.

[Paul]’s new toy is a WIMM One Android wristwatch that comes complete with Wi-Fi and a copy of Android 2.1. To give the watch night vision, a Kinect on [Paul]’s desk gathers depth data, processes it with OpenCV, and streams the result to the watch. The upshot is a camera that can see in the dark and report what it sees to [Paul]’s wrist.

Whenever an intruder’s movement is detected, [Paul]’s watch vibrates and displays the depth image from the Kinect. If the intruder gets close enough, the Kinect picks up their face and sends that to the watch as well. To chase the intruder out of the room, [Paul] can tap the face of his watch to sound a remote alarm. It’s a very neat project that would have been unimaginable a few years ago.
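The write-up doesn’t go into the code, but the motion-detection half of a build like this boils down to comparing successive depth frames and flagging when enough pixels have changed. Here’s a minimal Python sketch – the function name and thresholds are our own invention, and a real build would pull frames from the Kinect with a library like libfreenect:

```python
def detect_motion(prev_frame, curr_frame, depth_delta=100, min_changed=50):
    """Compare two depth frames (2-D lists of millimeter values) and
    report motion when enough pixels changed by more than depth_delta."""
    changed = 0
    for prev_row, curr_row in zip(prev_frame, curr_frame):
        for p, c in zip(prev_row, curr_row):
            if abs(p - c) > depth_delta:
                changed += 1
    return changed >= min_changed
```

Because the Kinect’s depth values come from its own infrared projector, a check like this keeps working in a pitch-black room – which is the whole point of a night vision spy watch.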

Replacing A Phantom Limb With A Kinect

Nearly everyone has heard of phantom limb syndrome. It sometimes occurs after a limb is amputated: the patient’s mind still believes the limb is attached. Generally regarded as a mix-up in the wiring of the damaged nerves, a phantom limb can be very painful. [Ben] has been working on a way to alleviate some of the pain and frustration associated with a phantom limb, and fortunately for us he went for a Kinect, VR goggles, and gyroscope build.

Today, most therapies for phantom limb syndrome use a Ramachandran Mirror Box. The theory behind the mirror box is pretty simple – if someone has recently lost a hand, they insert the intact hand into one side of the box and the arm stump into the other. A mirror down the middle reflects the intact hand, so looking into the box from the good side tricks the patient’s brain into seeing the amputated hand still in place. It’s a therapy that has been very successful, but [Ben] thought he could do something a little more immersive.

[Ben]’s project uses a Kinect and VR goggles to put the patient in a virtual environment. With the help of a few gyroscopes, the patient sees a virtual representation of their whole self projected into the goggles. The technique isn’t terribly different from VR phobia treatment, although there’s a lot more electronics and math involved in [Ben]’s build. The first test subject said his pain was going down, so it looks like [Ben] might have a success on his hands (no pun intended).

Check out the demos of [Ben]’s treatment plan after the break.

Continue reading “Replacing A Phantom Limb With A Kinect”

Kinect For Windows Released

Even though we’ve seen dozens of Kinect hacks over the years, there are a few problems with the Kinect hardware itself. The range of the Kinect sensor starts at three feet, a fact not conducive to 3D scanner builds. It’s also not possible to connect more than one Kinect to a single computer – and lifting that restriction would open the door to builds we can barely imagine right now.

Fear not, because Microsoft just released the Kinect for Windows. Basically, it’s designed expressly for hacking. The Kinect for Windows can reliably ‘see’ objects as close as 40 cm (16 in), and up to four of them can be connected to the same computer.

Microsoft set the price of the Kinect for Windows at $250. That’s a deal breaker for us – a new Kinect for Xbox sells for around half that. If you’re able to convince Microsoft you’re a student, the price comes down to $150, which isn’t too shabby compared to a new Xbox Kinect.

We expect most of the builders out there have already picked up a Kinect or two from their local Craigslist or GameStop. If you haven’t (and have the all-important educational discount), this might be the one to buy.

Real-time Depth Smoothing For The Kinect

[Karl] set out to improve the depth image that the Kinect camera is able to feed into a computer. He’s come up with a pre-processing package which smooths the depth data in real-time.

There are a few problems here. The Kinect has a fairly low resolution, and its depth sensing is limited to a range of about 8 meters from the device (an issue we hadn’t considered when looking at Kinect-based mapping solutions). But those shortcomings can be mitigated by improving the data it does collect. [Karl]’s approach is twofold: pixel filtering, and averaging of movement.

The pixel filtering works on the depth data to help clarify the outlines of objects. A weighted moving average is then used to reduce the flickering that shows up from frame to frame. [Karl] included a nice GUI with the code that lets you tweak the filter settings until they’re just right. See a demo of that interface in the clip after the break, and let us know what you might use this for by leaving a comment.
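The averaging half of the approach is easy to sketch: keep the last few frames around and blend them, weighting newer frames more heavily so motion isn’t smeared out entirely. The Python below is our own illustration – the class name, window size, and linear weighting are assumptions, not necessarily what [Karl]’s code does:

```python
from collections import deque

class DepthSmoother:
    """Temporal smoothing: a weighted moving average over the last N
    depth frames, with newer frames weighted more heavily."""

    def __init__(self, window=4):
        self.frames = deque(maxlen=window)

    def smooth(self, frame):
        self.frames.append(frame)
        # Linear weights: oldest frame gets 1, newest gets len(frames)
        weights = list(range(1, len(self.frames) + 1))
        total = sum(weights)
        rows, cols = len(frame), len(frame[0])
        out = [[0.0] * cols for _ in range(rows)]
        for w, f in zip(weights, self.frames):
            for r in range(rows):
                for c in range(cols):
                    out[r][c] += w * f[r][c]
        return [[v / total for v in row] for row in out]
```

The pixel-filtering step would run before this, typically replacing the Kinect’s zero-valued “no reading” pixels with a statistic of their neighbors.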

Continue reading “Real-time Depth Smoothing For The Kinect”

Control Android With A Projector And Kinect

If you’re going to build a giant touch screen, why not use an OS that is designed for touch interfaces, like Android? [Colin] had the same idea, so he connected his phone to a projector and a Kinect.

Video is carried from [Colin]’s Galaxy Nexus to the projector via an MHL connection. Getting the Kinect to work was a little more challenging, though. The Kinect is connected to a PC running Simple Kinect Touch. The PC converts the data from the Kinect into TUIO commands that are received using TUIO for Android.
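TUIO is just a convention on top of OSC: the tracker broadcasts UDP packets (port 3333 by default) describing cursors in normalized 0–1 coordinates. A real tracker like Simple Kinect Touch sends full bundles of alive/set/fseq messages; as a sketch of the wire format, here’s how a single TUIO 1.1 2D-cursor ‘set’ message can be encoded with nothing but the Python standard library:

```python
import struct

def _osc_string(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def tuio_2dcur_set(session_id, x, y, vx=0.0, vy=0.0, accel=0.0):
    """Encode a TUIO 1.1 /tuio/2Dcur 'set' message as raw OSC bytes.
    x and y are normalized to [0, 1]; vx/vy/accel are motion terms."""
    msg = _osc_string("/tuio/2Dcur")      # OSC address pattern
    msg += _osc_string(",sifffff")        # type tags: string, int, 5 floats
    msg += _osc_string("set")
    msg += struct.pack(">i", session_id)  # OSC is big-endian
    msg += struct.pack(">fffff", x, y, vx, vy, accel)
    return msg
```

The resulting packet is what travels over the Wi-Fi link to TUIO for Android – one `sendto()` on a UDP socket per update.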

In order for the TUIO commands to be recognized as user input, [Colin] had to compile his own version of Android. It was a lot of work, but starting with an OS designed for touch interfaces seems much better than all the other touch screen hacks that build everything from the ground up.

You can check out [Colin]’s demo after the break. Sadly, there are no Angry Birds.

Continue reading “Control Android With A Projector And Kinect”

Nice Shoes, Wanna Recognize Some Input?

Even though giant multitouch display tables have been around for a few years now, we have yet to see them used in the wild. While the barrier to entry for a Microsoft Surface is very high, one of the biggest problems in implementing a touch table is interaction: how exactly should the display sort out multiple commands from multiple users? [Stephan], [Christian], and [Patrick] came up with an interesting way to tell who is touching where – have a computer look at shoes.

The system uses a Kinect mounted on the edge of a table to extract users from depth images. From there, each interaction on the display can be pinned to a specific user based on hand and arm orientation. As an added bonus, the computer can also identify users by their shoes: anyone wearing a pair of shoes the computer has seen before can simply walk up to the table and be recognized automatically.
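The write-up doesn’t include code, but once the depth image has yielded a hand position for each user, pinning a touch to a user reduces to a geometry problem: whose hand is closest to the touch point? A toy version of that step, with made-up names and normalized table coordinates (the real system also uses arm orientation, which this ignores):

```python
import math

def assign_touch(touch, users):
    """Pin a touch point to a user by choosing the closest hand.
    `touch` is an (x, y) point; `users` maps a user id to that
    user's current hand position, also (x, y)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(users, key=lambda name: dist(touch, users[name]))
```

With the touch attributed, the shoe-recognition result supplies the identity behind the user id, so the table knows not just *that* it was touched but *who* touched it.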

Continue reading “Nice Shoes, Wanna Recognize Some Input?”

Augmented Reality Ex Nihilo

[David] sent in a nice project that demonstrates augmented reality with ARToolKit and discusses the deep philosophical underpinnings of the meaning of nothingness. The good news is he was able to create a volume control on a sheet of paper with a marker. The bad news is the philosophical treatment is a bit weak; still, [David] built something cool, so we’re able to let that slide for now.

This build was inspired by the Impromptu Sound Board made using a Kinect and a piece of paper. The idea behind the sound board is simple – draw some buttons on the paper, and use them to play short sound clips. [David] took this idea to make a small tutorial on augmented reality for Occam’s Razor.

The hardware is very simple – just a webcam, a piece of paper, and a marker. After [David] draws a large square on the paper, the code recognizes it as a volume control. Rotating the paper counterclockwise turns the volume up, and rotating it clockwise turns it down. It’s a neat build for getting into the foundations of augmented reality.
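The rotation-to-volume step usually works like this: take the detected square’s corners, measure the angle of one edge with atan2, and feed the frame-to-frame change in that angle into the volume. A rough Python sketch – the function names, gain, and clamping are our own assumptions, not [David]’s code:

```python
import math

def marker_angle(corners):
    """Rotation of a detected square, in degrees, taken from its
    first edge. `corners` lists the four corner points in order."""
    (x0, y0), (x1, y1) = corners[0], corners[1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))

def update_volume(volume, prev_angle, angle, gain=0.5):
    """Map counterclockwise rotation to volume up. In image coordinates
    (y pointing down) a counterclockwise turn on screen shows up as a
    decreasing atan2 angle, hence the subtraction below."""
    delta = angle - prev_angle
    # Unwrap jumps across the +/-180 degree seam
    if delta > 180:
        delta -= 360
    if delta < -180:
        delta += 360
    return max(0.0, min(100.0, volume - gain * delta))
```

The marker detection itself is what ARToolKit (or a square-finding pass in OpenCV) provides; everything after that is this handful of arithmetic per frame.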

Check out the video demo of [David]’s build after the break.

Continue reading “Augmented Reality Ex Nihilo”