Control Your House By Moving Your Arms Like You’re Directing Traffic

This home automation project lets you flap your arms to turn things on and off. [Toon] and [Jiang] have been working on the concept as part of their Master's thesis at university. It uses a 3D camera with some custom software to pick up your gestures. What we really like is the laser pointer which provides feedback. You can see a red dot on the wall which follows wherever he points. Each controllable device has a special area to which the dot will snap when the user is pointing close to it. By raising his other arm, the selected object can be turned on or off.
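
For a sense of how the snapping might work, here's a minimal Python sketch: once the pointing direction has been projected onto the wall as a 2D dot, jump the dot to the nearest device zone if it's within range. The device coordinates and snap radius here are made up for illustration; [Toon] and [Jiang]'s actual software is surely more sophisticated.

```python
# Hypothetical "snap to device" logic; zone positions and radius are assumptions.
import math

DEVICES = {            # pretend wall coordinates (meters) of each device's zone
    "lamp":   (0.5, 1.2),
    "stereo": (2.0, 1.0),
    "fan":    (3.1, 0.8),
}
SNAP_RADIUS = 0.3      # snap when the dot is within 30 cm of a zone center

def snap_dot(dot_x, dot_y):
    """Return the (possibly snapped) dot position and the selected device."""
    best, best_d = None, SNAP_RADIUS
    for name, (x, y) in DEVICES.items():
        d = math.hypot(dot_x - x, dot_y - y)
        if d < best_d:
            best, best_d = name, d
    if best is not None:
        return DEVICES[best], best       # dot jumps to the zone center
    return (dot_x, dot_y), None          # dot just follows the arm

# Pointing ~10 cm from the stereo's zone selects it.
pos, device = snap_dot(2.1, 1.05)
print(pos, device)   # -> (2.0, 1.0) stereo
```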

Take a look at the two videos after the break to get a good overview of the concept. We’d love to see some type of laser projector used instead of just a single dot. This way you could have a pop-up menu system. Imagine getting a virtual remote control on the wall for skipping to the next audio track, adjusting the volume, or changing the TV channel.

Continue reading “Control Your House By Moving Your Arms Like You’re Directing Traffic”

Building A Better Kinect With A… Pager Motor?

Fresh from Microsoft Research is an ingenious way to reduce interference and decrease the error in a Kinect. Bonus: the technique only requires a motor with an offset weight, or just an oversized version of the vibration motor found in a pager.

The Kinect is the first commodity 3D depth sensor of its kind, and its tracking really isn't that good. In every Kinect demo we've ever seen, there are errors in the 3D tracking or missing data in the point cloud. Shake 'n' Sense, as Microsoft Research calls the technique, does away with these problems simply by vibrating the IR projector and camera with a single motor.

In addition to getting high quality point clouds from a Kinect, this technique also allows for multiple Kinects to be used in the same room. In the video (and title pic for this post), you can see a guy walking around a room filled with beach balls in 3D, captured from an array of four Kinects.
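
Once each Kinect's pose in the room has been calibrated, fusing their frames is just a rigid transform per camera. Here's a rough numpy sketch of that merging step; the poses and clouds below are placeholders, not anything from the Microsoft Research code.

```python
# Merge point clouds from several Kinects into one room frame,
# assuming each camera's rotation R and translation t were calibrated beforehand.
import numpy as np

def merge_clouds(clouds, poses):
    """clouds: list of (N_i, 3) arrays in each camera's frame.
    poses: list of (R, t) with R a 3x3 rotation and t a 3-vector."""
    merged = []
    for pts, (R, t) in zip(clouds, poses):
        merged.append(pts @ R.T + t)   # transform into the shared room frame
    return np.vstack(merged)

# Four pretend Kinects, each contributing a small random cloud.
rng = np.random.default_rng(0)
clouds = [rng.uniform(-1, 1, size=(100, 3)) for _ in range(4)]
poses = [(np.eye(3), np.array([x, 0.0, 0.0])) for x in (0.0, 2.0, 4.0, 6.0)]
room_cloud = merge_clouds(clouds, poses)
print(room_cloud.shape)   # (400, 3)
```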

This opens the door to a whole lot of builds that were impossible with the current iteration of the Kinect, but we're thinking this is far too easy and too clever not to have been thought of before. We'd love to see some independent verification of this technique, so if you've got a Kinect project sitting around, strap a motor onto it, make a video, and send it in.

Continue reading “Building A Better Kinect With A… Pager Motor?”

Robot Trash Can Catches Anything You Throw Near It

This guy is about to toss the blue ball halfway between the bookshelf and the waste basket. By the time it gets there, the waste basket will have moved into position to catch the ball perfectly. It'll do the same for just about anything you throw.

We're unable to read the captions, but it looks like this may have been made as part of a commercial, which is shown in the first few seconds of the video after the break. From there we see the development of a locomotive mechanism which will fit into the bottom of the bin. It starts as a single swivel wheel, but gets more complicated quite quickly. Once the low-profile three-wheeler is milled and assembled, it's time to start writing the code to translate input from a Kinect 3D camera and extrapolate the position for catching the trash. The final result seems to do this perfectly.
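
We don't have the source, but the catch math is classic projectile motion: track the object for a few frames, fit a ballistic arc, and solve for where it crosses the rim of the bin. Here's a back-of-the-envelope Python version; the frame rate, axis conventions, and sample throw are all our assumptions, not the builder's code.

```python
# Fit a ballistic arc to tracked positions and predict the landing point.
import numpy as np

G = 9.81      # m/s^2
FPS = 30.0    # assumed Kinect frame rate

def predict_landing(samples, rim_height):
    """samples: (N, 3) tracked positions (x, y, z) with z up, one per frame.
    Returns the (x, y) point where the arc descends through rim_height."""
    t = np.arange(len(samples)) / FPS
    # x and y are linear in time; z is quadratic with known -g/2 curvature,
    # so adding g/2 * t^2 back makes it linear too.
    x0, vx = np.polyfit(t, samples[:, 0], 1)[::-1]
    y0, vy = np.polyfit(t, samples[:, 1], 1)[::-1]
    z0, vz = np.polyfit(t, samples[:, 2] + 0.5 * G * t**2, 1)[::-1]
    # Solve z0 + vz*t - (g/2)*t^2 = rim_height for the later (descending) root.
    t_hit = (vz + np.sqrt(vz**2 + 2 * G * (z0 - rim_height))) / G
    return x0 + vx * t_hit, y0 + vy * t_hit

# Simulated throw launched from 1.5 m up; the bin's rim sits at 0.4 m.
t = np.arange(8) / FPS
throw = np.stack([2.0 * t, 1.0 * t, 1.5 + 3.0 * t - 0.5 * G * t**2], axis=1)
print(predict_landing(throw, rim_height=0.4))   # where to drive the bin
```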

Continue reading “Robot Trash Can Catches Anything You Throw Near It”

Finger Recognition On The Kinect

The Kinect is awesome, but if you want to do anything at a higher resolution than detecting a person's limbs, you're out of luck. [Chris McCormick] over at CogniMem has a great solution to this problem: use a neural network on a chip to recognize fingers with hardware already connected to your Xbox.

The build uses the very cool CogniMem CM1K neural network on a chip trained to tell the difference between counting from one to four on a single hand, as well as an ‘a-okay’ sign, Vulcan greeting (shown above), and rocking out at a [Dio] concert. As [Chris] shows us in the video, these finger gestures can be used to draw on a screen and move objects using only an open palm and closed fist; not too far off from the Minority Report and Iron Man UIs.
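
The CM1K is, at its core, a hardware nearest-neighbor/RBF classifier, so the idea can be sketched in a few lines of software: store labeled feature vectors (say, a downsampled depth crop of the hand) and classify new frames by distance to the stored prototypes. Everything below is illustrative; the features and labels are toys, not [Chris]'s training data.

```python
# A tiny software stand-in for a nearest-neighbor classifier chip.
import numpy as np

class TinyNeuralMem:
    def __init__(self, max_dist=50.0):
        self.protos, self.labels = [], []
        self.max_dist = max_dist          # beyond this, answer "unknown"

    def learn(self, vec, label):
        self.protos.append(np.asarray(vec, dtype=float))
        self.labels.append(label)

    def classify(self, vec):
        vec = np.asarray(vec, dtype=float)
        dists = [np.abs(vec - p).sum() for p in self.protos]  # L1 distance
        i = int(np.argmin(dists))
        return self.labels[i] if dists[i] <= self.max_dist else "unknown"

# Toy 4-pixel "hand crops" standing in for real depth features.
mem = TinyNeuralMem()
mem.learn([0, 0, 9, 9], "fist")
mem.learn([9, 9, 9, 9], "open palm")
print(mem.classify([1, 0, 8, 9]))   # -> fist
```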

If you’d like to duplicate this build, we found the CM1K neural network chip available here for a bit more than we’d be willing to pay. A neural net on a chip is an exceedingly cool device, but it looks like this build will have to wait for the Kinect 2 to make it down to the consumer and hobbyist arena.

You can check out the videos of Kinect finger recognition in action after the break with World of Goo and Google Maps.

Continue reading “Finger Recognition On The Kinect”

Going To The Park With Your Augmented Reality Girlfriend

Lonely? Bored? Really into J-pop? If you’re any of these things, here’s the build for you. It’s an augmented reality system that allows you to go on a date with one of Japan’s most popular virtual singers.

The character chosen to show off this augmented reality girlfriend tech is [Hatsune Miku], a voice synthesizer personified as a doll-eyed anime avatar. [Miku] is an immensely popular character in Japan, with thousands of people going to her concerts, which made her an obvious choice for this project.

The build details for this hack are a little sparse, confounded by the horrible Google Translate results of the blog linked in the YouTube description. From what we can gather from the video and this twitter account, the build is based on an ASUS Xtion Kinect clone and a nice pair of video goggles.

We're expecting the comments on this post to fill up with 'Japan is really weird' remarks, but we can see a few very, very cool applications of this tech. For instance, think how cool it would be to be guided around a science museum by [Einstein], or around Philadelphia by [Ben Franklin].

Kinetic Space: Software For Your Kinect Projects

For all of you who find yourselves wanting to use a Kinect to control something but have no idea what to do with it, or how to get the data from it, you're in luck. Kinetic Space is a tool available for Linux/Mac/Windows that gives you the tools necessary to set up gesture controls quickly and easily. As you can see in the video below, it is fairly simple to set up. You do your action, set the amount of influence from each body part (basically telling it what to ignore), and save the gesture. This system has already been used for tons of projects and has now hit version 2.0.
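
Our guess is that those influence settings boil down to a weighted distance between the live pose sequence and the recorded one. Here's an illustrative Python sketch of that kind of matching; the joint set, weights, threshold, and simple frame-by-frame comparison are our assumptions, and Kinetic Space's actual algorithm may well differ.

```python
# Weighted comparison of a live skeleton sequence against a recorded gesture.
import numpy as np

JOINTS = ["left_hand", "right_hand", "head"]   # hypothetical subset

def gesture_distance(live, recorded, weights):
    """live, recorded: (frames, joints, 3) arrays; weights: per-joint floats.
    A weight of 0 means "ignore this body part" entirely."""
    n = min(len(live), len(recorded))
    diff = live[:n] - recorded[:n]                      # (n, joints, 3)
    per_joint = np.linalg.norm(diff, axis=2)            # (n, joints)
    return float((per_joint * np.asarray(weights)).mean())

rng = np.random.default_rng(1)
recorded = rng.normal(size=(30, 3, 3))                  # saved gesture
live = recorded + rng.normal(scale=0.05, size=recorded.shape)
w = [1.0, 1.0, 0.0]                                     # ignore the head
print(gesture_distance(live, recorded, w) < 0.2)        # True: recognized
```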

Continue reading “Kinetic Space: Software For Your Kinect Projects”

3D Mapping Of Huge Areas With A Kinect

The picture you see above isn't a dollhouse, a noclipped video game, or any other artificially created virtual environment. That bathroom exists in real life, but was digitized into a 3D object with a Kinect and Kintinuous, an awesome piece of software that allows for the creation of huge 3D environments in real time.

Kintinuous is an extension of the Kinect Fusion and ReconstructMe projects. Where Fusion and ReconstructMe were limited to mapping small areas in 3D (a tabletop, for example), Kintinuous allows a Kinect to be moved from room to room, mapping an entire environment in 3D.
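
The trick that makes room-to-room mapping possible is letting the fused volume follow the camera: voxels that fall behind it are extracted as mesh and recycled as empty space up ahead. The 1D cartoon below shows that cyclic-buffer shift; it's our sketch of the idea, not the authors' implementation, and the sizes are toy values.

```python
# Toy 1-D illustration of a fused volume sliding along with the camera.
import numpy as np

VOXELS = 8                          # tiny "volume" along the walking direction
volume = np.full(VOXELS, np.nan)    # fused values (a TSDF in the real thing)
origin = 0.0                        # world position of voxel 0
VOXEL_SIZE = 0.5                    # meters

def shift_volume(camera_pos):
    """Slide the volume so the camera stays inside; pop voxels left behind."""
    global volume, origin
    extracted = []
    while camera_pos - origin > (VOXELS // 2) * VOXEL_SIZE:
        extracted.append((origin, volume[0]))  # this slice becomes mesh
        volume = np.roll(volume, -1)
        volume[-1] = np.nan                    # fresh, empty space ahead
        origin += VOXEL_SIZE
    return extracted

volume[:] = np.arange(VOXELS)        # pretend fused data
print(shift_volume(camera_pos=3.0))  # slices handed off as the camera walks
```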

The Kintinuous paper is available, and goes over how the authors capture point cloud data and overlay the color video to create textured 3D meshes. After the break are two videos showing off what Kintinuous can do. It's jaw-dropping, and the implications are amazing. We can't find the binaries or source for Kintinuous, but if anyone finds a link, drop us a line and we'll update this post.

Continue reading “3D Mapping Of Huge Areas With A Kinect”