Building a better Kinect with a… pager motor?

Fresh from Microsoft Research is an ingenious way to reduce interference and cut down the depth error in a Kinect. Bonus: the technique only requires a motor with an offset weight, basically an oversized version of the vibration motor found in a pager.

As the first commodity 3D depth sensor of its kind, the Kinect's tracking really isn't that good. In every Kinect demo we've ever seen, there are errors in the 3D tracking or missing data in the point cloud. Shake 'n' Sense, as Microsoft Research calls the technique, does away with these problems simply by vibrating the IR projector and camera with a single motor.

In addition to getting high quality point clouds from a Kinect, this technique also allows for multiple Kinects to be used in the same room. In the video (and title pic for this post), you can see a guy walking around a room filled with beach balls in 3D, captured from an array of four Kinects.

This opens the door to a whole lot of builds that were impossible with the current iteration of the Kinect, but we're thinking this is far too easy and too clever not to have been thought of before. We'd love to see some independent verification of this technique, so if you've got a Kinect project sitting around, strap a motor onto it, make a video, and send it in.
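If you do strap a motor onto yours, here's one quick way to put a number on the improvement. This is only a rough sketch of our own (it assumes the libfreenect Python wrapper and numpy, neither of which the research uses): count how many pixels in the raw depth frame come back as "no reading" with the motor off, then again with it spinning.

```python
# Quick-and-dirty depth quality check for a Shake 'n' Sense experiment:
# the fraction of invalid depth pixels should drop once the Kinect vibrates.
# Assumes the libfreenect Python wrapper ("freenect") and numpy.
import freenect
import numpy as np

INVALID = 2047  # raw 11-bit value the Kinect reports for "no depth reading"

def missing_fraction(frames=30):
    """Average the fraction of invalid depth pixels over a few frames."""
    fractions = []
    for _ in range(frames):
        depth, _timestamp = freenect.sync_get_depth()  # 480x640 uint16 array
        fractions.append(np.mean(depth == INVALID))
    return float(np.mean(fractions))

if __name__ == '__main__':
    print("Fraction of missing depth pixels: %.3f" % missing_fraction())
```

Run it once with everything rigid and once with the motor buzzing; if the technique holds up, the second number should be noticeably lower.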

[Read more...]

Robot trash can catches anything you throw near it

This guy is about to toss the blue ball halfway between the bookshelf and the wastebasket. By the time it gets there, the wastebasket will have moved into position to catch it perfectly. It'll do the same for just about anything you throw.

We’re unable to read the captions, but it looks like this may have been made as part of a commercial, which is shown in the first few seconds of the video after the break. From there we see the development of a locomotive mechanism that fits into the bottom of the bin. It starts as a single swivel wheel, but gets more complicated quite quickly. Once the low-profile three-wheeler is milled and assembled, it’s time to start writing the code that translates input from a Kinect 3D camera and extrapolates where the trash will land. The final result seems to do this perfectly.
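The extrapolation is the interesting part. We don't have the team's code, but the basic math is straightforward: fit a ballistic arc to a handful of tracked positions of the flying object and solve for where it crosses the height of the bin's rim. Here's a rough sketch of that step; the axis convention, sample rate, and numbers are our own assumptions, not anything from the build.

```python
# Minimal sketch of catch-point extrapolation: fit a ballistic arc to a few
# (t, x, y, z) samples of the thrown object and predict where it crosses the
# basket's rim height. Axis convention (y up, meters) is an assumption.
import numpy as np

def predict_landing(times, xs, ys, zs, rim_height=0.3):
    """times in seconds, positions in meters; y is up. Returns (x, z)."""
    # Horizontal motion is (nearly) constant velocity -> degree-1 fits.
    fx = np.polyfit(times, xs, 1)
    fz = np.polyfit(times, zs, 1)
    # Vertical motion is a parabola under gravity -> degree-2 fit.
    fy = np.polyfit(times, ys, 2)

    # Solve fy(t) = rim_height and keep the later (descending) crossing.
    a, b, c = fy[0], fy[1], fy[2] - rim_height
    roots = np.roots([a, b, c])
    t_hit = max(r.real for r in roots if abs(r.imag) < 1e-9)

    return np.polyval(fx, t_hit), np.polyval(fz, t_hit)

# Example: five samples of a tossed ball, 30 Hz apart (Kinect frame rate).
t = np.arange(5) / 30.0
x = 0.5 * t                      # drifting sideways
y = 1.5 + 2.0 * t - 4.9 * t**2   # thrown upward, falling under gravity
z = 2.0 - 3.0 * t                # moving toward the camera
print(predict_landing(t, x, y, z))
```

The hard part the video glosses over is doing all of this fast enough for a small wheeled bin to cover the distance before the ball arrives.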

[Read more...]

Finger recognition on the Kinect

The Kinect is awesome, but if you want to do anything at a higher resolution than tracking a person’s limbs, you’re out of luck. [Chris McCormick] over at CogniMem has a great solution to this problem: use a neural network on a chip to recognize fingers with hardware already connected to your Xbox.

The build uses the very cool CogniMem CM1K neural network on a chip, trained to distinguish counting from one to four on a single hand, along with an ‘a-okay’ sign, a Vulcan salute (shown above), and rocking out at a [Dio] concert. As [Chris] shows us in the video, these finger gestures can be used to draw on a screen and move objects using only an open palm and closed fist; not too far off from the Minority Report and Iron Man UIs.
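The CM1K does its classification in dedicated silicon, so don't take this as [Chris]'s method, but if you just want to experiment with finger counting in software, a depth-thresholded hand mask plus OpenCV's convex hull and convexity defects gets you surprisingly far. A rough sketch, with thresholds that are guesses rather than tuned values:

```python
# Software-only finger counting from a binary hand mask (for example a
# depth-thresholded Kinect frame): deep valleys between fingertips show up
# as convexity defects of the hand contour. Not the CM1K's approach.
import cv2
import numpy as np

def count_fingers(hand_mask):
    """hand_mask: single-channel 8-bit image, hand pixels = 255."""
    # [-2] keeps this working on both OpenCV 3 and OpenCV 4 return formats.
    contours = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)      # biggest blob = the hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0

    # Count defects deeper than ~20 px; each one is a gap between fingers.
    valleys = sum(1 for d in defects[:, 0] if d[3] / 256.0 > 20.0)
    return valleys + 1 if valleys else 0
```

It works well enough for one-through-four, but telling a Vulcan salute from devil horns is exactly the kind of fuzzier pattern matching the neural net chip is built for.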

If you’d like to duplicate this build, we found the CM1K neural network chip available here for a bit more than we’d be willing to pay. A neural net on a chip is an exceedingly cool device, but it looks like this build will have to wait for the Kinect 2 to make it down to the consumer and hobbyist arena.

You can check out the videos of Kinect finger recognition in action after the break with World of Goo and Google Maps.

[Read more...]

Going to the park with your augmented reality girlfriend

Lonely? Bored? Really into J-pop? If you’re any of these things, here’s the build for you. It’s an augmented reality system that allows you to go on a date with one of Japan’s most popular virtual singers.

The character chosen to show off this augmented reality girlfriend tech is [Hatsune Miku], a voice synthesizer personified as a doll-eyed anime avatar. [Miku] is an immensely popular character in Japan, with thousands of people going to her concerts, so she was the obvious choice for this project.

The build details for this hack are a little sparse, confounded by the horrible Google Translate rendering of the blog linked in the YouTube description. From what we can gather from the video and this Twitter account, the build is based on an ASUS Xtion Kinect clone and a nice pair of video goggles.

We’re expecting the comments on this post to fill up with ‘Japan is really weird’ remarks, but we can see a few very, very cool applications of this tech. For instance, think how cool it would be to be guided around a science museum by [Einstein], or around Philadelphia by [Ben Franklin].

Kinetic Space: software for your Kinect projects

For all of you who found yourselves wanting to use a Kinect to control something but had no idea how to get the data out of it, you’re in luck. Kinetic Space is a tool available for Linux, Mac, and Windows that gives you what you need to set up gesture controls quickly and easily. As you can see in the video below, it is fairly simple to set up: you do your action, set the amount of influence from each body part (basically telling it what to ignore), and save the gesture. This system has already been used for tons of projects and has now hit version 2.0.
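We don't have Kinetic Space's source in front of us, but the "influence from each body part" idea boils down to a weighted comparison between a recorded pose and the live skeleton: joints you tell it to ignore simply contribute nothing to the score. A minimal sketch of that idea, with joint names and a match threshold that are ours purely for illustration:

```python
# Weighted pose comparison: score how closely a live skeleton matches a
# recorded gesture pose, with per-joint weights so ignored body parts
# contribute nothing. Joint names and threshold are illustrative only.
import numpy as np

JOINTS = ['head', 'left_hand', 'right_hand', 'left_foot', 'right_foot']

def pose_distance(live, recorded, weights):
    """live/recorded: dicts mapping joint name -> (x, y, z) in meters."""
    total, weight_sum = 0.0, 0.0
    for joint in JOINTS:
        w = weights.get(joint, 0.0)
        d = np.linalg.norm(np.subtract(live[joint], recorded[joint]))
        total += w * d
        weight_sum += w
    return total / weight_sum if weight_sum else float('inf')

# Example: only the right hand matters for this gesture.
weights = {'right_hand': 1.0}
live = {j: (0.0, 1.0, 2.0) for j in JOINTS}
recorded = dict(live, right_hand=(0.1, 1.0, 2.0))
print(pose_distance(live, recorded, weights) < 0.15)  # True -> gesture matched
```

Real gesture recognizers also compare motion over time rather than a single pose, but the per-joint weighting is the part that makes the "tell it what to ignore" step work.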

[Read more...]

3D mapping of huge areas with a Kinect

The picture you see above isn’t a dollhouse, a noclipped video game, or any other artificially created virtual environment. That bathroom exists in real life, but it was digitized into a 3D object with a Kinect and Kintinuous, an awesome piece of software that allows for the creation of huge 3D environments in real time.

Kintinuous is an extension of the Kinect Fusion and ReconstructMe projects. Where Fusion and ReconstructMe were limited to mapping small areas in 3D (a tabletop, for example), Kintinuous allows a Kinect to be moved from room to room, mapping an entire environment in 3D.

The Kintinuous paper is available, and it goes over how the authors capture point cloud data and overlay the color video to create textured 3D meshes. After the break are two videos showing off what Kintinuous can do. It’s jaw-dropping, and the implications are amazing. We can’t find the binaries or source for Kintinuous, but if anyone finds a link, drop us a line and we’ll update this post.
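For anyone wondering what Fusion, ReconstructMe, and Kintinuous all have in common under the hood, the first step is always the same: back-project each depth frame into a point cloud with the pinhole camera model. Here's a bare-bones sketch using commonly quoted approximations of the Kinect depth camera intrinsics; none of these numbers come from the Kintinuous paper.

```python
# Back-project a Kinect depth frame into a 3D point cloud using the pinhole
# model. FX/FY/CX/CY are commonly quoted approximate calibration values for
# the Kinect depth camera, not figures from the Kintinuous paper.
import numpy as np

FX, FY = 594.2, 591.0    # focal lengths in pixels (approximate)
CX, CY = 339.5, 242.7    # principal point in pixels (approximate)

def depth_to_points(depth_m):
    """depth_m: 480x640 array of depths in meters (0 = no reading)."""
    rows, cols = depth_m.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading
```

The hard part, and what Kintinuous adds, is registering thousands of these per-frame clouds against each other fast enough to keep up while you carry the camera from room to room.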

[Read more...]

More Kinect holograms from [programming4fun]

[programming4fun] has been playing around with his Kinect-based 3D display and built a holographic WALL-E controllable with a Windows Phone. It’s a ‘kid safe’ version of his Terminator personal assistant that has voice control and support for 3D anaglyph and shutter glasses.

When we saw [programming4fun]’s Kinect hologram setup last summer we were blown away. By tracking a user’s head with a Kinect, [programming4fun] was able to display a 3D image using only a projector. This build was adapted into a 3D multitouch table and real-life portals, so we’re glad to see [programming4fun] refining his code and coming up with some really neat builds.
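If you're wondering how a flat projection manages to look like a hologram, the usual trick is to rebuild the projection every frame as an off-axis (asymmetric) frustum anchored to the tracked head position. We don't know exactly how [programming4fun] does it, but a minimal version looks something like this; the screen size, axis convention, and OpenGL-style matrix layout are our own assumptions.

```python
# Off-axis projection for head-tracked "hologram" rendering: the frustum is
# recomputed from the viewer's head position relative to the display, so
# perspective shifts as the viewer moves. Screen size is an assumption.
import numpy as np

def head_tracked_projection(eye, screen_w=1.0, screen_h=0.75,
                            near=0.1, far=20.0):
    """eye: (x, y, z) of the head in meters, relative to the screen center,
    with the screen in the z=0 plane and the viewer at positive z."""
    ex, ey, ez = eye
    # Scale the screen edges back to the near plane to get frustum bounds.
    left   = (-screen_w / 2 - ex) * near / ez
    right  = ( screen_w / 2 - ex) * near / ez
    bottom = (-screen_h / 2 - ey) * near / ez
    top    = ( screen_h / 2 - ey) * near / ez

    # Standard glFrustum-style perspective matrix from those bounds.
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

# Feed it the Kinect's head-joint position every frame:
print(head_tracked_projection((0.2, 0.1, 1.5)))
```

Pair this with a view transform that shifts the scene by the negative of the eye position and the rendered object appears to sit still in space while you walk around it.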

In addition to robotic avatars catering to your every wish, [programming4fun] also put together a rudimentary helicopter flight simulator controlled by tilting a cell phone. It’s the same DirectX 9 heli from [programming4fun]’s original build, with the addition of Desert Strike-esque top-down graphics. This might be the future of gaming, so we’ll keep our eyes out for similar head-tracking 3D builds.

As always, videos after the break.

[Read more...]
