Finger recognition on the Kinect

The Kinect is awesome, but if you want to do anything at a higher resolution than detecting a person's limbs, you're out of luck. [Chris McCormick] over at CogniMem has a great solution to this problem: use a neural network on a chip to recognize fingers with hardware already connected to your Xbox.

The build uses the very cool CogniMem CM1K neural network on a chip trained to tell the difference between counting from one to four on a single hand, as well as an ‘a-okay’ sign, Vulcan greeting (shown above), and rocking out at a [Dio] concert. As [Chris] shows us in the video, these finger gestures can be used to draw on a screen and move objects using only an open palm and closed fist; not too far off from the Minority Report and Iron Man UIs.
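The CM1K works by matching inputs against stored prototype patterns, firing the closest "neuron" within its influence field. A minimal sketch of that style of template matching, with hypothetical feature vectors standing in for hand-shape data (the real chip handles this in hardware, in parallel):

```python
import numpy as np

# Toy sketch of CM1K-style prototype matching: each "neuron" stores a
# feature vector plus a category, and an input is classified by the
# closest prototype within the influence field. The 4-element feature
# vectors below are invented stand-ins for real hand-shape features.

class PrototypeClassifier:
    def __init__(self, influence=4.0):
        self.prototypes = []   # list of (vector, label) pairs
        self.influence = influence

    def learn(self, vector, label):
        self.prototypes.append((np.asarray(vector, float), label))

    def classify(self, vector):
        v = np.asarray(vector, float)
        best_label, best_dist = None, float("inf")
        for proto, label in self.prototypes:
            d = np.abs(proto - v).sum()   # L1 distance between patterns
            if d < best_dist:
                best_label, best_dist = label, d
        # No prototype close enough means "unknown gesture"
        return best_label if best_dist <= self.influence else None

clf = PrototypeClassifier(influence=4.0)
clf.learn([1, 0, 0, 0], "one finger")
clf.learn([1, 1, 1, 1], "four fingers")
print(clf.classify([1, 0, 0.5, 0]))   # -> "one finger"
```

Training here is just storing examples, which is why the chip can learn new gestures on the fly instead of needing an offline training pass.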

If you’d like to duplicate this build, we found the CM1K neural network chip available here for a bit more than we’d be willing to pay. A neural net on a chip is an exceedingly cool device, but it looks like this build will have to wait for the Kinect 2 to make it down to the consumer and hobbyist arena.

You can check out the videos of Kinect finger recognition in action after the break with World of Goo and Google Maps.

[Read more...]

Going to the park with your augmented reality girlfriend

Lonely? Bored? Really into J-pop? If you’re any of these things, here’s the build for you. It’s an augmented reality system that allows you to go on a date with one of Japan’s most popular virtual singers.

The character chosen to show off this augmented reality girlfriend tech is [Hatsune Miku], a voice synthesizer personified as a doll-eyed anime avatar. [Miku] is an immensely popular character in Japan, with thousands of people going to her concerts, so she was the obvious choice for this project.

The build details for this hack are a little sparse, confounded by the horrible Google Translate results of the blog linked in the YouTube description. From what we can gather from the video and this Twitter account, the build is based on an ASUS Xtion Kinect clone and a nice pair of video goggles.

We’re expecting the comments for this post to fill up with ‘Japan is really weird’ remarks, but we can see a few very, very cool applications of this tech. For instance, think how cool it would be to be guided around a science museum by [Einstein], or around Philadelphia by [Ben Franklin].

Kinetic Space: software for your Kinect projects

For all of you who found yourselves wanting to use a Kinect to control something but had no idea how to get the data from it, you’re in luck. Kinetic Space is a tool available for Linux/Mac/Windows that gives you what you need to set up gesture controls quickly and easily. As you can see in the video below, it is fairly simple to set up. You perform your action, set the amount of influence from each body part (basically telling it what to ignore), and save the gesture. This system has already been used for tons of projects and has now hit version 2.0.
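The per-body-part "influence" idea boils down to a weighted comparison between the live skeleton and a recorded template, where a weight of zero means "ignore this joint." A minimal sketch, with joint names and coordinates invented for illustration:

```python
import math

# Minimal sketch of weighted gesture matching: compare a live skeleton
# pose against a recorded template, weighting each joint by how much it
# should count. Weight 0 = ignore that joint entirely. Joint names,
# positions, and the match threshold are all hypothetical.

def gesture_distance(live, template, weights):
    """Weighted sum of per-joint distances between two poses.
    live/template: dicts mapping joint name -> (x, y, z)."""
    total = 0.0
    for joint, w in weights.items():
        total += w * math.dist(live[joint], template[joint])
    return total

template = {"right_hand": (0.5, 1.0, 2.0), "left_hand": (-0.5, 0.2, 2.0)}
weights  = {"right_hand": 1.0, "left_hand": 0.0}   # left hand ignored

# Right hand matches closely; left hand is way off but doesn't matter.
live = {"right_hand": (0.52, 1.01, 2.0), "left_hand": (0.8, 0.8, 1.5)}
print(gesture_distance(live, template, weights) < 0.1)   # -> True
```

A gesture "fires" when this distance stays under some threshold, which is essentially what you tune when you adjust each body part's influence slider.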

[Read more...]

3D mapping of huge areas with a Kinect

The picture you see above isn’t a doll house, noclipped video game, or any other artificially created virtual environment. That bathroom exists in real life, but was digitized into a 3D object with a Kinect and Kintinuous, an awesome piece of software that allows for the creation of huge 3D environments in real time.

Kintinuous is an extension of the Kinect Fusion and ReconstructMe projects. Where Fusion and ReconstructMe were limited to mapping small areas in 3D – a tabletop, for example – Kintinuous allows a Kinect to be moved from room to room, mapping an entire environment in 3D.

The Kintinuous paper is available, going over how the authors capture point cloud data and overlay the color video to create textured 3D meshes. After the break are two videos showing off what Kintinuous can do. It’s jaw-dropping, and the implications are amazing. We can’t find the binaries or source for Kintinuous, but if anyone finds a link, drop us a line and we’ll update this post.
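The core move that frees a fusion system from a fixed volume is expressing every frame's points in a shared world coordinate system using the estimated camera pose, then accumulating them as the sensor travels. A heavily simplified sketch with made-up poses and points:

```python
import numpy as np

# Heavily simplified sketch of room-scale point accumulation: each
# depth frame's points are transformed into world coordinates with the
# estimated camera-to-world pose, then appended to a growing cloud.
# (The real system also fuses, filters, and meshes; poses and points
# here are invented for illustration.)

def transform_points(points, pose):
    """Apply a 4x4 camera-to-world pose to an (N, 3) point array."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ pose.T)[:, :3]

frames = [
    # A point 1 m in front of the camera, camera at the origin...
    (np.array([[0.0, 0.0, 1.0]]), np.eye(4)),
    # ...and the same relative point after the camera moved 2 m in x.
    (np.array([[0.0, 0.0, 1.0]]), np.array([[1.0, 0, 0, 2.0],
                                            [0, 1.0, 0, 0],
                                            [0, 0, 1.0, 0],
                                            [0, 0, 0, 1.0]])),
]

world_cloud = np.vstack([transform_points(pts, pose)
                         for pts, pose in frames])
print(world_cloud)   # two distinct world-space points
```

Estimating those poses frame-to-frame (and closing loops when you revisit a room) is the hard part, and is exactly what the paper covers.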

[Read more...]

More Kinect holograms from [programming4fun]

[programming4fun] has been playing around with his Kinect-based 3D display and building a holographic WALL-E controllable with a Windows phone. It’s a ‘kid safe’ version of his Terminator personal assistant that has voice control and support for 3D anaglyph and shutter glasses.

When we saw [programming4fun]’s Kinect hologram setup last summer we were blown away. By tracking a user’s head with a Kinect, [programming] was able to display a 3D image using only a projector. This build was adapted into a 3D multitouch table and real-life portals, so we’re glad to see [programming4fun] refining his code and coming up with some really neat builds.
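The trick that makes a flat projection read as 3D is re-projecting every virtual point onto the screen plane along the line from the tracked head to that point, so the image shifts with parallax as you move. A toy sketch of the geometry, with arbitrary units and positions:

```python
# Toy sketch of head-coupled perspective: each virtual point is
# projected onto the screen plane (z = 0) along the ray from the
# viewer's head to the point. As the head moves, near and far points
# shift by different amounts, producing the 3D illusion. All
# coordinates below are arbitrary illustrations.

def project_to_screen(head, point, screen_z=0.0):
    """Intersect the ray head -> point with the plane z = screen_z."""
    hx, hy, hz = head
    px, py, pz = point
    t = (screen_z - hz) / (pz - hz)
    return (hx + t * (px - hx), hy + t * (py - hy))

# A point one unit "behind" the screen, viewed head-on, lands dead center...
print(project_to_screen((0, 0, -2), (0, 0, 1)))
# ...but slides sideways once the head moves, creating parallax.
print(project_to_screen((1, 0, -2), (0, 0, 1)))
```

Swap the fixed head position for live Kinect skeleton data and re-render every frame, and you have the basic illusion.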

In addition to robotic avatars catering to your every wish, [programming4fun] also put together a rudimentary helicopter flight simulator controlled by tilting a cell phone. It’s the same DirectX 9 heli from [programming]’s original build, with the addition of Desert Strike-esque top-down graphics. This might be the future of gaming here, so we’ll keep our eyes out for similar head-tracking 3D builds.
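Tilt control of this sort typically reads the phone's accelerometer as a gravity vector and maps its lean angles onto roll and pitch commands. A small sketch; the axis conventions, gain, and clamping are our assumptions, not details from the build:

```python
import math

# Small sketch of tilt-to-command mapping: the phone's accelerometer
# reports gravity components (ax, ay, az) in g's, and the lean angles
# become roll/pitch commands for the virtual helicopter. Axis
# conventions, gain, and clamping range are assumptions.

def tilt_to_command(ax, ay, az, gain=1.0, max_cmd=1.0):
    """Convert accelerometer g-components to (roll, pitch) commands."""
    roll = math.atan2(ax, az)    # lean left/right
    pitch = math.atan2(ay, az)   # lean forward/back

    def clamp(v):
        return max(-max_cmd, min(max_cmd, v * gain))

    return clamp(roll), clamp(pitch)

print(tilt_to_command(0.0, 0.0, 1.0))   # level phone -> (0.0, 0.0)
```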

As always, videos after the break.

[Read more...]

Making real-life portals with a Kinect

[radicade] wanted to know what real life portals would look like; not something out of a game, but actual blue and orange portals on his living room wall. Short of building a portal gun, the only option available to [radicade] was simulating a pair of portals with a Kinect and a projector.

One of the more interesting properties of portals is the ability to see through to the other side – you can look through the blue portal and see the world from the orange portal’s vantage point. [radicade] simulated the perspective of a portal using the head-tracking capabilities of a Kinect.

The Kinect grabs the depth map of a room, and calculates what peering through a portal would look like. This virtual scene is projected onto a wall behind the Kinect, creating the illusion of real-life orange and blue portals.
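In effect, the viewer's head position relative to the blue portal becomes a virtual camera position at the orange portal, and the room's depth map is rendered from there. A translation-only sketch to make the geometry obvious (a real version would also rotate between the two portal frames):

```python
# Minimal sketch of the portal-view geometry: express the tracked head
# position as an offset from the blue portal, then re-anchor that
# offset at the orange portal to get the virtual camera pose. This
# version handles translation only; positions are invented examples.

def portal_camera(head, blue_portal, orange_portal):
    """Virtual camera position: the head's offset from the blue
    portal, applied at the orange portal."""
    offset = tuple(h - b for h, b in zip(head, blue_portal))
    return tuple(o + d for o, d in zip(orange_portal, offset))

head = (1.0, 1.6, 3.0)      # viewer standing in the room
blue = (0.0, 1.0, 0.0)      # portal on the near wall
orange = (5.0, 1.0, 4.0)    # portal across the room

print(portal_camera(head, blue, orange))   # -> (6.0, 1.6, 7.0)
```

Rendering the captured room geometry from that virtual camera, then projecting the result inside the portal outline, is what sells the see-through effect.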

We’ve seen this kind of pseudo-3D, head-tracking display before (1, 2), so it’s no surprise the 3D illusion of portals would carry over to a projected 3D display. You can check out [radicade]’s portal demo video after the break.

[Read more...]

Sandbox topographical play gets a big resolution boost

Here’s another virtual sandbox meets real sandbox project. A team at UC Davis is behind this depth-mapped and digitally projected sandbox environment. The physical sandbox uses fine-grained sand which serves nicely as a projection surface as well as a building medium. It includes a Kinect depth camera overhead, and an offset digital projector to add the virtual layer. As you dig or build elevation in parts of the box, the depth camera changes the projected view to match in real-time. As you can see after the break, this starts with topographical data, but can also include enhancements like the water feature seen above.
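The core loop is simple in outline: the Kinect returns a depth image of the sand surface, depth is converted to elevation, and elevation is binned into colored contour bands for the projector. A toy sketch; the camera height, water level, and band thresholds are invented for illustration:

```python
import numpy as np

# Toy sketch of the sandbox's depth-to-contour mapping: convert a
# Kinect depth image (mm to the sand surface) into elevation, then bin
# elevations into color bands for the projector. Camera height, water
# level, and band parameters are invented for illustration.

CAMERA_HEIGHT_MM = 1000   # hypothetical Kinect-to-sandbox-floor distance
WATER_LEVEL_MM = 40       # elevations below this get the "water" color

def color_bands(depth_mm, n_bands=5, max_elev_mm=200):
    """Map a depth image to integer color-band indices.
    Band 0 = water, bands 1..n_bands = rising terrain."""
    elevation = CAMERA_HEIGHT_MM - depth_mm   # higher sand = nearer camera
    bands = 1 + (elevation - WATER_LEVEL_MM) * (n_bands - 1) // max_elev_mm
    return np.where(elevation < WATER_LEVEL_MM, 0, bands).astype(int)

depth = np.array([[990, 900],    # 10 mm (water) and 100 mm of sand
                  [850, 980]])   # 150 mm peak and 20 mm (water)
print(color_bands(depth))
```

Project the band image back down through the calibrated projector and the sand appears to carry its own topographic map, updating as you dig.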

It’s a big step forward in resolution compared to the project from which the team took inspiration. We have already seen this concept used as an interactive game. But we wonder about the potential of using this to quickly generate natural environments for digital gameplay. Just build up your topography in sand, jump into the video game and make sure it’s got the attributes you want, then start adding in trees and structures.

Don’t miss the video demo embedded after the break.

[Read more...]
