Super refined Kinect physics demo


Since the Kinect has become so popular among hackers, [Brad Simpson] over at IDEO Labs finally purchased one for their office and immediately got to tinkering. In about two to three hours, he put together a pretty cool physics demo showing off some of the Kinect’s abilities.

Rather than using rough skeleton measurements like most hacks we have seen, he paid careful attention to the software side of things. Starting off at the Kinect’s full resolution (something not everybody does), [Brad] took the data and manipulated it quite a bit before creating the video embedded below. The skeleton data was collected and run through several iterations of a smoothing algorithm to substantially reduce the noise surrounding the resulting outline.
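The iterative smoothing idea can be sketched roughly like this. This is a minimal, hypothetical illustration in Python/NumPy, not [Brad]’s actual code; the function names, depth thresholds, and iteration count are all assumptions:

```python
# Hypothetical sketch: threshold a Kinect depth frame into a user
# silhouette mask, then run it through several smoothing passes to
# reduce the jagged, noisy outline raw depth data produces.
import numpy as np

def box_blur(mask, k=3):
    """One pass of a k x k box blur over a 2-D float array."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="edge")
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out / (k * k)

def smooth_silhouette(depth, near=500, far=1500, iterations=4):
    """Threshold a depth frame (mm) to a user mask, then smooth it repeatedly."""
    mask = ((depth > near) & (depth < far)).astype(float)
    for _ in range(iterations):
        mask = box_blur(mask)
    return mask > 0.5  # re-binarise the softened edge

# Toy one-person "depth frame": a noisy blob at ~1000 mm on a 3000 mm background.
rng = np.random.default_rng(0)
depth = np.full((64, 64), 3000.0)
depth[16:48, 16:48] = 1000.0 + rng.normal(0, 20, (32, 32))
silhouette = smooth_silhouette(depth)
print(silhouette.sum())  # number of pixels inside the smoothed outline
```

Repeated small blurs approximate a Gaussian filter, so each iteration rounds off more of the speckle noise at the silhouette boundary while leaving the interior intact.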

The final product is quite different from the Kinect videos we are used to seeing, and it drastically improves how the user is able to interact with virtual objects added to the environment. As you may have noticed, the blocks that were added to the video rarely, if ever, penetrate the outline of the individual in the movie. This isn’t due to some sort of digital trickery – [Brad] was able to prevent objects from intersecting by tweaking the Kinect data feed.
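One way to picture that non-penetration behaviour is to treat the smoothed silhouette as solid collision geometry. This is a toy sketch under that assumption, not [Brad]’s implementation; the grid size, block, and velocity handling are all illustrative:

```python
# Illustrative sketch: a falling one-pixel "block" is tested against a
# boolean silhouette mask each frame and stopped at the outline instead
# of passing through it.
import numpy as np

def step_block(mask, x, y, vy):
    """Advance the block one frame; land it on the silhouette surface."""
    ny = min(y + vy, mask.shape[0] - 1)
    if mask[ny, x]:               # next cell is inside the person:
        while ny > 0 and mask[ny, x]:
            ny -= 1               # push back out to the surface
        return ny, 0              # rest on the outline, zero velocity
    return ny, vy

mask = np.zeros((10, 10), dtype=bool)
mask[6:, :] = True                # silhouette occupies the bottom rows
x, y, vy = 4, 0, 2
for _ in range(5):
    y, vy = step_block(mask, x, y, vy)
print(y, vy)                      # the block comes to rest just above the mask
```

A real physics engine would resolve contacts against a polygonal outline rather than per-pixel, but the principle is the same: the cleaner the silhouette, the more convincing the collisions.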

We’re not sure how much computing power this whole setup requires, but the code is available from their Google Code repository, so we hope to see other projects refined by utilizing the techniques shown off here.

[via KinectHacks]

15 thoughts on “Super refined Kinect physics demo”

  1. How about setting up a projector to show this image on a screen? If you stand with your back to the kinect (and the projector) and have the blocks bouncing around on the wall with you in the middle controlling the motion as above.

  2. That’s one of the best demos I have seen. I hope Microsoft’s eventual SDK has this kind of outline option along with the skeleton model and voice control.

  3. @hackius because the Kinect is $100-150 or whatever, and those replacement boards are $88; it’s the sensors and all the hardware for a little bit cheaper

  4. so if you wanted to do any kind of hardware shenanigans with the kinect, that would be a slightly more affordable way to go lol

  5. Maybe with a Kinect it is more robust than with a normal camera. (Works with any lighting and background color).

  6. it’s done with a Kinect because it’s robust; this can be done with webcams only if the lighting and setting are well adjusted, and because it uses dynamic filtering of depth data to find and outline the moving object(s) near that depth. it helps to actually read. and it’s not a Hough transform, it’s probably Canny edge detection.
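The edge-detection step the commenter mentions can be illustrated with the gradient filter that underlies Canny. A minimal sketch, assuming a NumPy implementation and a toy square silhouette (none of this is from the project’s actual code):

```python
# Minimal sketch: a Sobel-style gradient magnitude over a binary
# silhouette mask, thresholded to trace the outline. Canny adds
# smoothing, non-maximum suppression, and hysteresis on top of this.
import numpy as np

def gradient_edges(img, thresh=0.5):
    """Return a boolean edge map from central-difference gradient magnitude."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal difference
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical difference
    return np.hypot(gx, gy) > thresh

mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0                         # a filled square "silhouette"
edges = gradient_edges(mask)
print(edges.astype(int))                     # a hollow ring around the square
```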
