Since the Kinect has become so popular among hackers, [Brad Simpson] over at IDEO Labs finally purchased one for their office and immediately got to tinkering. In about two to three hours' time, he put together a pretty cool physics demo showing off some of the Kinect's abilities.
Rather than relying on the rough skeleton measurements most of the hacks we have seen use, he paid careful attention to the software side of things. Starting from the Kinect's full resolution (something not everybody does), [Brad] manipulated the data quite a bit before producing the video embedded below. The silhouette data was run through several iterations of a smoothing algorithm to substantially reduce the noise in the resulting outline.
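The write-up doesn't spell out which smoothing filter was used, but the general recipe (pull a silhouette out of the depth image, then run its contour through a few smoothing passes) could look roughly like the Python/OpenCV sketch below. The depth window, kernel size, and pass count are placeholders, not values from the project.

```python
import cv2
import numpy as np

def smoothed_outline(depth_mm, near=500, far=2500, passes=4, kernel=5):
    """Pull the largest silhouette out of a Kinect depth frame and smooth
    its contour with several averaging passes (all constants are placeholders)."""
    # Keep only pixels inside the depth window assumed to contain the user.
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255

    # The largest blob is assumed to be the person (OpenCV 4-style findContours).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    outline = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float32)

    # Several iterations of a simple moving average along the closed contour
    # knock the pixel-level jitter out of the silhouette edge.
    window = np.ones(kernel, dtype=np.float32) / kernel
    for _ in range(passes):
        for axis in (0, 1):
            padded = np.r_[outline[-kernel:, axis], outline[:, axis], outline[:kernel, axis]]
            outline[:, axis] = np.convolve(padded, window, mode="same")[kernel:-kernel]
    return outline
```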
The final product is quite different from the Kinect videos we are used to seeing, and it drastically improves how the user can interact with virtual objects added to the environment. As you may have noticed, the blocks added to the video rarely, if ever, penetrate the outline of the person in the frame. This isn't due to some sort of digital trickery – [Brad] prevents the intersection of different objects through his tweaking of the Kinect data feed.
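[Brad]'s exact collision handling isn't documented, but one way to get the same effect is to treat the smoothed contour as a boundary the physics objects can't cross: any block whose centre lands inside the silhouette gets pushed back to the nearest edge point and loses the velocity component driving it inward. A rough sketch (the helper name and the snap-to-nearest response are our own illustration, not code from the project):

```python
import cv2
import numpy as np

def push_blocks_out(outline, blocks):
    """Keep falling blocks from sinking into the person's silhouette.

    outline: Nx2 float32 array of smoothed contour points (image coordinates).
    blocks:  list of dicts with 'pos' and 'vel' as 2-element numpy arrays.
    """
    contour = outline.reshape(-1, 1, 2).astype(np.float32)
    for block in blocks:
        point = (float(block["pos"][0]), float(block["pos"][1]))
        # A positive signed distance means the block centre is inside the outline.
        if cv2.pointPolygonTest(contour, point, True) > 0:
            nearest = outline[np.argmin(np.linalg.norm(outline - block["pos"], axis=1))]
            outward = nearest - block["pos"]
            length = np.linalg.norm(outward)
            if length > 0:
                outward /= length
                # Drop any velocity component still heading deeper into the body.
                block["vel"] -= min(0.0, float(block["vel"] @ outward)) * outward
            block["pos"] = nearest.copy()
    return blocks
```

In the actual demo something like this would run every frame against the freshly smoothed outline, which is why the blocks track the moving silhouette so closely.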
We’re not sure how much computing power this setup requires, but the code is available in their Google Code repository, so we hope to see other projects refined using the techniques shown off here.
[via KinectHacks]
[vimeo http://vimeo.com/22219563 w=470]
Now for some haptic feedback! Really smooth motion; the refresh rate seems quicker than in other demos.
How about setting up a projector to show this image on a screen? If you stand with your back to the Kinect (and the projector), you could have the blocks bouncing around on the wall with you in the middle controlling the motion as above.
For those wanting to play with Kinect, I found these replacement Kinect PCBs: http://www.hongkongtrades.com/post/Genuine-XBOX-360-Kinect-sensor-mainboards.aspx
Jan D: Why would I want to replace the PCB? It’s fine as it is.
That’s one of the best demos I have seen. I hope Microsoft’s eventual SDK has this kind of outline option along with the skeleton model and voice control.
There’s a demo similar to this on the Xbox devkit, using spark particles raining from the ceiling.
The best is when the blocks get stuck inside!
That can be done with a simple webcam.
Now do it in 3D.
@hackius: because the Kinect is $100-150 or whatever, and those replacement boards are $88; it’s the sensors and all the hardware for a little bit cheaper.
So if you wanted to do any kind of hardware shenanigans with the Kinect, that would be a slightly more affordable way to go lol
Why was a kinect needed for this?
It’s a Hough transform.
like anfegori91 said, you can do this with any webcam.
Maybe with a Kinect it is more robust than with a normal camera. (Works with any lighting and background color).
It’s done with a Kinect because it’s robust; this can be done with webcams only if the lighting and setting are well adjusted. It also uses dynamic filtering of depth data to find and outline the moving object(s) near that depth. It helps to actually read. And it’s not a Hough transform; it’s probably Canny edge detection.
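For anyone who wants to experiment with the approach this commenter describes, a minimal sketch might gate the depth image around the nearest large object (the "dynamic filtering" part) and then run Canny over the resulting mask. The band width and Canny thresholds below are guesses, not values from the demo:

```python
import cv2
import numpy as np

def silhouette_edges(depth_mm, band=400):
    """Dynamically gate a Kinect depth frame around the nearest object and
    return the edges of the resulting mask (all constants are placeholders)."""
    valid = depth_mm[depth_mm > 0]   # treat zero as "no reading"
    if valid.size == 0:
        return None

    # Assume the closest cluster of valid pixels is the user, and build a
    # depth window 'band' millimetres deep behind that point.
    near = np.percentile(valid, 5)
    mask = ((depth_mm > 0) & (depth_mm < near + band)).astype(np.uint8) * 255

    # Knock out speckle noise, then trace the silhouette boundary.
    mask = cv2.medianBlur(mask, 5)
    return cv2.Canny(mask, 50, 150)
```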
Has anyone tried three Kinects, on x, y, and z, to create a rough 3D version of this?
@iknowthesethings: but that’s just the mainboard. As I understand it, it’s FOR the Kinect.