Encase Yourself In Carbonite With Kinect


There never seems to be a lull in the stream of new and novel hacks that people create around Microsoft’s Kinect. One of the more recent uses for the device comes from [Interactive Fabrication] and allows you to fabricate yourself, in a manner of speaking.

The process uses the Kinect to create a 3D model of a person, which is displayed on a computer monitor. Once you have selected your preferred pose, the model is rendered by a 3D plastic printer. Each scan results in a 3 cm x 3 cm plastic tile, complete with snap-together dovetail joints that allow multiple tiles to be combined. A full body scan can be assembled from three of these tiles, resulting in a neat “Han Solo trapped in Carbonite” effect.

Currently only about 1/3 of the Kinect’s full resolution is used to create these models, which is pretty promising news for anyone who would try this at home. Theoretically, you should be able to create larger, more detailed models of yourself, provided you have a 3D printer at your disposal.
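The Interactive Fabrication pipeline itself isn’t published here, but the depth-map-to-printable-tile step can be sketched roughly: treat each depth sample as a height on a small square base and triangulate the grid into a relief mesh. Everything below (the 10 mm maximum relief, the grid handling) is an assumption for illustration, not the project’s actual code.

```python
# Sketch: turn a Kinect-style depth grid into a printable height-field mesh.
# Illustrative only -- the Interactive Fabrication pipeline is not public here.

def depth_to_mesh(depth_mm, tile_mm=30.0):
    """Map an R x C grid of depth samples (mm) onto a tile_mm-square base.

    Returns (vertices, triangles): vertices are (x, y, z) tuples and
    triangles index into the vertex list. Nearer surfaces become taller relief.
    """
    rows, cols = len(depth_mm), len(depth_mm[0])
    near = min(min(row) for row in depth_mm)
    far = max(max(row) for row in depth_mm)
    span = (far - near) or 1.0          # avoid dividing by zero on flat scans
    verts = []
    for i, row in enumerate(depth_mm):
        for j, d in enumerate(row):
            x = j / (cols - 1) * tile_mm
            y = i / (rows - 1) * tile_mm
            z = (far - d) / span * 10.0  # 10 mm max relief; nearer = taller
            verts.append((x, y, z))
    tris = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = i * cols + j             # split each grid cell into 2 triangles
            b, c, d2 = a + 1, a + cols, a + cols + 1
            tris.extend([(a, b, c), (b, d2, c)])
    return verts, tris
```

From there the triangle list could be written out as an STL file for the printer; an R x C grid always yields 2·(R−1)·(C−1) triangles.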

Keep reading for a quick video presentation of the fabrication process.


Kinect To Get Windows Drivers Months After Open Source Drivers Were Developed

Microsoft is planning to release Windows drivers for the Kinect this spring, months after open source drivers were developed by a motivated hacking community. [Johnny Chung Lee], who worked with the Microsoft team when the hardware was developed, mentions that he had pushed for the giant to develop and release at least basic Windows drivers. That refusal led him to a position as top cheerleader and bounty contributor in Adafruit’s Open Kinect contest, which quickly resulted in the availability of open source drivers. If you’ve been following Hackaday or any other tech blog over the last three months, you’ll know that an explosion of Kinect projects followed, and [Johnny] figures Microsoft’s decision to release Windows drivers is an attempt to ride this wave on its own flagship OS rather than continue watching from the sidelines.

Inexpensive Robot Platform Combines Mass-produced Parts

Meet Bilibot, a modular robot that aims to lower the cost of entry for robotics tinkerers. It combines the Kinect, the iRobot Create, and an Ubuntu box running ROS using some laser-cut mounting brackets. These are relatively inexpensive components, but the most exciting thing is that there’s already a slew of examples out there that use this hardware. For instance, the ROS body tracking we looked in on in January can be plucked directly and used on this platform. You’ll recognize the base as the iRobot Create, which was also used in the video chat robot from last week. The brains of the operation come in a choice of three Linux boxes – two headless and one laptop – all with ROS pre-installed. Watch the open-source autonomy as it tools around the office in the video after the break.
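The person-following behavior shown in the video can be reduced to a simple control law: the Kinect reports where the tracked person is, and a proportional controller turns that into a drive command for the Create base. This is a hypothetical sketch — the gains, setpoint, and function name are ours, not Bilibot’s.

```python
# Hypothetical follow-me control law for a Bilibot-style platform: convert the
# tracked person's position (from the Kinect) into a drive command for the
# Create base. Gains and setpoint are illustrative, not from the project.

def follow_cmd(person_x, person_z, target_z=1.2, k_lin=0.6, k_ang=1.5):
    """person_x: lateral offset (m, +right); person_z: distance ahead (m).

    Returns (linear m/s, angular rad/s): a proportional controller that
    drives forward until the person is target_z metres away and steers
    to keep them centred in the Kinect's view.
    """
    linear = k_lin * (person_z - target_z)   # close the distance gap
    angular = -k_ang * person_x              # turn toward the person
    return (linear, angular)
```

In a ROS setup this would run in a node that subscribes to the skeleton tracker and publishes the resulting velocity command to the Create’s drive topic.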


Kinect Home Theater Control


[Harishankar] has posted a video on his blog demonstrating the ability to control devices over IR using the Microsoft Kinect sensor. While controlling devices with the Kinect is nothing new, he is doing something a little different from what you have seen before. The Kinect interfaces directly with his Mac Mini and tracks his movements via OpenNI. These movements are then compared against a list of predefined gestures, each of which has been mapped to a specific IR function for controlling his home theater.

Once a gesture has been recognized, the corresponding command is relayed from the Mac via a USB-UIRT to the various home theater components. While there are not a lot of details fleshed out in the blog post, [Harishankar] says he will gladly forward his code to you if you request it via email.
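The post doesn’t include the code, but the gesture-to-IR dispatch it describes could look something like this minimal sketch. The gesture names, thresholds, and action labels are invented for illustration, and a real setup would hand the matched action to the USB-UIRT rather than just returning it.

```python
# Sketch of a gesture -> IR dispatch like the one described above.
# All names and thresholds are made up; the real system sends the
# matched command out through a USB-UIRT transceiver.

IR_ACTIONS = {                      # gesture name -> home theater function
    "swipe_right": "volume_up",
    "swipe_left": "volume_down",
    "push": "play_pause",
}

def classify(hand_track, threshold=0.25):
    """Classify a gesture from a list of (x, y, z) hand positions in metres."""
    dx = hand_track[-1][0] - hand_track[0][0]   # lateral travel
    dz = hand_track[-1][2] - hand_track[0][2]   # travel toward the sensor
    if dx > threshold:
        return "swipe_right"
    if dx < -threshold:
        return "swipe_left"
    if dz < -threshold:                         # hand pushed forward
        return "push"
    return None

def dispatch(hand_track):
    """Return the IR action for a hand track, or None if nothing matched."""
    return IR_ACTIONS.get(classify(hand_track))
```

The threshold keeps ordinary fidgeting from firing commands; anything below it classifies as no gesture at all.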

Thanks to [Peter] for the tip.

ROS Gains Full Body Telemetry

[Taylor Veldrop] has been playing with a NAO robot and ROS, mixed with a Kinect, to get some pretty amazing results. The last time we saw any work done with ROS and the Kinect, it was allowing some basic telemetry using the PR2. [Taylor] has taken this a step further, allowing for full-body control of the NAO robot. Basic mimicking mixed with a little bit of autonomy allows the NAO to follow his steps around a room and even slice a banana or hammer nails. We think this is pretty impressive, especially if he were to mix it together with a motion-tracking stereoscopic display. Follow along after the break to see it pull off some of these cool feats.
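One plausible building block for this kind of full-body mimicry is converting three tracked skeleton points into a joint angle the robot can reproduce. Here’s a minimal sketch of that step; the point conventions are our assumption, not [Taylor]’s code.

```python
import math

# Convert three tracked skeleton points into the angle at the middle joint,
# e.g. shoulder-elbow-wrist -> elbow angle, which a humanoid like the NAO
# could then be commanded to match. Illustrative sketch only.

def joint_angle(a, b, c):
    """Angle at b (radians) formed by 3D points a-b-c."""
    u = tuple(a[i] - b[i] for i in range(3))    # vector b -> a
    v = tuple(c[i] - b[i] for i in range(3))    # vector b -> c
    dot = sum(u[i] * v[i] for i in range(3))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    # Clamp for floating-point safety before acos
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))
```

Run per frame over each limb’s joint triples, the angles form a pose the robot can mirror; a straight arm comes out as π, a right-angle bend as π/2.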


VR! Now With More Kinect, Wiimote, And Vuzix

Those of us who remember when you could actually go to a mall and play on a VR game machine tend to remember it fondly. What happened? Computing horsepower has grown enormously and today’s graphics are simply stunning, yet there’s been no major VR revival. Yeah, those helmets were huge and gave you a headache, but it was worth it. With the 3D positioning abilities of the latest gaming crazes, the Wiimote and the Kinect, [Nao_u] is finally taking this where we all knew it should have gone (Google translated). Well, maybe we would have had fewer creepy anime faces flying around squirting ink, but the basics are there. He has created a VR system utilizing the Wiimote for hand position, a Vuzix display for head tracking, and the Kinect for body tracking. Even with the creepy flying heads we want to play it, especially after seeing him physically duck behind boxes in the video after the break. Long live VR!
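As a rough illustration of how the sensors could be fused, here is a sketch that builds a first-person camera from a Kinect-tracked head position plus Vuzix-style yaw and pitch. The coordinate conventions (y-up, −z forward) are assumptions on our part, not taken from [Nao_u]’s write-up.

```python
import math

# Fuse head-position tracking (Kinect) with head-orientation tracking
# (Vuzix-style yaw/pitch) into a first-person camera. The y-up, -z-forward
# convention is an assumption for illustration.

def head_camera(head_pos, yaw, pitch):
    """Return (eye, forward): eye is the tracked head position, forward is
    the unit view direction for the given yaw/pitch in radians."""
    fx = math.sin(yaw) * math.cos(pitch)
    fy = math.sin(pitch)
    fz = -math.cos(yaw) * math.cos(pitch)
    return head_pos, (fx, fy, fz)
```

This is what makes physically ducking behind a box work: the eye point comes from the body tracker, so crouching in the room moves the virtual camera down behind virtual cover.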


Projector Tricks Make Use Of Kinect 3D Mapping

[Don’t stop the clock] is doing some work with a projector, a camera, and the Kinect. What he’s accomplished is quite impressive, combining the three to manipulate light with your body. The image above is a safer rendition of the Hadouken from the Street Fighter video games, throwing light across the room instead of fire. This comes at the end of the video after the break, but first he shows off the core features of the system. You can hold up your hand and wave it to turn it into a light source. In other words, the projector shines light onto your hand, moving with it and varying the intensity based on the hand’s location in 3D space. Since the Kinect sends fairly precise data back to the computer, the projected image is trimmed to match your hand and arm without spilling onto the rest of the room, until you touch a surface you want illuminated or throw the light source with a flick of the wrist. It may seem trivial at first glance, but we find the alignment of the projector and the speed at which the image updates quite impressive.
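A minimal sketch of the trimming trick, assuming a captured background depth frame and a 2D affine projector calibration (the calibration numbers below are placeholders, not the author’s values): anything sufficiently nearer than the background gets lit, and each Kinect pixel is mapped into projector space so the projected shape lines up with the real hand.

```python
# Sketch: mask out everything but the foreground (hand/arm) in a depth frame,
# then map Kinect pixels into projector coordinates. Calibration values are
# placeholders for illustration.

def hand_mask(depth, background, threshold=100):
    """Per-pixel mask: True where the depth frame (mm) is at least
    `threshold` mm nearer than the captured background frame."""
    return [[(b - d) > threshold for d, b in zip(drow, brow)]
            for drow, brow in zip(depth, background)]

def to_projector(x, y, calib=(1.2, 0.0, 40.0, 0.0, 1.2, 25.0)):
    """Map a Kinect pixel to projector space with a 2D affine transform
    (scale + offset here; a real calibration would be fitted to the room)."""
    a, b, tx, c, d, ty = calib
    return (a * x + b * y + tx, c * x + d * y + ty)
```

The background subtraction is what keeps the light from overflowing onto the room, and the affine map is the alignment step we found so impressive, done once for the projector/Kinect pair.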
