This home automation project lets you flap your arms to turn things on and off. [Toon] and [Jiang] have been working on the concept as part of their Master's thesis at university. It uses a 3D camera and some custom software to pick up your gestures. What we really like is the laser pointer that provides feedback. A red dot on the wall follows wherever the user points. Each controllable device has a special area to which the dot snaps when the user points close to it. Raising the other arm toggles the selected device on or off.
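To make the snap-and-toggle behavior concrete, here is a minimal sketch of how it might work. The device positions, snap radius, and function names are all our own invented placeholders, not anything from [Toon] and [Jiang]'s actual software:

```python
# Rough sketch of the "snap to device" behaviour described above.
# Device zones and the snap radius are made-up values for illustration.
import math

# Hypothetical wall positions (in metres) of the controllable devices.
DEVICE_ZONES = {
    "lamp":   (1.2, 0.8),
    "stereo": (2.5, 1.1),
    "fan":    (0.4, 1.6),
}
SNAP_RADIUS = 0.25  # how close the raw dot must be before it snaps


def snap_dot(raw_x, raw_y):
    """Return (x, y, device) for the laser dot, snapped if near a zone."""
    best, best_dist = None, SNAP_RADIUS
    for name, (dx, dy) in DEVICE_ZONES.items():
        dist = math.hypot(raw_x - dx, raw_y - dy)
        if dist < best_dist:
            best, best_dist = name, dist
    if best is not None:
        dx, dy = DEVICE_ZONES[best]
        return dx, dy, best          # dot jumps onto the device's area
    return raw_x, raw_y, None        # dot just follows the pointing arm


def update(raw_x, raw_y, other_arm_raised, device_states):
    """One control step: move the dot, toggle the selected device if asked."""
    x, y, device = snap_dot(raw_x, raw_y)
    if device is not None and other_arm_raised:
        device_states[device] = not device_states[device]  # on/off toggle
    return x, y
```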
Take a look at the two videos after the break to get a good overview of the concept. We’d love to see some type of laser projector used instead of just a single dot. This way you could have a pop-up menu system. Imagine getting a virtual remote control on the wall for skipping to the next audio track, adjusting the volume, or changing the TV channel.
I was wondering why the Kinect was in front of the user; it seems like it could get better depth results from a side view.
It depends on what you want the Kinect to see. If you want two-armed gestures, then you need it to be in front of the user. It does an okay (not great) job of estimating any occluded body parts, and since you typically gesture at the object you want to control, the Kinect is best positioned wherever the least body area is blocked. You can adjust for any weird body rotations using some coordinate transforms, which is what we did in http://www.youtube.com/watch?v=h7rK06BZiC0 (home automation with the Kinect sans laser pointer; feedback is given by LEDs on the control boxes instead).
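Not the code from that video, but a sketch of what such a coordinate transform could look like: rotate the skeleton's joints about the vertical axis so the shoulder line is square to the sensor no matter how the user is turned. The joint names are assumed placeholders:

```python
# Sketch only: re-express skeleton joints in a torso-aligned frame so that
# pointing gestures are judged the same way regardless of body rotation.
import numpy as np


def torso_aligned(joints):
    """joints: dict of name -> np.array([x, y, z]) in sensor coordinates."""
    shoulder_vec = joints["shoulder_right"] - joints["shoulder_left"]
    # Angle of the shoulder line around the vertical (y) axis.
    angle = np.arctan2(shoulder_vec[2], shoulder_vec[0])
    c, s = np.cos(angle), np.sin(angle)
    # Rotation about y that brings the shoulder line onto the x axis.
    rot_y = np.array([[ c, 0, s],
                      [ 0, 1, 0],
                      [-s, 0, c]])
    origin = joints["torso"]  # express everything relative to the torso
    return {name: rot_y @ (p - origin) for name, p in joints.items()}
```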
The Kinect does not actually estimate where the rest of the body is. That’s all up to the software running on the computer.
In our case, we used IISU by SoftKinetic (the Belgian company we worked with). Their software did quite a good job of building a skeleton as long as you were working in front of a screen. But when you start abusing the stuff for things it wasn't intended for (interfacing with a whole room), it has a harder time, because you start occluding important body parts that form the basis for the calculations.
Fair enough. We were using the official SDK, which does the estimation. It is true that the Kinect doesn't do any processing onboard. It just dumps the depth and color frames, and software handles translating those into skeletons. Sorry for the confusion.
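As a small illustration of that point, here is what grabbing the raw data looks like with the open-source libfreenect Python bindings (an assumption for the example; the commenters used the official SDK and SoftKinetic's IISU). The sensor only hands over depth and color frames; any skeleton comes from software running on the PC:

```python
# Grab one raw depth frame and one RGB frame from the Kinect.
import freenect
import numpy as np

depth, _ = freenect.sync_get_depth()   # 480x640 array of raw depth values
video, _ = freenect.sync_get_video()   # 480x640x3 RGB frame

print("depth value range:", int(np.min(depth)), "-", int(np.max(depth)))
# Skeleton tracking would start from this raw depth image, e.g. by
# segmenting the user from the background and fitting a body model in software.
```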