This two-handed glove input setup by [Sean Chen] and [Evan Levine] is one step closer to achieving that [Tony Stark]-like workstation; i.e., interacting with software in 3D using simple hand gestures. Dubbed Mister Gloves, the system sends accelerometer, push-button, and flex-sensor data over RF to an MCU that converts it into a standard USB HID device, meaning no drivers are needed and a Windows PC recognizes it as an ordinary keyboard and mouse. Catch a video of Mister Gloves playing Portal after the jump.
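To see roughly what that sensor-to-HID translation involves, here's a minimal sketch of mapping one glove sample to mouse and keyboard events. The field names, thresholds, and key mapping are all invented for illustration; the real firmware's packet format isn't documented here.

```python
# Hypothetical sketch: translate one glove sensor sample into HID-style events,
# in the spirit of what the Mister Gloves MCU does before emitting USB reports.
# SENSITIVITY, FLEX_CLICK_THRESHOLD, and the spacebar mapping are assumptions.

def glove_to_events(accel_x, accel_y, flex, button):
    """Return (mouse_dx, mouse_dy, click, key) for one sensor sample."""
    SENSITIVITY = 10             # accel counts per mouse count (assumed)
    FLEX_CLICK_THRESHOLD = 512   # ADC reading above which a finger bend = click
    dx = int(accel_x / SENSITIVITY)   # tilt drives the pointer
    dy = int(accel_y / SENSITIVITY)
    click = flex > FLEX_CLICK_THRESHOLD
    key = ' ' if button else None     # push button as spacebar, say, to jump
    return dx, dy, click, key
```

Because the MCU presents the result as a plain HID keyboard and mouse, the host side needs nothing beyond this kind of mapping on the glove end.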
While amazing, we’re left wondering whether gesture setups are really viable options, considering one’s arms would surely get tired. Continue reading “Mister Gloves, gesture input”
MIT is debuting its latest advancement: a multitouch screen that also functions as a gestural interface. The multitouch aspect is nothing new; the team explains that traditional interfaces using LEDs or camera systems work well, but fail to recognize gestures off-screen.
Gesture interfaces are a relatively recent highlight, with the introduction of projects like Project Natal and perspective tracking, but those fail to work at close distances to the screen. MIT has done what seemed impossible by combining and modifying the two approaches to produce the first multitouch close-proximity gestural display.
And to think, just a couple of months ago the same school was playing with pop-up books.
This new video about [Pranav Mistry’s] SixthSense project doesn’t bring us much that we haven’t seen before. At least, not on that project. What really caught our eye was the device he shows off at the beginning of the video. Using two old ball mice, he constructed a grip-style input device. It is simple and elegant, and we can definitely see using this in future hacks. Not only is it cheap and apparently effective, it seems as though it could be constructed in a very short amount of time. All you need are the wheels that spin when the ball moves, four springs, and some string. Why didn’t we think of that?
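Those salvaged mouse wheels are quadrature encoders: two offset optical channels (A and B) produce a Gray-code sequence whose ordering reveals which way the wheel spins as the string pulls it. A sketch of the decoding, with an illustrative polling-sample interface:

```python
# Quadrature decoding sketch: each sample is a 2-bit AB state from one mouse
# encoder wheel. The valid transition order gives direction; the transition
# table below is the standard Gray-code sequence.

_DELTA = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(states):
    """Accumulate a signed position from a sequence of 2-bit AB samples."""
    position = 0
    prev = states[0]
    for s in states[1:]:
        position += _DELTA.get((prev, s), 0)  # ignore invalid/missed steps
        prev = s
    return position
```

With a spring retracting each string, the signed count maps directly to how far the string has been pulled out, which is all the grip device needs to report.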
We’re filing this one under “best interface implementation”. This robot is controlled by finger gestures on the surface of an iPod Touch. It can walk forward, turn, sidestep, jump, and kick a ball based on the input it receives from your sweaty digits. Unlike vehicles controlled by an iPhone (or by Power Wheels), this has some potential. Especially considering the inevitable proliferation of multi-touch devices in our everyday lives.
[Benjamin] submitted this slick project. It’s a gesture-based control unit for the iPod and iPhone. It plugs into the dock port and allows you to control the track and volume with simple gestures. While accelerometer-equipped units can already “shake to shuffle”, they lack the ability to simply skip tracks forward or backward. He notes that with an accelerometer, simple gestures can be harder to decipher than with a gyro. The gyro can tell which direction you are twisting the device, so it’s easier to utilize. [Benjamin] was previously covered when he released the iPodGPS.
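The gyro's advantage is easy to see in code: integrating the angular rate gives a signed rotation angle, so left versus right twist falls straight out of the sign. A sketch, with a sample rate and threshold that are assumed values rather than anything from [Benjamin]'s device:

```python
# Sketch: detect a twist gesture from gyro readings (degrees/second) sampled
# every dt seconds. The 30-degree threshold and the clockwise = skip-forward
# mapping are illustrative assumptions.

def detect_twist(rates_dps, dt=0.01, threshold_deg=30.0):
    """Return 'next', 'previous', or None for a track-skip gesture."""
    angle = sum(r * dt for r in rates_dps)  # simple rectangular integration
    if angle > threshold_deg:
        return 'next'       # clockwise twist
    if angle < -threshold_deg:
        return 'previous'   # counter-clockwise twist
    return None
```

An accelerometer sees gravity plus linear shake mixed together, so recovering the same signed twist from it takes much more filtering; the gyro hands you the rotation directly.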
Some grad students at Duke University have been working on a new tool for cell phones equipped with accelerometers. The software, called Phonepoint Pen, allows you to write in the air with your phone. Though we don’t find the applications they mention very practical, we could see this being very nice for application navigation. If you could program a three-dimensional gesture to load certain apps, that would be nice.
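The gesture-to-app idea boils down to template matching: record one trace per app, then launch whichever stored gesture the new trace sits closest to. A minimal single-axis sketch, with the resampling length, distance cutoff, and template names all invented for illustration:

```python
# Sketch: match an accelerometer trace against stored gesture templates by
# Euclidean distance after resampling to a fixed length, so traces of
# different durations compare fairly. Single-axis for simplicity.

def _resample(trace, n=8):
    """Pick n evenly spaced samples from a trace of any length."""
    step = (len(trace) - 1) / (n - 1)
    return [trace[round(i * step)] for i in range(n)]

def classify(trace, templates, max_dist=5.0):
    """Return the closest template's name, or None if nothing is close."""
    best_name, best_dist = None, max_dist
    sample = _resample(trace)
    for name, tmpl in templates.items():
        ref = _resample(tmpl)
        dist = sum((a - b) ** 2 for a, b in zip(sample, ref)) ** 0.5
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```

The `None` fallback matters in practice: without a rejection threshold, every random wave of the phone would launch something.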
[floe] wrote in to tell us about his multitouch-based thesis work. While many projects have focused on the hardware side of multitouch, TISCH is designed to advance the software side. TISCH is a multiplatform library that features hardware abstraction and gesture recognition. This takes a lot of weight off of widget developers, since they can specify known library gestures instead of coding the exact motions from scratch. Using TISCH also means a standard set of gestures across multiple widgets, so the learning curve will be gentler when a user tries out a new app. If you’re researching multitouch, check out this project and help improve the codebase.
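This is not TISCH's actual API, but the pattern it describes can be sketched: widgets register interest in named library gestures, and the recognition layer dispatches to them, so no widget ever touches raw motion events. All names here are illustrative.

```python
# Sketch of the hardware-abstraction / gesture-library pattern: widgets bind
# callbacks to gesture names; the recognizer calls dispatch() when it spots one.

class GestureDispatcher:
    """Maps (widget, gesture-name) pairs to callbacks."""

    def __init__(self):
        self._handlers = {}

    def bind(self, widget, gesture, callback):
        """Widget declares interest in a named gesture, e.g. 'pinch'."""
        self._handlers[(widget, gesture)] = callback

    def dispatch(self, widget, gesture, *args):
        """Called by the recognition layer; returns True if handled."""
        handler = self._handlers.get((widget, gesture))
        if handler:
            handler(*args)
            return True
        return False
```

Because every widget binds against the same named gesture vocabulary, a pinch means the same thing everywhere, which is exactly the consistency argument the library makes.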