We have seen Kinect used in a variety of clever ways over the last few months, but some students at the University of Konstanz have taken Kinect hacking to a whole new level of usefulness. Rather than use it to control lightning or to kick around some boxes using Garry’s Mod, they are using it to develop Navigational Aids for the Visually Impaired, or NAVI for short.
A helmet-mounted Kinect sensor is placed on the subject’s head and connected to a laptop, which is stored in the user’s backpack. The Kinect is interfaced using custom software that utilizes depth information to generate a virtual map of the environment. The computer sends information to an Arduino board, which then relays those signals to one of three waist-belt mounted LilyPad Arduinos. The LilyPads control three motors, which vibrate in order to alert the user to obstacles. The group even added voice notifications via specialized markers, allowing them to alert the user to the presence of doors and other specific items of note.
It really is a great use of the Kinect sensor; we can’t wait to see more projects like this in the future.
Stick around to see a quick video of NAVI in use.
[via Kinect-Hacks – thanks, Jared]
[youtube=http://www.youtube.com/watch?v=l6QY-eb6NoQ&w=470]
Good show. I would have gone belt-mounted myself to get more of the camera’s FOV across the space the person will be passing through – as it is, it looks like you lose a lot of near-field vision, which could lead to the user tripping over or colliding with objects. But the helmet mount was probably easiest to build and wear. The ARTK fiducials are a nice touch, giving an absolute reference within a known space for navigation purposes.
Well, I guess now we know why an accelerometer is in the Kinect (though I don’t think they use it in this project).
It’s a good thing the blind can’t see how ridiculous they look with a Kinect on their head.
I wonder if adding prisms to modify the FOV would work, so that the Kinect looked down and forward, or if it’d totally ruin how it works.
Voice, or something like that, translated one or two webcam images into sounds sent to the ears. Pretty cool setup – the Kinect hacks keep on coming.
While it’s not your mother tongue, you share the same alphabet and so should be able to get “University of Konstanz” right! :-)
Thumbs up for the t-shirt.
¡Viva Sankt Pauli!
Nice idea. Great use of a toy technology for something good.
@INquiRY: I think it was likely a fat finger on this guy’s part – notice how the “A” is right next to the “S” on a keyboard.
Nice use of the internet though bro! Where would we be without you??
From what I understand, the human brain in general and the visual cortex in particular are highly adaptable and ready to incorporate new data.
It seems a shame to take this enormous amount of useful data from the Kinect and reduce it down to three motors.
For example, this project uses the tongue to convey the visual information. People using the device can learn to “see” objects around them like doors, silverware, and elevator buttons.
http://www.scientificamerican.com/article.cfm?id=device-lets-blind-see-with-tongues
The estimated cost of this referenced device that uses the tongue is a substantial $10,000. I suspect that the technology in the Kinect could slash the cost of a device of this type by an order of magnitude or two. I also suspect that less sensitive but more discreet body parts could be used.
Holy crap, 3 Arduinos for 3 vibros… one tiny2313 can be used to drive 10 or even more with PWM. That’s why I hate Arduino…
Oh, and btw, I guess using range sensors on the belt coupled with vibros would be somewhat easier to build with the same effect – the Kinect is overkill here. 2D code recognition can be done with a cheap Android phone with a USB camera.
I did not see 3 Arduinos anywhere; from the video they have one Arduino Duemilanove with three sets of motors connected to output pins.
This may not be the coolest Kinect hack but it is the best.
Take the cameras and light source off the boards and put them on a medallion the user could wear around their neck. Rig up one of those fancy new dual-core Android phones to do all the computing and to provide GPS for outdoor use.
@CalcProgrammer1: Read carefully: The computer sends information to an Arduino board, which then relays those signals to one of three waist-belt mounted LilyPad Arduinos. The LilyPads control three motors, which vibrate in order to alert the user to obstacles.
This is insane overkill, damn it. Much like the way Arduino fans like to do it.
I just love the “debugging backpack”!
@Necromant, others.
The description says:
“three waist-belt mounted LilyPad Arduinos”
Via and source say:
“three pairs of Arduino LilyPad vibration motors”
Based on the pictures, the wires, and the size of the devices, I expect it means:
“three pairs of LilyPad vibration motors”
http://www.sparkfun.com/products/8468
There’s nowhere there to hide 3 lilypads, and no wires going to any.
All 3 sites need fixing though….
And by “hide 3 lilypads” I mean “hide 3 LilyPad Arduinos” of course, Doh!
@Jonathan. Hm… Well, that’s better, but I’d still be for using a tiny2313, though. Or throw away the Kinect & laptop and use proximity sensors and vibros with a mega8 to control everything. AFAIK a project like that was already featured here.
Would it be possible to create a sound field based on the Kinect information?
I really like the idea of this build. Very bulky beginnings, but I think it’s a good idea. If they wrote the program as an app for a smartphone and had a Bluetooth webcam and headset, it would bring down the size of the project – if they can get the cam to connect with the phone, that is. Great job though.
I really hope the NAVI gives audio directions, like “HEY”, “LISTEN” or “WATCH OUT”.
I love hacks like this.
No mention of Geordi La Forge yet. I know it is a number of generations from him, but, still. :)
@Rick
“I also suspect that less sensitive but more discreet body parts could be used.”
Eww…
What if two blind people meet? The Kinect is projecting an image; would it get scrambled by the other Kinect?
And about the overkill: as it is a proof of concept, I don’t really mind the bloat. This can easily be cut out in a later version.
Wire that mother straight into their brain!!!
I know that visual neural prostheses have been attempted and I’ve heard they aren’t all that good. I wonder if it would be easier to pipe in something like the Kinect depth map instead of a full camera image.
@birdmun: Actually, we’re much closer to that than you think. High-end medical researchers (who, unlike this hack, can implant devices directly into a person’s head) have been able to interface digital camera technology with the optic nerve at the back of the eye. The resolution is pretty low (originally just enough to tell light from dark, but newer generations can, supposedly, be used to read oversized text) but it has been able to give vision to people that are completely blind.