Giving “sight” To The Visually Impaired With Kinect

NAVI

We have seen Kinect used in a variety of clever ways over the last few months, but some students at the [University of Konstanz] have taken Kinect hacking to a whole new level of usefulness. Rather than use it to control lightning or to kick around some boxes using Garry’s Mod, they are using it to develop Navigational Aids for the Visually Impaired, or NAVI for short.

A helmet-mounted Kinect sensor sits on the subject’s head and is connected to a laptop stored in the user’s backpack. The Kinect is interfaced using custom software that uses depth information to generate a virtual map of the environment. The computer sends information to an Arduino board, which then relays those signals to one of three waist-belt mounted LilyPad Arduinos. The LilyPads control three motors, which vibrate in order to alert the user to obstacles. The group even added voice notifications via specialized markers, allowing them to alert the user to the presence of doors and other specific items of note.
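
The NAVI source hasn’t been published here, but the basic pipeline is simple enough to sketch. The snippet below is only a minimal Python illustration, assuming libfreenect’s Python bindings for the depth stream and pyserial for the Arduino link (the port name and depth thresholds are made up), of how a depth frame could be boiled down to three vibration intensities:

    # Hypothetical sketch only; not NAVI's actual software.
    import freenect              # libfreenect Python bindings (assumed available)
    import numpy as np
    import serial                # pyserial

    PORT = '/dev/ttyUSB0'        # assumed serial port of the Arduino
    NEAR, FAR = 500, 900         # raw 11-bit depth thresholds, purely illustrative

    link = serial.Serial(PORT, 115200)

    def intensity(zone):
        """Map the nearest valid reading in a depth zone to a 0-255 motor level."""
        valid = zone[zone < 2047]                     # 2047 marks 'no reading'
        if valid.size == 0:
            return 0
        nearest = int(np.clip(valid.min(), NEAR, FAR))
        return 255 * (FAR - nearest) // (FAR - NEAR)  # closer -> stronger buzz

    while True:
        depth, _ = freenect.sync_get_depth()          # 480x640 array of raw depth values
        left, mid, right = np.array_split(depth, 3, axis=1)  # three vertical zones
        link.write(bytes(intensity(z) for z in (left, mid, right)))

On the Arduino end, those three bytes would simply become PWM duty cycles for the motor drivers.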

It really is a great use of the Kinect sensor; we can’t wait to see more projects like this in the future.

Stick around to see a quick video of NAVI in use.

[via Kinect-Hacks – thanks, Jared]

[youtube=http://www.youtube.com/watch?v=l6QY-eb6NoQ&w=470]

27 thoughts on “Giving “sight” To The Visually Impaired With Kinect”

  1. Good show. I would have gone belt-mounted myself to get more of the camera’s FOV across the space the person will be passing through – it looks like you lose a lot of near-field vision with the helmet mount, which could lead to the user tripping over or colliding with objects. But the helmet mount was probably the easiest to build and wear. The ARTK fiducials are a nice touch, giving an absolute reference within a known space for navigation purposes.

  2. Nice idea. Great use of a toy technology for something good.

    @INquiRY: I think it was likely a fat finger on this guy’s part – notice how the “A” is right next to the “S” on a keyboard.

    Nice use of the internet though bro! Where would we be without you??

  3. From what I understand, the human brain in general and the visual cortex in particular are highly adaptable and ready to incorporate new data.

    It seems a shame to take this enormous amount of useful data from the Kinect and reduce it down to three motors.

    For example, this project uses the tongue to convey visual information. People using the device can learn to “see” objects around them like doors, silverware, and elevator buttons.

    http://www.scientificamerican.com/article.cfm?id=device-lets-blind-see-with-tongues

    The estimated cost of the referenced tongue-based device is a substantial $10,000. I suspect that the technology in the Kinect could slash the cost of a device of this type by an order of magnitude or two. I also suspect that less sensitive but more discreet body parts could be used; a rough sketch of the idea follows below.
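
    To make that concrete, here’s a hypothetical Python sketch (libfreenect’s Python bindings assumed; the grid size and scaling are invented, and this has nothing to do with NAVI’s actual code) of squeezing a depth frame into a coarse “tactile image” instead of just three values:

        # Hypothetical: downsample a Kinect depth frame to a 20x15 grid of
        # 0-255 stimulation levels for a tongue display or tactor array.
        import freenect
        import numpy as np

        GRID_W, GRID_H = 20, 15          # invented resolution for illustration

        def tactile_frame():
            depth, _ = freenect.sync_get_depth()              # 480x640 raw readings
            near = np.where(depth >= 2047, 0, 2047 - depth)   # invalid -> 0, nearer -> larger
            cells = near.reshape(GRID_H, 480 // GRID_H, GRID_W, 640 // GRID_W)
            grid = cells.max(axis=(1, 3))                     # nearest obstacle per cell
            return (grid.astype(np.int32) * 255 // 2047).astype(np.uint8)

    That’s 300 crude “pixels” of touch from the same sensor instead of three motors.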

  4. Oh, and btw, I guess range sensors on the belt coupled with vibros would be somewhat easier to build with the same effect; the Kinect is overkill here. 2D code recognition can be done with a cheap Android phone with a USB camera.

  5. This may not be the coolest Kinect hack but it is the best.

    Take the cameras and light source off the boards and put them on a medallion the user could wear around their neck. Rig up one of those fancy new dual-core Android phones to do all the computing and to provide GPS for outdoor use.

  6. @CalcProgrammer1: Read carefully: “The computer sends information to an Arduino board, which then relays those signals to one of three waist-belt mounted LilyPad Arduinos. The LilyPads control three motors, which vibrate in order to alert the user to obstacles.”

    This is insane overkill, damn it. Much like the way arduinoers like to do it.

  7. @Necromant, others.

    The description says:
    “three waist-belt mounted LilyPad Arduinos”
    Via and source say:
    “three pairs of Arduino LilyPad vibration motors”

    Based on the pictures, the wires, and the size of the devices, I expect it means:
    “three pairs of LilyPad vibration motors”
    http://www.sparkfun.com/products/8468

    There’s nowhere there to hide three LilyPads, and no wires going to any.
    All three sites need fixing, though…

  8. @Jonathan. Hm… Well, that’s better, but I’d still be for using a tiny2313, though. Or throw away the Kinect & laptop and use proximity sensors and vibros with a mega8 to control everything. AFAIK a project like that was already featured here.

  9. I really like the idea of this build. Very bulky beginnings, but I think it’s a good idea. If they wrote the program as an app for a smartphone and used a Bluetooth webcam and headset, it would bring down the size of the project, provided they can get the cam to connect with the phone. Great job though.

  10. What if two blind people meet? The Kinect is projecting an image; would it get scrambled by the other Kinect?

    And about the overkill: as it is a proof of concept, I don’t really mind the bloat. That can easily be cut out in a later version.

  11. Wire that mother straight into their brain!!!
    I know that visual neural prostheses have been attempted and I’ve heard they aren’t all that good. I wonder if it would be easier to pipe in something like the Kinect depth map instead of a full camera image.

  12. @birdmun: Actually, we’re much closer to that than you think. High-end medical researchers (who, unlike this hack, can implant devices directly into a person’s head) have been able to interface digital camera technology with the optic nerve at the back of the eye. The resolution is pretty low (originally just enough to tell light from dark, but newer generations can, supposedly, be used to read oversized text), but it has been able to give vision to people who are completely blind.
