Lighting The Way For The Visually Impaired

The latest creation from Bengali roboticist [nabilphysics] might sound familiar. His laser-augmented glove gives users the ability to detect objects horizontally in front of them, much like a cane or pole is used by the visually impaired to navigate through a physical space.

As a stand-in for the physical cane, he uses the VL53L0X time-of-flight (ToF) sensor, which measures the time taken for a pulse from its laser source to bounce off an object and return to the sensor. These are much more accurate than IR distance sensors and have a much finer focus than ultrasonic sensors, giving excellent directionality.

While the sensors can succumb to interference from background light or from other time-of-flight sensors, their main advantages are speed (a single shot computes the distances within a scene) and a simple, direct distance calculation. In contrast to stereo vision, which requires complex correlation algorithms, extracting distance from a time-of-flight measurement is entirely direct and requires very little processing power.

The glove delivers haptic feedback to the user to indicate when an object is in their way. The feedback is controlled by an Arduino Pro Mini, powered by a LiPo battery. The code is uploaded to the Arduino through an FTDI adapter, and works by taking continuous readings from the time-of-flight sensor; if the object in front is within 450 millimeters of the glove, it triggers the vibration motor to alert the user of the object’s presence.

Since the glove used for the project is a bicycle glove, the form factor is straightforward — the Arduino, motor, battery, and switch are all located inside a plastic box on the top of the glove, while the time-of-flight sensor sticks out to make continuous measurements when the glove is switched on.

In general, the setup is fairly simple, but the choice of a time-of-flight sensor over an IR or sonar sensor is interesting. In the broader world of sensing, LIDAR is already the de facto choice for autonomous vehicles and robots that rely on distance measurement. That three-dimensional data wouldn’t be of much use here, though, and a single time-of-flight sensor works without moving mechanical parts, since it doesn’t rely on the point-by-point laser-beam scan that LIDAR systems use.

8 thoughts on “Lighting The Way For The Visually Impaired”

    1. yes, sunlight is really a killer for these ToF sensors for sure – would need a much stronger IR flash source to get this solved. The VL53L1X has a much bigger range but is even more sensitive to sunlight, in my experience.

  1. There is a device that gives instant haptic feedback for a blind or visually impaired person, doesn’t require recharging, doesn’t make the person look like a lame Borg cosplayer at a Star Trek convention, works in any conditions and is very versatile in general. It’s called a cane. And it works like a charm. I know, because I learned to use one when I was 8 and my mother forced me to take a Being Blind 101 course at my school. If one needs an upgrade and doesn’t want his/her cane to look like a cross between a metal detector and an alien probe, one can pick a properly trained dog. Or other animal. My blind brother uses an assist cat.

    Every HaD prize contest has at least one of those stupid “better than cane” devices that never go anywhere. First they were made with ultrasound and IR sensors, now with ToF sensors. I expect that in a few more years someone will glue a radar to a bicycle helmet and call it a “better cane” device for the blind. Lazy engineering at its best…

    I’m visually impaired. I can’t make my perfect assistive device because I’m not skilled enough and I can’t solder very small components (no microscope). My idea is an augmented reality device that would enhance my vision. It should look sleek and sexy, maybe like a VR headset. No Arduinos or RPis hanging on the frame of ski goggles. I’m no Borg cosplayer. Just put a screen over my working eye (right) and camera(s) and sensors over the dead one, and connect them via cables to a main unit carried in a pocket or clipped to a belt.
    These are functions I’d like to see in such a device:
    – highlighting of edges and every obstacle on my path;
    – ability to zoom at least 10x;
    – detection and highlighting of all signs, street names, bus numbers or prices in stores with quick zoom option and optionally OCR;
    – auto-zoom on traffic lights in form of PIP;
    – peripheral vision alarm that detects fast-moving objects from the sides;
    – face recognition with database of people I know;
    – NVG function – I’m almost completely blind after sunset;
    – adjustable brightness and contrast, AGC;
    – path recorder and navigation.
    Yes, this is challenge for the next HaD prize.

    1. Sounds to me like an eSight device with a few add-ons.
      I also experimented with cane substitution devices, as my father did not like to start using one. But even with only a 3-beam lidar and corresponding 3-frequency audio feedback, it is very hard and too frustrating to navigate with it.

      1. I don’t think eSight is available in my country. And did you see the price? Waaay too expensive for my pocket. And the National Health Fund won’t pay for it either. They didn’t want to pay the equivalent of 100 bucks for magnifying glasses…

        1. Yes, the price tag is heavy, and on top of that the feature list requested here won’t bring that price down. And even if there were an open source solution, the price tag would still be significant given the sensor array needed. Unfortunately, it is always the same for this kind of niche product.
