Over the years, we’ve seen plenty of projects that use ultrasonic or time-of-flight sensors as object detection methods for the visually impaired. Ultrasonic sensors detect objects like sonar — they send sound pulses and measure the time it takes for the signal to bounce off the object and come back. Time-of-flight sensors do essentially the same thing, but with infrared light. In either case, the notifications often come as haptic feedback on the wrist or head or whatever limb the ultrasonic module is attached to. We often wonder why there aren’t commercially made shoes that do this, but it turns out there are, and they’re about to get even better.
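The round-trip timing both sensor types rely on boils down to one line of math: distance is half the echo time multiplied by the speed of the signal. Here's a minimal Python sketch of the idea (purely illustrative — real modules like the common HC-SR04 report the echo time as a pulse width in microseconds, and firmware would typically run on a microcontroller):

```python
SPEED_OF_SOUND_M_S = 343.0  # in dry air at roughly 20 °C


def echo_to_distance(echo_time_s):
    """Convert a round-trip ultrasonic echo time (seconds) to a one-way
    distance in meters. The pulse travels out and back, so halve the path."""
    return SPEED_OF_SOUND_M_S * echo_time_s / 2


# A ~23.3 ms round trip works out to roughly 4 m, the detection range
# quoted for the sensor-equipped shoes below.
print(echo_to_distance(0.0233))
```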
Today, Tec-Innovation makes shoes with ultrasonic sensors on the toes that can detect objects up to four meters away. The wearer is notified of obstacles through haptic feedback in the shoes as well as an audible phone notification via Bluetooth. The company teamed up with the Graz University of Technology in Austria to give the shoes robot vision that provides even better detail.
Ultrasonic is a great help, but it can’t detect the topography of the obstacle and tell a pothole from a rock from a wall. But if you have a camera on both feet, you can use the data to determine obstacle types and notify the user accordingly. These new models will still have the ultrasonic sensors to do the initial object detection, and use the cameras for analysis.
Whenever they do come out, the sensors will all be connected through the app, which paves the way for crowdsourced obstacle maps of various cities. The shoes will also be quite expensive. Can you do the same thing for less? Consider the gauntlet thrown!
One of the most fascinating examples of the human brain’s plasticity is in its ability to map one sense to another. Some people, for example, report being able to see sound, giving them a supernatural ability to distinguish tones. This effect has also been observed in the visually impaired. There are experiments where grids of electrodes were placed on the tongue or mechanical actuators were placed on the lower back. The signals from a camera were fed into these grids and translated into shocks or movement. The interesting effect is that the users quickly learned to distinguish objects from this low-resolution input. As they continued to use these devices, they actually reported seeing the objects as their visual centers took over interpreting the input.
Most of these projects are quite bulky and the usual mess you’d expect from a university laboratory. [Jakob]’s project trends toward a much more user-friendly product. A grid of haptic actuators is placed on the back of the user’s hand, along with a depth camera. Not only is it somewhat unobtrusive, the back of the hand is very sensitive to touch, and the camera is well placed for a look around at the world.
[Jakob] admits that, as an interaction designer, his hardware hacking skills are still growing. To us, the polish and thought that went into this is already quite impressive, so it’s no wonder he’s one of the Hackaday Prize Finalists.
The World Health Organization estimates that around 90% of the 285 million or so visually impaired people worldwide live in low-income situations with little or no access to assistive technology. For his Hackaday Prize entry, [Tiendo] has created a simple and easily reproducible way-finding device for people with reduced vision: a bracelet that detects nearby objects and alerts the wearer to them.
It does its job using an ultrasonic distance sensor and an Arduino Pro Mini. The bracelet has two feedback modes: audio and haptic. In audio mode, the bracelet will begin to beep when an object is within 2.5 meters. And it behaves the way you’d expect—get closer to the object and the beeping increases; back away and it decreases. Haptic mode involves two tiny vibrating disk motors attached to small PVC cuffs that fit on the thumb and pinky. These motors will buzz differently based on the person’s proximity to a given object. If an object is 1 to 2.5 meters away, the pinky motor will vibrate. Closer than that, and it switches over to the thumb motor.
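The haptic mode described above is a simple threshold ladder, which makes it easy to sketch. The actual firmware runs on an Arduino Pro Mini; this Python version is just an illustration of the decision logic, using the distance bands given in the write-up:

```python
def haptic_feedback(distance_m):
    """Pick which vibration motor to drive, following the bracelet's
    reported thresholds: pinky motor for 1 to 2.5 m, thumb motor inside
    1 m, nothing beyond 2.5 m. Returns the motor name or None."""
    if distance_m > 2.5:
        return None      # out of range: stay quiet
    if distance_m >= 1.0:
        return "pinky"   # far warning
    return "thumb"       # close warning
```

In the real device the same measurement would also drive the beep rate in audio mode, with the repetition interval shrinking as the distance does.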
To add to the thriftiness of this project, [Tiendo] re-used other objects where he could. The base of the bracelet is a cuff made from PVC. The nylon chin strap and plastic buckle from a broken bike helmet make it adjustable to fit any wrist. To keep the PVC cuff from chafing, he slipped small pieces from an old pair of socks on to the sides.
It’s easy to see why this project is a finalist in our Best Product contest. It’s a simple, low-cost assistive device made from readily available and recycled materials, and it can be built by anyone who knows a little bit about electronics. Add in the fact that it’s lightweight and frees up both hands, and you have a great product that can help a lot of people. Watch it beep and buzz after the break.
What would you do if you suddenly went blind and could never again see the sun set? How would you again experience this often breathtaking phenomenon? One answer is music, orchestrated by the sun and the Weather Warlock.
Built by the musician [Quintron] (builder and inventor of insane electronic instruments), the Weather Warlock is an analog synthesizer controlled by — you guessed it — the weather. It translates temperature, moisture, wind and sunlight into tones and harmonics with an E major root chord. UV, light, moisture, and temperature sensors combined with an anemometer set up outside feed the weather data to a synthesizer that has [Quintron] dialing knobs and toggling switches. The Weather Warlock streams 24/7 to the website weatherfortheblind.org so that the visually impaired are able to tune in and experience the joy of sunrise and sunset through music.
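The Weather Warlock is an analog instrument, so there's no code to show, but the core idea — a sensor reading selecting harmonics rooted on E major — can be sketched digitally. Everything here is an assumption for illustration (the tone set, the mapping, the sensor range), not [Quintron]'s actual circuit:

```python
# Equal-temperament frequencies (Hz) for tones of an E major chord,
# from the E2 root upward: E2, G#2, B2, E3, G#3, B3.
E_MAJOR_TONES = [82.41, 103.83, 123.47, 164.81, 207.65, 246.94]


def sensor_to_tone(reading, lo, hi):
    """Map a raw sensor reading onto one of the chord tones.

    Clamps the reading into [lo, hi], then scales it linearly across the
    tone list -- one way a control voltage could pick harmonics. Assumes
    hi > lo."""
    span = max(min(reading, hi), lo) - lo
    index = round(span / (hi - lo) * (len(E_MAJOR_TONES) - 1))
    return E_MAJOR_TONES[index]
```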
One of the hardest things in life is watching your parents grow old. As their senses fail, the simplest things become difficult or even impossible for them to do.
[kjepper]’s mom is slowly losing her sight. As a result, it’s hard for her to see things like the readout on the caller ID. Sure, there are plenty of units and phones she could get that have text-to-speech capabilities, but the audio on those things is usually pretty garbled. And yes, a smartphone can natively display a picture of the person calling, but [kjepper]’s mom isn’t technologically savvy and doesn’t need everything else that comes with a smartphone. What she needs is a really simple interface which makes it clear who’s calling.
Initially, [kjepper] tried to capture the caller ID data using only a USB modem. But for whatever reason, it didn’t work until he added an FSK/DTMF converter between the modem and the Pi. He wrote some Node.js code to communicate with the Pi and send the information to the screen, which can display up to four calls at once. To make a mom-friendly interface, he stripped an old optical mouse down to the scroll wheel and encased it in wood. Mom can spin the wheel to wake the system up from standby, and click it to mark the calls as read. Now whenever Aunt Judy calls the landline, it’s immediately obvious that it’s her and not some telemarketer.
The “absorbed device user” meme, like someone following Google Maps on a smartphone so closely that they walk out into traffic, is becoming all too common. Not only can an interface that requires face time be a hazard to your health in traffic, it’s also not particularly useful to the visually impaired. Haptic interfaces can help the sighted and the visually impaired alike, but a smartphone really only has one haptic trick: vibration. But a Yale engineer has developed a 3D-printed shape-shifting navigation tool that could be a haptics game changer.
Dubbed the Animotus by inventor [Ad Spiers], the device is a hand-held cube split into two layers. The upper layer can swivel left or right and extend or retract, giving the user both tactile and visual clues as to which direction to walk and how far to the goal. For a field test of the device, [Ad] teamed up with a London theater group in an interactive production of the play “Flatland”, the bulk of which was staged in an old church in total darkness. As you can see in the night-vision video after the break, audience members wearing tracking devices were each given an Animotus to allow them to navigate through the interactive sets. The tracking data indicated users quickly adapted to navigation in the dark while using the Animotus, and some became so attached to their device that they were upset by the ending of the play, which involved its mock confiscation and destruction.
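The two degrees of freedom described above — swivel for direction, extension for distance — suggest a simple control mapping. The geometry below is invented for illustration (the article doesn't give the Animotus's actual travel limits or scaling), but it shows the shape of the computation:

```python
def animotus_pose(user_heading_deg, bearing_to_goal_deg, distance_m,
                  max_swivel_deg=30.0, max_extension_mm=15.0,
                  full_scale_m=10.0):
    """Compute a (swivel, extension) pose for a two-layer haptic cube.

    Swivel tracks the signed heading error (clamped to the mechanism's
    travel); extension grows with remaining distance to the goal. All
    limits here are hypothetical placeholders."""
    # Signed heading error wrapped into [-180, 180).
    error = (bearing_to_goal_deg - user_heading_deg + 180.0) % 360.0 - 180.0
    swivel = max(-max_swivel_deg, min(max_swivel_deg, error))
    extension = min(distance_m, full_scale_m) / full_scale_m * max_extension_mm
    return swivel, extension
```

With a mapping like this, a user holding the cube feels the top layer twist toward the goal and shrink flush as they close in on it.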
Performing art applications aside, there’s plenty of potential for haptics with more than one degree of freedom. Imagine a Bluetooth interface to the aforementioned Google Maps, or an electronic seeing-eye dog that guides a user around obstacles using an Animotus and a camera. There’s still plenty of utility in traditional haptics, though, as this Hackaday Prize semi-finalist shows.
[Roy Shilkrot] and his fellow researchers at the MIT Media Lab have developed the FingerReader, a wearable device that aids in reading text. Worn on the index finger, it receives input from print or digital text and outputs spoken words – and it does this on-the-go. The FingerReader consists of a camera and sensors that detect the text. A series of algorithms the researchers created are used along with character recognition software to create the resulting audio feedback.
There is a lot of haptic feedback built into the FingerReader. It was designed with the visually impaired as the primary user for times when Braille is not practical or simply unavailable. The FingerReader requires the wearer to make physical contact with the tip of their index finger on the print or digital screen, tracing the line. As the user does so, the FingerReader is busy calculating where lines of text begin and end, taking pictures of the words being traced, and converting them to text and then to spoken word. As the user reaches the end of a line of text or begins a new line, it vibrates to let them know. If a user’s finger begins to stray, the FingerReader can vibrate from different areas using two motors, along with an audible tone, to alert the user and help them find their place.
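The stay-on-the-line cue is essentially a dead-band controller: no feedback while the fingertip tracks the text baseline, and a motor-specific buzz once it drifts past a tolerance. A toy version of that decision, with a made-up pixel threshold and motor names that are assumptions rather than the MIT team's actual values:

```python
def drift_feedback(finger_y, baseline_y, tolerance_px=8):
    """Decide which of two motors should buzz as a finger traces a line
    of text. Image y grows downward, so a negative offset means the
    finger has drifted above the baseline. Returns a motor name or None
    when the finger is within tolerance."""
    offset = finger_y - baseline_y
    if abs(offset) <= tolerance_px:
        return None  # on the line: no correction needed
    return "upper_motor" if offset < 0 else "lower_motor"
```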
The current prototype needs to be connected to a laptop, but the researchers are hoping to create a version that only needs a smartphone or tablet. The videos below show a demo of the FingerReader. For a proof of concept, we are very impressed. The FingerReader reads text of various fonts and sizes without a problem. While the project was designed primarily for the blind or visually impaired, the researchers acknowledge that it could be a great help to people with reading disabilities or as a learning aid for English. It could make a great on-the-go translator, too. We hope that [Roy] and his team continue working on the FingerReader. Along with the Lorm Glove, it has the potential to make a difference in many people’s lives. Considering our own lousy eyesight and family’s medical history, we’ll probably need wearable tech like this in thirty years!