The Robot That Lends The Deaf-Blind Community A Hand

The loss of one’s sense of hearing or vision is likely to be devastating in the way that it impacts daily life. Fortunately, many workarounds exist using one’s remaining senses — such as sign language — but what if not only your hearing is gone, but you are also blind? Here, too, a workaround exists in the form of tactile signing, which is akin to visual sign language except that it uses one’s sense of touch. This generally requires someone who knows tactile sign language to translate from spoken or written forms into tactile signing. Yet what if you’re deaf-blind and without human assistance? This is where a new robotic system could conceivably fill in.

The Tatum T1 in use, with a more human-like skin covering the robot. (Credit: Tatum Robotics)

Developed by Tatum Robotics, the Tatum T1 is a robotic hand and associated software intended to provide this translation function. It takes in natural language information, whether spoken, written, or in some digital format, and uses a number of translation steps to create tactile sign language as output, whether in the ASL format, the BANZSL alphabet, or another. These tactile signs are then expressed using the robotic hand, and a connected arm as needed, ideally using ASL gloss to convey as much information as quickly as possible, not unlike with visual ASL.
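Tatum Robotics hasn’t published its translation internals, but the flow described above (natural language in, gloss tokens out, with fingerspelling as a fallback, then handshape commands to the hand) can be sketched in a few lines. The gloss lexicon and the printed handshape commands below are made-up placeholders for illustration, not Tatum’s actual software.

```python
# Minimal sketch of a text-to-tactile-sign pipeline, loosely following the
# steps described above. The lexicon and handshape commands are hypothetical.
import re

# Tiny example mapping of English phrases to ASL gloss tokens; a real system
# would use a proper translation stage here.
GLOSS_LEXICON = {
    "how are you": ["HOW", "YOU"],
    "my name is": ["ME", "NAME"],
    "thank you": ["THANK-YOU"],
}

def english_to_gloss(sentence: str) -> list[str]:
    """Map an English sentence to gloss tokens, falling back to
    fingerspelling (marked with '#') for anything not in the lexicon."""
    text = re.sub(r"[^a-z ]", "", sentence.lower()).strip()
    for phrase, gloss in GLOSS_LEXICON.items():
        if text.startswith(phrase):
            rest = text[len(phrase):].split()
            return gloss + [f"#{w.upper()}" for w in rest]
    return [f"#{w.upper()}" for w in text.split()]

def gloss_to_handshapes(tokens: list[str]) -> list[str]:
    """Expand gloss tokens into individual handshape commands; fingerspelled
    tokens become one handshape per letter."""
    shapes: list[str] = []
    for token in tokens:
        if token.startswith("#"):
            shapes.extend(token[1:])   # one letter at a time
        else:
            shapes.append(token)       # a whole lexical sign
    return shapes

if __name__ == "__main__":
    for cmd in gloss_to_handshapes(english_to_gloss("How are you, Sam?")):
        print("send to hand:", cmd)    # a real system would drive servos here
```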

This also answers the question of why one would not just use a simple braille cell on a hand, as the signing speed is essential to keep up with real-time communications, unlike when, say, reading a book or email. A robotic companion like this could provide deaf-blind individuals with a critical bridge to the world around them. Currently the Tatum T1 is still in the testing phase, but hopefully before long it may be another tool for the tens of thousands of deaf-blind people in the US today.

Learn Sign Language Using Machine Vision

Learning a new language is a great way to exercise the mind and learn about different cultures, and having a native speaker around improves the learning experience. Without one, it’s still possible to learn via videos, books, and software. The task gets much more complicated, though, when the language you’re trying to learn isn’t spoken at all, as with American Sign Language. This project allows users to learn the ASL alphabet with the help of computer vision and some machine learning algorithms.

The build uses a MobileNetV2-based computer vision model trained on each sign in the ASL alphabet. A sign is shown to the user on a screen, and the user needs to demonstrate that sign back to the computer in order to progress. To do this, OpenCV running on a Raspberry Pi with a PiCamera analyzes video frames of the user in real time. The user is shown pictures of the correct sign and is rewarded when the correct sign is made.
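The project’s own code isn’t reproduced here, but the recognition loop it describes is straightforward to sketch: grab camera frames with OpenCV and score them against a MobileNetV2-style classifier. The model file name, class list, and confidence threshold below are assumptions, and the real build wraps this loop in a learning game.

```python
# Minimal sketch of the recognition loop described above. The model file and
# class list are hypothetical; the real project trains its own classifier.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

CLASSES = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # one class per letter
model = load_model("asl_alphabet.h5")   # assumed MobileNetV2-based model file

cap = cv2.VideoCapture(0)   # PiCamera exposed as a V4L2 device, or any webcam
target = "A"                # the letter the learner is asked to sign

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # MobileNetV2 expects 224x224 RGB input scaled to [-1, 1].
    rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    batch = (rgb.astype(np.float32) / 127.5) - 1.0
    probs = model.predict(batch[np.newaxis], verbose=0)[0]
    guess = CLASSES[int(np.argmax(probs))]

    cv2.putText(frame, f"Show: {target}  Seen: {guess}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("ASL trainer", frame)
    if guess == target and probs.max() > 0.9:
        print("Correct!")   # the real project rewards the learner here
        break
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```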

While this only works for alphabet signs in ASL currently, the team at the University of Glasgow that built this project is planning on expanding it to include other signs as well. We have seen other machines built to teach ASL in the past, like this one which relies on a specialized glove rather than computer vision.

Continue reading “Learn Sign Language Using Machine Vision”

Talking To Alexa With Sign Language

As William Gibson once noted, the future is already here, it just isn’t equally distributed. That’s especially true for those of us with disabilities. [Abishek Singh] wanted to do something about that, so he created a way for the hearing-impaired to use Amazon’s Alexa voice service. He did this using a TensorFlow deep learning network to convert American Sign Language (ASL) to speech and a speech-to-text converter to interpret the response. This all runs on a laptop, so it should work with any voice interface with a bit of tweaking. In particular, [Abishek] seems to have created a custom bit of ASL to trigger Alexa. Perhaps the next step would be to use a robotic arm to create the output directly in ASL and cut out the Echo device completely? [Abishek] has not released the code for this project yet, but he has released the code for other projects, such as Peeqo, the robot that responds with GIFs.
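Since [Abishek] hasn’t released the code, here is a rough sketch of how the round trip could be wired up with off-the-shelf libraries: the recognized sign is spoken aloud to the Echo with a text-to-speech engine, and Alexa’s spoken reply is transcribed back to text for display. The recognize_sign() stub and the choice of pyttsx3 and SpeechRecognition are assumptions, not his implementation.

```python
# Minimal sketch of the ASL-to-Alexa round trip described above, with the
# sign recognizer stubbed out. Library choices are assumptions.
import pyttsx3
import speech_recognition as sr

def recognize_sign() -> str:
    """Placeholder for the camera + TensorFlow sign classifier."""
    return "what is the weather today"

tts = pyttsx3.init()
recognizer = sr.Recognizer()

# Speak the recognized phrase so the Echo's microphone can hear it.
phrase = recognize_sign()
tts.say(f"Alexa, {phrase}")
tts.runAndWait()

# Listen for Alexa's spoken answer and turn it back into text.
with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic, duration=1)
    audio = recognizer.listen(mic, timeout=10, phrase_time_limit=15)

try:
    reply = recognizer.recognize_google(audio)   # free web speech API
    print("Alexa said:", reply)                  # shown on screen for the user
except sr.UnknownValueError:
    print("Could not understand the response")
```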

[Via FlowingData and [Belg4mit]]

Continue reading “Talking To Alexa With Sign Language”

Sonar In Your Hand

Sonar measures distance by emitting a sound and clocking how long it takes the sound to travel. This works in any medium capable of transmitting sound, such as water, air, or, in the case of FingerPing, flesh and bone. FingerPing is a project at Georgia Tech headed by [Cheng Zhang] which measures hand position by sending sound waves through the thumb and measuring the arrival time at four different receivers. These readings tell which bones the sound traveled through and allow the device to figure out where the thumb is touching. The recognizable hand positions include the American Sign Language signs for one through ten.
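To get a feel for the two ideas at play, the sketch below shows classic time-of-flight ranging plus a nearest-neighbor match of a four-receiver arrival-time profile against stored pose templates. The template numbers are invented for illustration and are not FingerPing’s calibration data.

```python
# Minimal sketch: time-of-flight ranging, and matching a four-receiver
# arrival-time profile to stored hand-pose templates. Values are made up.
import numpy as np

SPEED_OF_SOUND_AIR = 343.0  # m/s; through tissue and bone it is much faster

def echo_distance(round_trip_s: float, speed: float = SPEED_OF_SOUND_AIR) -> float:
    """Classic sonar: distance is half the round-trip travel time times speed."""
    return speed * round_trip_s / 2.0

# Hypothetical arrival-time templates (microseconds at four receivers),
# one per hand pose; a real system would record these during calibration.
TEMPLATES = {
    "ASL 1": np.array([42.0, 55.0, 61.0, 70.0]),
    "ASL 2": np.array([40.0, 50.0, 66.0, 72.0]),
    "ASL 3": np.array([45.0, 48.0, 58.0, 75.0]),
}

def classify_pose(measured_us: np.ndarray) -> str:
    """Pick the stored pose whose receiver profile is closest (L2 distance)."""
    return min(TEMPLATES, key=lambda p: np.linalg.norm(TEMPLATES[p] - measured_us))

if __name__ == "__main__":
    print(f"{echo_distance(0.01):.2f} m to a wall with a 10 ms round trip")
    print("Guessed pose:", classify_pose(np.array([41.0, 51.0, 65.0, 71.0])))
```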

From the perspective of discreetly entering the numbers one through ten on a mobile device, this opens up a lot of possibilities for computer input while remaining pretty unobtrusive. We have seen prototypes which are more capable of reading gestures, but they would also draw attention if you wore them on a bus. It is a classic trade-off between convenience and function, but this type of sensing is unique and could be combined with other biosignals for finer results.

Continue reading “Sonar In Your Hand”

3D Printed Robotic Arms For Sign Language

A team of students in Antwerp, Belgium is responsible for Project Aslan, which is exploring the feasibility of using 3D printed robotic arms for assisting with and translating sign language. The idea came from the fact that sign language translators are few and far between, and it’s a task that robots may be able to help with. In addition to translation, robots may be able to assist with teaching sign language as well.

The project set out to use 3D printing and other technology to explore whether low-cost robotic signing could be of any use. So far the team has an arm that can convert text into finger spelling and counting. It’s an interesting use for a robotic arm; signing is an application for which range of motion is important, but there is no real need to carry or move any payloads whatsoever.
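Converting text to fingerspelling largely boils down to a lookup table of handshapes played back one letter at a time. The sketch below uses made-up joint angles and simply prints the commands; a real build like Project Aslan would push calibrated poses of its own to the finger servo controllers.

```python
# Minimal sketch of text-to-fingerspelling: each letter maps to a set of
# finger joint angles, played back one handshape at a time. Angles are
# illustrative placeholders, not Project Aslan's tuning.
import time

# Degrees of curl for (thumb, index, middle, ring, pinky);
# 0 = straight, 180 = fully curled.
HANDSHAPES = {
    "A": (60, 180, 180, 180, 180),
    "B": (150, 0, 0, 0, 0),
    "L": (0, 0, 180, 180, 180),
    "S": (170, 180, 180, 180, 180),
}

def spell(word: str, dwell_s: float = 0.8) -> None:
    """Step through a word, holding each handshape long enough to be read."""
    for letter in word.upper():
        angles = HANDSHAPES.get(letter)
        if angles is None:
            print(f"no handshape stored for '{letter}', skipping")
            continue
        # A real build would send these angles to the finger servo controllers.
        print(f"{letter}: thumb/index/middle/ring/pinky -> {angles}")
        time.sleep(dwell_s)

spell("ASL")
```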

Closeup of hand actuators and design.

A single articulated hand is a good proof of concept, and these early results show some promise and potential, but there is still a long way to go. Sign language involves more than just the hands. It is performed using both hands, arms, and shoulders, and incorporates motions and facial expressions. Also, the majority of sign language is not finger spelling (which is reserved primarily for proper names or specific nouns), but a robot hand that is able to finger spell is an important first step toward everything else.

Future directions for the project include adding a second arm, adding expressiveness, and exploring the use of cameras for the teaching of new signs. The ability to teach different signs is important, because any project that aims to act as a translator or facilitator needs the ability to learn and update. There is a lot of diversity in sign languages across the world. For people unfamiliar with signing, it may come as a surprise that — for example — not only is American Sign Language (ASL) related to French sign language, but both are entirely different from British Sign Language (BSL). A video of the project is embedded below.

Continue reading “3D Printed Robotic Arms For Sign Language”

Speech To Sign Language

According to the World Federation of the Deaf, there are around 70 million people worldwide whose first language is some kind of sign language. In the US, ASL (American Sign Language) speakers number anywhere from five hundred thousand to two million. If you go to Google Translate, though, there’s no option for sign language.

[Alex Foley] and friends decided to do something about that. They were attending McHack (a hackathon at McGill University) and decided to convert speech into sign language. They thought they were prepared, but it turns out they had to work a few things out on the fly. (Isn’t that always the case?) But in the end, they prevailed, as you can see in the video below.
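The write-up doesn’t include code, but the basic speech-to-sign flow can be sketched as: transcribe the microphone audio, then look up a sign clip for each word, falling back to fingerspelling images for anything unknown. The file names and dictionary below are hypothetical, not the team’s hackathon code.

```python
# Minimal sketch of a speech-to-sign flow: speech recognition, then a lookup
# of sign media per word, with fingerspelling as the fallback. Paths are made up.
import speech_recognition as sr

SIGN_CLIPS = {"hello": "signs/hello.gif", "thanks": "signs/thanks.gif"}

def words_to_sign_media(words: list[str]) -> list[str]:
    """Return a playlist of media files that signs out the sentence."""
    playlist = []
    for word in words:
        if word in SIGN_CLIPS:
            playlist.append(SIGN_CLIPS[word])
        else:
            playlist.extend(f"letters/{ch}.png" for ch in word)  # fingerspell it
    return playlist

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    audio = recognizer.listen(mic, phrase_time_limit=5)

text = recognizer.recognize_google(audio).lower()
print("Heard:", text)
print("Playlist:", words_to_sign_media(text.split()))
```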

Continue reading “Speech To Sign Language”

Talk To The Glove

Two University of Washington students exercised their creativity in a maker space and created a pair of gloves that won them a $10,000 prize. Obviously, they weren’t just ordinary gloves. These gloves can sense American Sign Language (ASL) and convert it to speech.

The gloves sense hand motion and send the data via Bluetooth to an external computer. Unlike other sign language translation systems, the gloves are convenient and portable. You can see a video of the gloves in action below.
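The receiving end of a glove like this can be sketched as a short loop: read sensor values from a Bluetooth serial port, match them to the closest stored letter, and speak the result. The port name, data format, and calibration values below are assumptions, not the students’ actual protocol.

```python
# Minimal sketch of the computer-side receiver: flex-sensor readings arrive
# over Bluetooth serial, get matched to the nearest stored letter, and the
# result is spoken aloud. Port, packet format, and templates are hypothetical.
import serial            # pyserial
import pyttsx3

# Hypothetical calibration: five flex-sensor readings (0-1023) per letter.
TEMPLATES = {
    "A": [180, 900, 910, 905, 890],
    "B": [850, 120, 110, 130, 140],
    "Y": [150, 880, 900, 870, 160],
}

def closest_letter(reading):
    """Nearest-neighbor match between a live reading and the stored templates."""
    def dist(letter):
        return sum((a - b) ** 2 for a, b in zip(TEMPLATES[letter], reading))
    return min(TEMPLATES, key=dist)

tts = pyttsx3.init()
link = serial.Serial("/dev/rfcomm0", 9600, timeout=2)   # Bluetooth SPP port

while True:
    line = link.readline().decode(errors="ignore").strip()  # e.g. "182,895,905,900,885"
    try:
        reading = [int(v) for v in line.split(",")]
    except ValueError:
        continue                     # ignore partial or garbled packets
    if len(reading) != 5:
        continue
    letter = closest_letter(reading)
    print("Signed:", letter)
    tts.say(letter)
    tts.runAndWait()
```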

Continue reading “Talk To The Glove”