Learn Sign Language Using Machine Vision

Learning a new language is a great way to exercise the mind and learn about different cultures, and having a native speaker around greatly improves the experience. Without one, it's still possible to learn from videos, books, and software. The task gets much more complicated, though, when the language isn't spoken at all, like American Sign Language. This project lets users learn the ASL alphabet with the help of computer vision and some machine learning algorithms.

The build uses a MobileNetV2-based computer vision model trained on each sign in the ASL alphabet. A sign is shown to the user on a screen, and the user has to demonstrate that sign to the computer in order to progress. OpenCV, running on a Raspberry Pi with a PiCamera, analyzes frames of the user in real time. The user is shown pictures of the correct sign and is rewarded when they make it correctly.
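The recognition loop in a build like this boils down to grabbing a frame, preprocessing it into the range MobileNetV2 expects, and checking whether the model's top prediction matches the target letter. Here's a minimal sketch of those two steps with the camera and trained model stubbed out; the confidence threshold and `check_sign` helper are illustrative assumptions, not the Glasgow team's actual code:

```python
import numpy as np

# The 26 letters of the ASL alphabet, in the order the model's outputs are assumed to use.
LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Scale an HxWx3 uint8 camera frame into the [-1, 1] range
    that MobileNetV2 expects as input."""
    return frame.astype(np.float32) / 127.5 - 1.0

def check_sign(probs: np.ndarray, target: str, threshold: float = 0.8) -> bool:
    """Return True if the model's top prediction is the target letter
    and is confident enough to count as a correct attempt."""
    idx = int(np.argmax(probs))
    return LETTERS[idx] == target and probs[idx] >= threshold
```

In the real build, `probs` would come from running the trained model on each PiCamera frame captured via OpenCV; the threshold is a placeholder meant to avoid rewarding accidental matches.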

While this currently only works for alphabet signs in ASL, the team at the University of Glasgow that built this project is planning to expand it to include other signs as well. We have seen other machines built to teach ASL in the past, like this one, which relies on a specialized glove rather than computer vision.

Continue reading “Learn Sign Language Using Machine Vision”

Speech To Sign Language

According to the World Federation of the Deaf, around 70 million people worldwide have some kind of sign language as their first language. In the US, ASL (American Sign Language) users number anywhere from five hundred thousand to two million. Go to Google Translate, though, and there's no option for sign language.

[Alex Foley] and friends decided to do something about that. They were attending McHack (a hackathon at McGill University) and decided to convert speech into sign language. They thought they were prepared, but it turns out they had to work a few things out on the fly. (Isn’t that always the case?) But in the end, they prevailed, as you can see in the video below.

Continue reading “Speech To Sign Language”

Talk To The Glove

Two University of Washington students exercised their creativity in a maker space and created a pair of gloves that won them a $10,000 prize. Obviously, they weren’t just ordinary gloves. These gloves can sense American Sign Language (ASL) and convert it to speech.

The gloves sense hand motion and send the data via Bluetooth to an external computer. Unlike other sign language translation systems, the gloves are convenient and portable. You can see a video of the gloves in action below.

Continue reading “Talk To The Glove”

Sign And Speak Glove

This wire-covered glove is capable of turning your hand gestures into speech, and it does so wirelessly. Its wide array of sensors includes nine flex sensors, four contact sensors, and an accelerometer. The flex sensors do most of the work, monitoring the bend of the wearer's finger joints. The contact sensors augment the flex sensor data, helping to differentiate between letters that have similar finger positions. The accelerometer is responsible for decoding movements that go along with the hand positions. Together, they detect all of the letters in the American Sign Language alphabet.
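As described, the flex readings give a rough finger shape and the contact sensors break ties between look-alike letters such as U and V. One way to sketch that two-stage lookup; the templates and sensor encodings below are invented for illustration, not the build's actual calibration data:

```python
# Hypothetical templates: nine flex readings (0 = straight, 1 = fully bent)
# and four contact-sensor bits per letter. Real calibration data would differ.
FLEX_TEMPLATES = {
    "U": [0.0, 0.0, 1.0, 1.0, 1.0, 0.9, 1.0, 0.9, 1.0],
    "V": [0.0, 0.0, 1.0, 1.0, 1.0, 0.9, 1.0, 0.9, 1.0],  # same finger shape as U
    "B": [0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.0, 0.1, 1.0],
}
CONTACT_TEMPLATES = {
    "U": (1, 0, 0, 0),  # index and middle fingers touching
    "V": (0, 0, 0, 0),  # index and middle fingers spread apart
    "B": (0, 0, 0, 0),
}

def classify(flex, contacts):
    """Pick the nearest flex template, letting the contact bits break
    ties between letters that share a finger position."""
    def dist(letter):
        return sum((a - b) ** 2 for a, b in zip(flex, FLEX_TEMPLATES[letter]))
    best = min(FLEX_TEMPLATES, key=dist)
    # Letters whose flex distance ties the best candidate's distance:
    tied = [l for l in FLEX_TEMPLATES if abs(dist(l) - dist(best)) < 1e-6]
    if len(tied) > 1:
        # Fewest contact-sensor mismatches wins the tiebreak.
        tied.sort(key=lambda l: sum(a != b for a, b in zip(contacts, CONTACT_TEMPLATES[l])))
        best = tied[0]
    return best
```

The real glove adds accelerometer data on top of this to catch letters like J and Z, whose signs involve motion rather than a static hand shape.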

An ATmega644 monitors all of the sensors and pushes the data out through a wireless transmitter. MATLAB collects the data coming in over the wireless link and saves it for later analysis by a Java program. Once the motions have been decoded into letters, they are assembled into sentences and fed into a text-to-speech program.
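On the receiving side, the pipeline amounts to parsing each incoming sensor packet and then buffering decoded letters into text for the speech engine. Here's a rough Python stand-in for that MATLAB/Java chain; the sixteen-value packet layout and the pause convention are assumptions, not the project's actual wire format:

```python
def parse_packet(line: str) -> dict:
    """Decode one comma-separated sensor packet. The layout assumed here is
    nine flex, four contact, then three accelerometer readings."""
    vals = [int(v) for v in line.strip().split(",")]
    if len(vals) != 16:
        raise ValueError(f"expected 16 values, got {len(vals)}")
    return {"flex": vals[:9], "contact": vals[9:13], "accel": vals[13:]}

def assemble_sentence(letters) -> str:
    """Group decoded letters into words (None marks a pause between words),
    producing the string handed off to a text-to-speech program."""
    words, current = [], []
    for letter in letters:
        if letter is None:
            if current:
                words.append("".join(current))
            current = []
        else:
            current.append(letter)
    if current:
        words.append("".join(current))
    return " ".join(words)
```

For example, `assemble_sentence(["H", "I", None, "M", "O", "M"])` yields `"HI MOM"`, which a text-to-speech library can then voice.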

You’ve probably already guessed that there’s a demo video after the break.

Continue reading “Sign And Speak Glove”