Learn Sign Language Using Machine Vision

Learning a new language is a great way to exercise the mind and learn about other cultures, and having a native speaker around improves the experience. Without one, it’s still possible to learn from videos, books, and software. The task gets much more complicated, though, when the language isn’t spoken at all, like American Sign Language. This project helps users learn the ASL alphabet with the help of computer vision and some machine learning.

The build uses a computer vision model based on MobileNetV2, trained to recognize each sign in the ASL alphabet. A sign is shown to the user on a screen, and the user has to demonstrate that sign to the computer in order to progress. To do this, OpenCV running on a Raspberry Pi with a PiCamera analyzes frames of the user in real time, and the user is rewarded when the correct sign is made.
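As a rough idea of what’s involved, here is a minimal sketch of that kind of recognition loop: grab frames, run them through a MobileNetV2 classifier fine-tuned on ASL letters, and compare the prediction against the letter currently being taught. The model filename, the 90% confidence threshold, and the one-class-per-letter label list are assumptions for illustration, not details from the project.

```python
# Sketch of the recognition loop, not the team's actual code.
# Assumes a Keras MobileNetV2 model fine-tuned on ASL letters saved as
# "asl_mobilenetv2.h5" (hypothetical filename) and a Pi camera exposed
# to OpenCV as /dev/video0.
import cv2
import numpy as np
import tensorflow as tf

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed: one class per letter
model = tf.keras.models.load_model("asl_mobilenetv2.h5")  # assumed model file

def classify(frame):
    """Resize a BGR frame to the network's input and return (letter, confidence)."""
    img = cv2.resize(frame, (224, 224))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)
    img = tf.keras.applications.mobilenet_v2.preprocess_input(img)
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    idx = int(np.argmax(probs))
    return LABELS[idx], float(probs[idx])

target = "A"                      # the letter currently shown to the learner
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    letter, conf = classify(frame)
    if letter == target and conf > 0.9:   # threshold chosen for the example
        print(f"Correct! You signed {letter}")
        break
cap.release()
```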

While this currently only works for the ASL alphabet, the team at the University of Glasgow that built the project plans to expand it to include other signs as well. We have seen other machines built to teach ASL in the past, like this one, which relies on a specialized glove rather than computer vision.

Continue reading “Learn Sign Language Using Machine Vision”

Inception object recognizer in a box

DIY Raspberry Neural Network Sees All, Recognizes Some

As a fun project I thought I’d put Google’s Inception-v3 neural network on a Raspberry Pi to see how well it does at recognizing objects firsthand. It turned out to be not only fun to implement, but the way I’d implemented it also ended up being loads of fun for everyone I showed it to, mostly folks at hackerspaces and similar gatherings. And yes, some of it bordering on the pornographic; cheeky hackers.

An added bonus, as many pointed out, is that once installed, no internet access is required. This is state-of-the-art, standalone object recognition with no Big Brother knowing what you’ve been up to, unlike with that nosy Alexa.
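For anyone who wants to try the same sort of standalone recognition, here is a minimal sketch using the tf.keras build of Inception-v3. This is our assumption of a reasonable setup, not the author’s actual script, and the image filename is just a placeholder; once the ImageNet weights are cached, classification happens entirely on the device.

```python
# Minimal local-classification sketch using tf.keras (an assumption, not
# necessarily how the original project ran Inception-v3 on the Pi).
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)

model = InceptionV3(weights="imagenet")  # downloaded once, then cached locally

def recognize(path, top=3):
    """Return the top ImageNet guesses for an image file."""
    img = tf.keras.utils.load_img(path, target_size=(299, 299))
    x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), 0))
    preds = model.predict(x, verbose=0)
    return decode_predictions(preds, top=top)[0]

# "snapshot.jpg" is a placeholder for whatever the camera just captured.
for _, label, score in recognize("snapshot.jpg"):
    print(f"{label}: {score:.2%}")
```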

But will it lead to widespread useful AI? If a neural network can recognize every object around it, will that lead to human-like skills? Read on. Continue reading “DIY Raspberry Neural Network Sees All, Recognizes Some”

NixieBot Films Your Tweets

[Robin Bussell]’s NixieBot is a mash-up of new-age electronics and vintage components, and he’s crammed a bunch of hacks in there. It’s a Nixie tube clock that displays tweets, takes pictures of the display when it encounters tweets with the #NixieBotShowMe hashtag, and posts the requested pictures back to Twitter. If the requested message fits within eight characters, it takes a single snapshot; if it’s longer, NixieBot takes a series of pictures, one per word, converts them into an animated GIF, and posts that back. In between requests, it displays random tweets every twenty seconds. You can see the camera setup in the image below, and you should check out the @nixiebot Twitter feed to see some of the action.
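The snapshot-or-GIF decision is easy to picture in code. Here is a hedged sketch of that logic; the hardware and Twitter helpers are hypothetical stubs standing in for [Robin]’s actual routines, not his code.

```python
# Hedged sketch of NixieBot's request handling as described above.  The
# hardware/Twitter helpers are hypothetical stubs, not [Robin]'s actual code.
TUBE_COUNT = 8            # eight B7971 tubes in the display

def show_word(text):  print(f"[tubes] {text:>{TUBE_COUNT}}")    # stub: drive the nixies
def snap_photo():     return b"jpeg-bytes"                      # stub: trigger the camera
def make_gif(frames): return b"gif-bytes"                       # stub: stitch frames together
def post_reply(tweet_id, media): print(f"reply to {tweet_id}")  # stub: Twitter API call

def handle_request(tweet_text, tweet_id):
    """Show a #NixieBotShowMe request and tweet back a photo or GIF."""
    words = tweet_text.replace("#NixieBotShowMe", "").split()
    message = " ".join(words)
    if len(message) <= TUBE_COUNT:
        show_word(message)                      # fits on the display in one go
        post_reply(tweet_id, snap_photo())      # single snapshot back to Twitter
    else:
        frames = []
        for word in words:                      # longer message: one photo per word
            show_word(word[:TUBE_COUNT])
            frames.append(snap_photo())
        post_reply(tweet_id, make_gif(frames))  # stitched into an animated GIF

handle_request("#NixieBotShowMe HELLO", tweet_id=12345)
```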

For the display, he’s using eight big vintage Burroughs B7971 Nixie tubes. These aren’t easy to source, and current prices hover around $100 each if you can find them. The 170V DC needed to run each tube comes from a set of six 12V to 170V converter boards specifically designed to drive these tubes. Each board can drive at least a couple of nixies, so [Robin] is able to use just four boards for the eight tubes. Each nixie is driven by its own “B7971 SmartSocket”, a dedicated PIC16F690 microcontroller board custom designed for the purpose. A serial protocol makes it easy to daisy-chain the SmartSockets to build multi-character displays.
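To give a flavour of how a daisy-chained display like this gets driven, here is an illustrative snippet that pushes one character per socket down a single serial line. The framing shown is made up for the example; the real SmartSocket protocol is defined by its PIC firmware and isn’t reproduced here.

```python
# Illustrative only: the framing ('$' header, newline terminator) is a
# made-up placeholder, not the actual SmartSocket serial protocol.
import serial  # pyserial

TUBES = 8  # one character per daisy-chained socket

def show(text, port="/dev/ttyUSB0", baud=9600):
    """Pad or trim a message to the tube count and send it down the chain."""
    message = text.upper()[:TUBES].ljust(TUBES)
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(b"$" + message.encode("ascii") + b"\n")  # hypothetical framing

show("HACKADAY")
```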

Continue reading “NixieBot Films Your Tweets”

A Raspberry Pi Helmet Cam With GPS Logging

Over the last 20 years, [Martin] has been recording snowboarding runs with a standard helmet cam. It was good, but he felt he could improve upon the design by building his own version and logging additional data like speed, temperature, altitude, and GPS position. In the video shown after the break, a first-person perspective is displayed with a GPS overlay documenting the paths taken through the snow. [Martin] accomplished this by using a Python module called picamera to start the video capture while writing location data to a file. He then modified the program to read the current frame number and sync GPS points to exact positions in the video. MEncoder is used to join the images together into one media file.
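The frame-number trick is the clever bit, and it is easy to sketch: record with picamera while logging the current frame index alongside each GPS fix so the two can be lined up later. The snippet below is a rough approximation of the idea, not [Martin]’s script, and read_gps() is a stand-in for whatever actually talks to the receiver.

```python
# Rough sketch: record video with picamera while logging (frame, lat, lon)
# once a second, so GPS points can be matched to exact video positions later.
import csv
import picamera

def read_gps():
    """Hypothetical placeholder returning (latitude, longitude) from the GPS."""
    return 46.0, 7.7  # dummy fix

with picamera.PiCamera(resolution=(1280, 720), framerate=30) as camera, \
        open("track.csv", "w", newline="") as log:
    writer = csv.writer(log)
    writer.writerow(["frame", "lat", "lon"])
    camera.start_recording("run.h264")
    try:
        for _ in range(60):              # one minute of recording for the example
            camera.wait_recording(1)     # also raises if the recording fails
            lat, lon = read_gps()
            writer.writerow([camera.frame.index, lat, lon])
    finally:
        camera.stop_recording()
```

The logged track is what later gets rendered into the overlay frames that MEncoder joins back into a single file.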

The original design was based on the Raspberry Pi GPS Car Dash Cam [Martin] developed a few months earlier. The code in this helmet cam reuses many of the same functions for gathering GPS data points, recording video, and generating the overlay. What made this project different, though, were the environmental challenges. A camera inside a car rarely has to deal with extreme drops in temperature or the wet conditions of a snowy mountain; the outside of the vehicle may get battered by the snow, but the camera remains relatively safe from exposure. In order to test the Raspberry Pi before venturing into the cold, [Martin] stuck the computer in the freezer to see what would happen. Luckily it worked perfectly.

Click past the break for the rest of the story.

Continue reading “A Raspberry Pi Helmet Cam With GPS Logging”