A team of Cornell students recently built a prototype electronic glove that can detect sign language and speak the characters out loud. The glove is designed to work with a variety of hand sizes, but currently only fits on the right hand.
The glove uses several different sensors to detect hand motion and position. Perhaps the most obvious are the flex sensors that cover each finger. These sensors detect how far each finger is bent: their resistance changes according to the degree of the bend. The glove also contains an MPU-6050, which combines a 3-axis accelerometer and a 3-axis gyroscope. This sensor detects the hand’s orientation as well as rotational movement.
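The flex-sensor half of that setup boils down to a voltage divider: as the finger bends, the sensor’s resistance rises and the divider’s output voltage falls. Here’s a minimal sketch of the conversion; the supply voltage, fixed resistor, and calibration resistances are all assumptions for illustration, not values from the Cornell build.

```python
# Sketch: estimating finger bend from a flex sensor in a voltage divider.
# Component values and calibration points below are assumed, not measured
# from the actual glove.

V_SUPPLY = 5.0        # volts across the divider (assumed)
R_FIXED = 10_000.0    # ohms, fixed leg of the divider (assumed)

def flex_resistance(v_out: float) -> float:
    """Solve the divider for the flex sensor's resistance.
    Divider: V_out = V_SUPPLY * R_FIXED / (R_flex + R_FIXED)."""
    return R_FIXED * (V_SUPPLY - v_out) / v_out

def bend_fraction(v_out: float, r_flat=25_000.0, r_bent=100_000.0) -> float:
    """Map resistance to a 0.0 (straight) .. 1.0 (fully bent) estimate,
    using assumed flat/bent calibration resistances."""
    r = flex_resistance(v_out)
    frac = (r - r_flat) / (r_bent - r_flat)
    return max(0.0, min(1.0, frac))
```

In the real glove an ADC pin would supply `v_out`; the calibration endpoints would come from having the user hold each finger straight and then fully curled.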
While the more high-tech sensors are used to detect most characters, a few letters are similar enough to trick the system. Specifically, the system had trouble with the letters R, U, and V. To get around this, the students strategically placed copper tape in several locations on the fingers. When two pieces of tape touch, they close a circuit and act as a momentary switch.
The sensor data is collected by an ATmega1284P microcontroller and compiled into a packet. The packet is sent to a PC, which does the heavy processing. The system uses a machine learning algorithm: the user trains it by gesturing each letter of the alphabet multiple times. The system collects all of this data into a data set that is then used for detection.
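The write-up doesn’t say which algorithm the PC runs, but the train-by-repetition workflow maps naturally onto a nearest-neighbor classifier, so here’s a toy sketch along those lines. The feature layout (flex values plus accel/gyro readings in one vector) and the classifier choice are our assumptions, not necessarily the team’s.

```python
import math

# Toy sketch: classify a sensor packet against a trained data set using
# 1-nearest-neighbor. The feature vector layout and the algorithm choice
# are assumptions for illustration.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(samples):
    """samples: (feature_vector, letter) pairs gathered while the user
    repeats each gesture. 1-NN 'training' is just storing them."""
    return list(samples)

def classify(model, packet):
    """Return the letter whose stored sample is closest to the packet."""
    return min(model, key=lambda s: distance(s[0], packet))[1]
```

With a few repetitions per letter stored in `model`, each incoming packet gets labeled with whichever training gesture it most resembles.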
This is a great project to take on. If you need more inspiration there’s a lot to be found, including another Cornell project that speaks the letters you sign, as well as this one which straps all needed parts to your forearm.
Continue reading “Electronic Glove Detects Sign Language”
Some people know [Tom Murphy] as [Dr. Tom Murphy VII Ph.D.] and this hack makes it obvious that he earned those accolades. He decided to see if he could teach a computer to win at Super Mario Bros., and he went about it in a way that we’d bet is different from what 99.9% of readers would first think of. The program doesn’t care about Mario, power-ups, or really even about enemies. It simply looks at the metrics which indicate you’re doing well at the game, namely score and world/level.
The link above includes his whitepaper, but we think you’ll want to watch the 16-minute video (after the break) before trying to tackle it. In the clip he explains the process in layman’s terms, which so far is the only part we really understand (hence the reference to voodoo in the title). His program uses heuristics to assemble a set of evolving controller inputs that drive the score ever higher. In other words, instead of following in the footsteps of Minesweeper solvers or Bejeweled Blitz bots, which play as a human would by observing the game space, his software plays the game over and over, learning which combinations of controller inputs result in success and which do not. The image to the right is a graph of its learning progress. Makes total sense, huh?
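The “evolving controller inputs” idea can be sketched in a few lines: mutate a sequence of button presses and keep any mutation that scores at least as well. To be clear, this is a toy, not [Tom]’s actual algorithm (his system scores real emulator memory and is far more sophisticated); the stand-in scoring function below just rewards holding right.

```python
import random

# Toy sketch of evolving controller inputs: mutate a button sequence,
# keep whatever scores at least as well. The score() function is a
# stand-in objective, not anything read from a real emulator.

BUTTONS = ["left", "right", "jump", "none"]

def score(inputs):
    # Stand-in objective: this toy just rewards moving right.
    return sum(1 for b in inputs if b == "right")

def mutate(inputs, rng):
    """Change one randomly chosen frame's input to a random button."""
    out = list(inputs)
    out[rng.randrange(len(out))] = rng.choice(BUTTONS)
    return out

def evolve(length=30, generations=500, seed=0):
    rng = random.Random(seed)
    best = [rng.choice(BUTTONS) for _ in range(length)]
    for _ in range(generations):
        cand = mutate(best, rng)
        if score(cand) >= score(best):
            best = cand
    return best
```

Swap the toy `score()` for “read score and world/level out of game memory” and you have the flavor of the real thing: the optimizer never knows what Mario is, only what makes the number go up.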
Continue reading “Teaching a computer to play Mario… seemingly through voodoo”
Ever since his daughter was born, [Markus] has been keeping logs full of observations of human behavior. Despite how it sounds, this sort of occurrence isn’t terribly odd; the field of developmental psychology is filled with research of this sort. It’s what [Markus] is doing with this data that makes his project unique. He’s attempting to use stochastic learning to model the behavior of his daughter and put her mind in a robot. Basically, [Markus] is building a robotic version of his newborn daughter.
The basic idea of stochastic learning (PDF with more info) is that a control system is modeled on an existing system – in this case, a baby – by telling the robot whether it is doing a good or bad job. Think of it as classical conditioning for automatons that can only respond to a 1 or 0.
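A classic shape for this kind of learner is the linear reward-inaction automaton: the controller keeps a probability for each action, and a “good job” signal (reward = 1) shifts probability toward whatever it just did, while a 0 changes nothing. Here’s a minimal sketch; the two-action setup, learning rate, and class layout are our assumptions, not details from [Markus]’ build.

```python
import random

# Sketch of a linear reward-inaction learning automaton. On reward = 1,
# probability shifts toward the chosen action; on reward = 0, nothing
# changes ("inaction"). Learning rate and structure are assumed.

class Automaton:
    def __init__(self, actions, rate=0.1, seed=0):
        self.actions = list(actions)
        self.p = [1.0 / len(actions)] * len(actions)
        self.rate = rate
        self.rng = random.Random(seed)

    def act(self):
        """Pick an action index according to the current probabilities."""
        return self.rng.choices(range(len(self.actions)), weights=self.p)[0]

    def feedback(self, chosen, reward):
        """Reinforce the chosen action on success; ignore failure."""
        if reward:
            for i in range(len(self.p)):
                if i == chosen:
                    self.p[i] += self.rate * (1.0 - self.p[i])
                else:
                    self.p[i] *= (1.0 - self.rate)
```

Reward “forward” every time the robot clears an obstacle and its forward probability climbs toward 1 — classical conditioning with a 1 or a 0, exactly as described above.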
[Markus] built a robotic platform based on an Arduino Mega and a few ultrasonic distance sensors. By looking at its surrounding environment, the robot makes judgments as to what it should do next. In the video after the break, [Markus] shows off his robot finding its way around an obstacle course – really just a pair of couch cushions.
It’s a long way from crawling around on all fours, paying attention to shiny things, and making a complete mess of everything, but we’re loving [Markus]’ analytical approach to creating a rudimentary artificial intelligence.
Continue reading “Have a baby? Build another one!”
This talk from the 2012 LayerOne conference outlines how the team built Stiltwalker, a package that could beat audio reCAPTCHA. We’re all familiar with the obscured images of words that need to be typed in order to confirm that you’re human (in fact, there’s a cat-and-mouse game to crack that visual version). But you may not have noticed the option to have words read to you. That secondary option is where the toils of Stiltwalker were aimed, and at the time the team achieved 99% accuracy. We’d like to remind readers that the audio option is important, as visual-only confirmations are a bane to visually impaired users.
This is all past-tense. In fact, about an hour before the talk (embedded after the break) Google upgraded the system, making it much more complex and breaking what these guys had accomplished. But it’s still really fun to hear about their exploit. There were only 58 words used in the system. The team found that there’s a way to exploit the entry of those words, misspelling them just enough that each would validate as any of up to three different words. Machine learning was used to improve accuracy when parsing the audio, but it still required tens of thousands of human verifications before it was reliably running on its own.
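The “one misspelling, three valid words” trick makes sense if the validator tolerates near-miss answers. Google never published its matching rules, so the edit-distance tolerance below is purely an assumption, but it shows how a single mangled spelling could sit within range of several vocabulary words at once.

```python
# Sketch of the "one answer, several words" trick. Assumes the validator
# accepts answers within a small edit distance of the true word -- the
# real matching rules were never published.

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def covers(answer: str, candidates, tolerance=1):
    """Which candidate words would this single answer validate against?"""
    return [w for w in candidates if edit_distance(answer, w) <= tolerance]
```

With a 58-word vocabulary, the solver doesn’t need to pick the right word — just an answer string that covers its top few guesses simultaneously.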
Continue reading “Stiltwalker beat audio reCAPTCHA”
Need some gears? Got a timing belt?
[filespace] sent in a neat build he stumbled upon: making gears with plywood and a timing belt. Just cut out a plywood disk and glue on a section of timing belt. There’s some math involved in getting all the teeth evenly placed around the perimeter, but nothing too bad. Also useful for wheels, we think.
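The math in question is just matching circumference to belt pitch: a belt section with N teeth of pitch p wraps a circle of circumference N·p, so the disk diameter is N·p/π. A quick sketch (in practice you’d also account for the belt’s thickness when cutting the plywood, and the 5 mm pitch in the test below is just an example value):

```python
import math

# Gear-from-timing-belt geometry: N teeth of pitch p must wrap evenly
# around the disk, so circumference = N * p and diameter = N * p / pi,
# measured at the belt's pitch line. Belt thickness is ignored here.

def disk_diameter(teeth: int, pitch_mm: float) -> float:
    """Diameter (mm) of the circle the belt's pitch line must follow."""
    return teeth * pitch_mm / math.pi

def tooth_count(diameter_mm: float, pitch_mm: float) -> int:
    """Nearest whole number of teeth for a given pitch-line diameter."""
    return round(math.pi * diameter_mm / pitch_mm)
```

Pick a whole tooth count first and derive the diameter from it; cutting a disk to an arbitrary diameter almost always leaves a fractional tooth at the seam.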
Huge chess sets are cool, right up until you try to figure out where to store the pieces when they’re not being used. [Jayefuu] came up with a neat solution to this problem. His pieces are cut out of coroplast (that corrugated plastic stuff political campaign signs are made of), making it relatively inexpensive and just as fun as normal giant chess pieces on a tile floor.
<INSERT MARGINALLY RELEVANT PORTAL QUOTE HERE>
[Randy]’s son is in the cub scouts. Being the awesome father he is, [Randy] helped out with this year’s pinewood derby build. It’s a car shaped like a portal gun with the obligatory color-changing LED. The car won the ‘Can’t get more awesome’ award, but wheel misalignment kicked it out of the competition during qualifying rounds. Sad, that. Still awesome, though.
These people are giving you tools for free
Caltech professor [Yaser Abu-Mostafa] is teaching a Machine Learning class this semester. You can take this class as well, even though the second lecture was last Thursday.
Turning an Arduino into a speech synthesizer
[AlanFromJapan] sent in this product page for an Arduino-powered speech synthesizer. We’re probably looking at a relabeled ATmega328 with custom firmware here; to use it, you replace the micro in your Arduino Uno with this chip. The chip goes for about $10 USD here, so we’ll give it a week until someone has this proprietary firmware up on the Internet. There are English morphemes that aren’t in Japanese, so you can’t just ‘type in English’ and have it work. Here’s a video.
Six things in this links post. We’re feeling generous.
What would you build if you had a laser cutter? [Doug Miller] made a real, working fishing reel. No build log or files, but here’s a nice picture.
After taking the Stanford Machine Learning class offered over the Internet last year, [David Singleton] thought he could build something really cool. We have to admit that he nailed it with his neural network controlled car. There’s not much to the build; it’s just an Android phone, an Arduino and a toy car. The machine learning part of this build really makes it special.
A neural network takes a whole bunch of inputs and represents each as a node in a network. Each node in [David]’s input layer corresponds to a pixel retrieved from his phone’s camera. All the nodes of the input layer are connected to 64 nodes in the ‘hidden layer’. The nodes in the hidden layer are connected to the four output nodes, namely left, right, forward and reverse.
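That pixels → 64 hidden nodes → 4 outputs shape can be sketched as a plain forward pass. The sigmoid activation, random weights, and tiny 16-pixel input here are our assumptions for illustration — [David]’s trained weights and exact architecture details live in his github repo.

```python
import math
import random

# Sketch of the network shape described above: one input node per camera
# pixel, 64 hidden nodes, four outputs (left, right, forward, reverse).
# Activation choice and weights are assumed, not [David]'s.

OUTPUTS = ["left", "right", "forward", "reverse"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_layer(n_in, n_out, rng):
    """One weight row per output node, one weight per input node."""
    return [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, inputs):
    """Weighted sum into each node, squashed through the sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in layer]

def drive(pixels, hidden_w, output_w):
    """Pick the steering command whose output node fires strongest."""
    hidden = forward(hidden_w, pixels)
    out = forward(output_w, hidden)
    return OUTPUTS[out.index(max(out))]
```

Training amounts to adjusting those weights until the strongest output node matches the steering a human driver would choose for each camera frame.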
After training the network and weighting all the connections, [David] got a toy car to drive around a track. Weird, but it works. All the code is up on github, so feel free to take a look at the inner machinations of a neural net. Of course, you can check out the video of [David]’s car in action after the break.
EDIT: We originally credited [icebrain] as the author. Our bad, and we hope [David] doesn’t hate us now.
Continue reading “Neural networks control a toy car”