Students Set Sights on DIY Eye Exams

What if you could give yourself a standard eye exam at home? That’s the idea behind [Joel, Margot, and Yuchen]’s final project for [Bruce Land]’s ECE 4760—simulating the standard Snellen eye chart that tests visual acuity from an actual or simulated distance of 20 feet.

This test is a bit different, though. Letters are presented one by one on a TFT display, and the user must identify each letter by speaking into a microphone. As long as the user guesses correctly, the system shows smaller and smaller letters until the size equivalent to the 20/20 line of the Snellen chart is reached.

Since the project relies on speech recognition, the group had to account for background noise and the natural variation between human voices. They use a bandpass filter to screen out frequencies that fall outside the human vocal range. To determine which letter was spoken, the PIC32 collects the first 256 and last 256 samples, stores them in two arrays, and performs an FFT on the first set. The second set of samples undergoes a Mel transformation, which lets the PIC assess the sample on a logarithmic frequency scale, closer to how human hearing works. Finally, the system determines whether it should show a new letter at the same size, a new letter at a smaller size, or end the exam.
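
For a sense of what that Mel step does: the mapping compresses higher frequencies logarithmically, roughly the way human pitch perception does. The sketch below applies the standard Hz-to-Mel formula to the bins of a 256-point FFT; the 8 kHz sample rate is an assumed figure for illustration, and none of this is the students’ actual PIC32 code.

```c
/* Illustrative only: standard Hz-to-Mel mapping over the bins of a
 * 256-point FFT, assuming an 8 kHz sample rate. Not the project's code. */
#include <math.h>
#include <stdio.h>

#define N_SAMPLES   256
#define SAMPLE_RATE 8000.0

/* Mel scale: compresses high frequencies logarithmically. */
static double hz_to_mel(double hz)
{
    return 2595.0 * log10(1.0 + hz / 700.0);
}

int main(void)
{
    /* Each FFT bin k covers SAMPLE_RATE / N_SAMPLES Hz; only bins up to
     * the Nyquist frequency (N_SAMPLES / 2) carry unique information. */
    for (int k = 0; k < N_SAMPLES / 2; k++) {
        double hz  = k * SAMPLE_RATE / N_SAMPLES;
        double mel = hz_to_mel(hz);
        printf("bin %3d: %7.1f Hz -> %7.1f mel\n", k, hz, mel);
    }
    return 0;
}
```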

While this is not meant to replace eye exams done by certified professionals, it is an interesting project that is true to the principles of the Snellen eye chart. The only thing that might make this better is an e-ink display to make the letters crisp. We’d like to see Snellen’s tumbling E chart implemented as well for children who don’t yet know the alphabet, although that would probably require a vastly different input method. Be sure to check out the demonstration video after the break.

Don’t know who [Bruce Land] is? Of course he’s an esteemed Senior Lecturer at Cornell University. But he’s also extremely active on Hackaday.io, has many great embedded engineering lectures you can watch free-of-charge, and every year we look forward to seeing the projects — like this one — dreamed and realized by his students. Do you have final projects of your own to show off? Don’t be shy about sending in a tip!

Continue reading “Students Set Sights on DIY Eye Exams”

Voice Command with No Echo

[Naran] was intrigued with the Amazon Echo’s ability to control home electronics, but decided to roll his own. By using a Raspberry Pi with the beta Prota OS, he managed to control some Philips Hue bulbs and a homebrew smart outlet.

Prota has a speech application, which made the job simpler. He does point out, though, that his project doesn’t replace the Echo’s ability to answer questions by searching the Internet. The advantage is that it’s easily tailored to your specific application. Also, if you have a Raspberry Pi hanging around, you can’t beat the price. Continue reading “Voice Command with No Echo”

Phone Gyroscope Signals Can Eavesdrop on Your Conversations

A gyroscope measures orientation and can typically be found in modern smartphones and tablets, where it enables a richer user experience. A team from Stanford managed to recognize simple spoken words by analyzing nothing but gyroscope signals (PDF warning). The complex inner workings of MEMS-based gyroscopes (which rely on the Coriolis effect) and Android software limitations restricted the team to sniffing frequencies under 200 Hz, which may explain the average 12% word recognition rate achieved with their custom recognition algorithms. It may still be enough to make you reconsider installing apps that don’t necessarily need access to the on-board sensors to work. Interestingly, the paper also states that STMicroelectronics currently holds an 80% market share for smartphone and tablet gyroscopes.

On the same topic, you may be interested to check out a gyroscope-based smartphone keylogging attack we featured a couple of years ago.

An Arduino With Better Speech Recognition Than Siri

The lowly Arduino, an 8-bit AVR microcontroller with a pitiful amount of RAM, terribly small Flash storage space, and effectively no peripherals to speak of, has better speech recognition capabilities than your Android or iDevice. Eighty percent accuracy, compared to Siri’s sixty. Here’s the video to prove it.

This uSpeech library created by [Arjo Chakravarty] uses the Goertzel algorithm to turn input from a microphone connected to one of the Arduino’s analog pins into phonemes. From there, it’s relatively easy to turn these captured phonemes into function calls for lighting an LED, turning a servo, or even replicating Siri, the modern-day version of the Microsoft paperclip.
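
For the curious, the Goertzel algorithm boils down to a two-state filter that measures the energy at a single target frequency, which is why it fits on an 8-bit AVR where a full FFT would struggle. Below is a minimal, generic sketch of that computation; the sample rate and block size are illustrative assumptions, and this is not the uSpeech code itself.

```c
/* Generic Goertzel sketch: assumed 8 kHz sample rate and 128-sample
 * blocks, not uSpeech's own values or code. */
#include <math.h>

#define SAMPLE_RATE 8000.0f
#define BLOCK_SIZE  128

/* Returns the squared magnitude of the block's content at target_hz. */
float goertzel_power(const int samples[BLOCK_SIZE], float target_hz)
{
    int   k     = (int)(0.5f + (BLOCK_SIZE * target_hz) / SAMPLE_RATE);
    float omega = 2.0f * 3.14159265f * (float)k / (float)BLOCK_SIZE;
    float coeff = 2.0f * cosf(omega);
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f;

    for (int n = 0; n < BLOCK_SIZE; n++) {
        /* Two-state IIR update: one multiply and two adds per sample. */
        s0 = (float)samples[n] + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    /* Collapse the two state variables into the power at the target bin. */
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}
```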

There is one caveat for the uSpeech library: it will only respond to predefined phrases and not normal speech. Still, that’s an extremely impressive accomplishment for a simple microcontroller.

This isn’t the first time we’ve seen [Arjo]’s uSpeech library, but it is the first time we’ve seen it in action. When this was posted months and months ago, [Arjo] was behind the Great Firewall of China and couldn’t post a proper demo. Since the uSpeech library is a spectacular achievement, we asked for a few videos showing off its applications. No one made the effort, so [Arjo] decided to make use of his new VPN and show off his work to the world.

Video below.

Continue reading “An Arduino With Better Speech Recognition Than Siri”

Raspberry Pi Becomes a Universal Translator

We’re still about 150 years away from the invention of the universal translator by [Lt Cdr Sato] of the Enterprise NX-01, but [Dave] has something that’s almost as good: a speech recognition, translation, and text to speech setup for the Raspberry Pi that theoretically allows anyone to speak in sixty different languages.

After setting up all the Linux audio cruft, [Dave] digs in and starts on converting the guttural vocalizations of a meat speaker into something Google’s speech to text service can understand. From there, it’s off to Google again, this time converting text in one language into the writings of another.

[Dave]’s end result is a shell script that works reasonably well for something that won’t be invented for another 150 years. The video below shows the script successfully translating English to Spanish, but it should work equally well with other languages such as Dutch and Latin, as well as less popular languages such as Esperanto and French.
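
If you want a feel for the structure without digging through [Dave]’s script, here’s a rough sketch of the same record, recognize, translate, and speak chain. The stt, translate, and tts commands are hypothetical placeholders for the Google speech-to-text, translation, and text-to-speech calls his script makes; only arecord is a real tool here.

```c
/* A rough sketch of the pipeline, not [Dave]'s script. The stt, translate,
 * and tts commands below are hypothetical placeholders. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* 1. Record five seconds of speech with ALSA's arecord. */
    if (system("arecord -d 5 -f cd /tmp/utterance.wav") != 0) {
        fprintf(stderr, "recording failed\n");
        return 1;
    }

    /* 2. Hypothetical: convert the recording to English text. */
    system("stt /tmp/utterance.wav > /tmp/english.txt");

    /* 3. Hypothetical: translate the text into Spanish. */
    system("translate --from en --to es /tmp/english.txt > /tmp/spanish.txt");

    /* 4. Hypothetical: speak the translated text aloud. */
    system("tts /tmp/spanish.txt");

    return 0;
}
```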

Continue reading “Raspberry Pi Becomes a Universal Translator”

Speech recognition on an Arduino

Speech recognition is usually the purview of fairly high-powered computers chugging along at hundreds of megahertz with megabytes of RAM. Bringing speech recognition to the low-power microcontroller you’d find in an Arduino sounds like the work of a mad scientist or Ph.D. candidate, but that’s exactly what [Arjo Chakravarty] did. He developed the μSpeech library for the Arduino to allow recognition of a limited set of voice commands.

Where most speech recognition systems use FFT and very fancy math to determine what phonemes a user is saying, [Arjo]’s system does away with this unnecessary complexity in favor of using very, very basic integral and differential calculus.
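
To make that concrete, here’s a minimal sketch of the general idea: compare the summed absolute derivative of a sample block (how quickly the waveform wiggles) to its summed absolute value (how loud it is). A high ratio points to a hissy fricative like /s/ or /f/, a low ratio to a vowel. This is an illustration of the approach rather than [Arjo]’s actual code, and the block size is an assumption.

```c
/* Illustration of a derivative/integral phoneme coefficient; not μSpeech
 * itself. Samples are assumed centered around zero (DC offset removed). */
#include <stdlib.h>

#define BLOCK_SIZE 32

float phoneme_coefficient(const int samples[BLOCK_SIZE])
{
    long derivative = 0;  /* sum of |x[n] - x[n-1]|: the "differential" part */
    long integral   = 0;  /* sum of |x[n]|:          the "integral" part     */

    for (int n = 1; n < BLOCK_SIZE; n++) {
        derivative += labs((long)samples[n] - (long)samples[n - 1]);
        integral   += labs((long)samples[n]);
    }
    if (integral == 0)
        return 0.0f;      /* silence: avoid a divide by zero */
    return (float)derivative / (float)integral;
}
```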

From [Arjo]’s user guide for μSpeech (PDF warning) we can see it’s possible to connect a small microphone to the analog input of an Arduino and accept voice commands such as ‘left’, ‘right’, and ‘stop’. The accuracy is pretty good, as well – 80% if μSpeech is trying to recognize words, and 30-40% if μSpeech is programmed to recognize single phonemes.

Sadly we couldn’t find a demo video of μSpeech in action, but you’re more than welcome to grab it via GitHub for your own project. Send us a video of μSpeech in action and we’ll put it up.

Sorting resistors with speech recognition

If you’ve ever had to organize a bunch of resistors, you’ll know why [Anthony] created EESpeak. It’s a voice-controlled component lookup tool that calculates a component value by listening to you read out its color code bands.

In his demo video of EESpeak, [Anthony] reads off the color bands of several resistors whilst the program dutifully calculates and displays the value. [Anthony] also included support for calculating the value of capacitors and inductors by speaking the color bands, as well as EIA-96 codes for SMD parts.
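
Once the spoken colors have been recognized, the value calculation itself is a straightforward table lookup. Here’s a minimal sketch of that back-end step for three-band resistors; the function names are invented for illustration and this is not [Anthony]’s code.

```c
/* Illustrative three-band resistor lookup: first two bands are digits,
 * the third is a power-of-ten multiplier. Speech front end omitted. */
#include <math.h>
#include <stdio.h>
#include <string.h>

static const char *colors[] = {
    "black", "brown", "red", "orange", "yellow",
    "green", "blue", "violet", "grey", "white"
};

/* Returns 0-9 for a valid color name, -1 otherwise. */
static int color_index(const char *name)
{
    for (int i = 0; i < 10; i++)
        if (strcmp(name, colors[i]) == 0)
            return i;
    return -1;
}

/* value = (10 * d1 + d2) * 10^multiplier ohms, or -1 on a bad band name. */
static double resistor_ohms(const char *b1, const char *b2, const char *b3)
{
    int d1 = color_index(b1), d2 = color_index(b2), mul = color_index(b3);
    if (d1 < 0 || d2 < 0 || mul < 0)
        return -1.0;
    return (10 * d1 + d2) * pow(10.0, mul);
}

int main(void)
{
    /* "yellow violet red" -> 4, 7, x100 -> 4700 ohms */
    printf("%.0f ohms\n", resistor_ohms("yellow", "violet", "red"));
    return 0;
}
```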

In addition to taking speech input and flashing a component value on the screen, EESpeak also has a text-to-speech function that will tell you a component’s value without you ever having to look at your monitor.

Even though the text-to-speech function seems a little cumbersome – it takes much longer for a computer to speak a value than to display it on the screen – using voice recognition to calculate component values is an awesome idea. With such an extremely limited vocabulary for the computer to understand, the error rate of EESpeak is probably very low.

You can check out [Anthony]’s demo video after the break, and of course download the app on his blog.

Continue reading “Sorting resistors with speech recognition”