You Won’t Hear This Word On The Street

The simplest answer to a problem is not necessarily the best one. If you ask, “How do I get a voice assistant to work on a crowded subway car?”, the simplest answer is to shout into a microphone, but we don’t want to ask Siri to put toilet paper on the shopping list at the top of our lungs in front of fellow passengers. This is “not a technical issue but a mental issue,” according to [Masaaki Fukumoto], lead researcher at Microsoft in “hardware and devices” and “human-computer interaction.” SilentVoice was demonstrated in Berlin at the ACM Symposium on User Interface Software and Technology, where it showed live transcription of nearly silent speech. A short demonstration video can be found below the break.

SilentVoice relies on a different way of speaking and a different way of picking up that sound. Instead of traditional dictation, where we exhale while facing a microphone, the microphone is placed less than two millimeters from the mouth, usually right against the lips, and the user employs ingressive speech, which is essentially whispering while inhaling. The advantage of ingressive over egressive speech is that with no air being blown over the microphone, the popping of air gusts is eliminated. With practice it is as efficient as normal speaking, but that practice will probably involve a few dizzy spells from inhaling more than necessary.
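To get a feel for why breath blowing across a capsule matters, here is a minimal sketch (our own illustration, not anything from the SilentVoice paper) that estimates how much of a recording’s energy sits in the low band where breath pops concentrate. The file name, mono format, and 150 Hz cutoff are all assumptions:

```python
# Illustrative sketch: estimate how much of a recording's energy sits in
# the low-frequency band where breath pops live. Not from SilentVoice;
# assumes a mono WAV file and only needs numpy and scipy.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def pop_energy_ratio(path, cutoff_hz=150.0):
    rate, samples = wavfile.read(path)       # assumed mono
    samples = samples.astype(np.float64)
    freqs, psd = welch(samples, fs=rate, nperseg=2048)
    low = psd[freqs < cutoff_hz].sum()        # energy below the cutoff
    return low / psd.sum()

# 'sample.wav' is a placeholder file name for this illustration.
print(f"pop ratio: {pop_energy_ratio('sample.wav'):.2%}")
```

Run it on an egressive close-talking clip and an ingressive one; the egressive recording should score dramatically higher, which is exactly the noise that speaking while inhaling sidesteps.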


Speech Recognition Without A Voice

The biggest change in Human-Computer Interaction over the past few years has been the rise of voice assistants. The Siris and Alexas are our HAL 9000s, and soon we’ll be using these assistants to open the garage door. They might just do it this time.

What would happen if you could talk to these voice assistants without saying a word? Would that be telepathy? That’s exactly what [Annie Ho] is doing with Cerebro Voice, a project in this year’s Hackaday Prize.

At its core, the idea behind Cerebro Voice is based on subvocal recognition, a technique that detects electrical signals from the vocal cords and other muscles involved in speaking. These electrical signals are collected by surface electromyography (EMG) sensors, then sent to a computer for processing and reconstruction into words. It’s a proven technology, and even NASA is calling it ‘synthetic telepathy’.
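The front end of such a rig looks much like any other surface EMG pipeline. As a rough sketch of the usual chain (our assumptions about typical sample rates and filter corners, not the team’s code): band-pass the raw signal, rectify it, and low-pass it into an activation envelope that downstream models can work with.

```python
# Illustrative sketch of a standard surface-EMG preprocessing chain
# (band-pass, rectify, envelope); not code from the Cerebro Voice project.
# The sample rate and filter corners are typical values, assumed here.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000.0  # Hz, assumed EMG sample rate

def emg_envelope(raw, fs=FS):
    # Band-pass 20-450 Hz keeps muscle activity, drops motion artifacts.
    bp = butter(4, [20, 450], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(bp, raw)
    # Full-wave rectify, then low-pass at 10 Hz for the activation envelope.
    lp = butter(4, 10, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(lp, np.abs(filtered))

# A simulated burst of muscle activity stands out clearly in the envelope.
t = np.arange(0, 2, 1 / FS)
raw = np.random.randn(t.size) * np.where((t > 0.8) & (t < 1.2), 1.0, 0.1)
print(emg_envelope(raw).max())
```

Filtering forward and backward with sosfiltfilt keeps the envelope aligned in time with the raw signal, which matters when EMG samples are later paired with audio labels.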

The team behind this project is just in the early stages of prototyping this device, and so far they’re using EMG hardware and microphones to train a convolutional neural network that will translate electrical signals into a user’s inner monologue. It’s an amazing project, and one of the best we’ve seen in the Human Computer Interface challenge in this year’s Hackaday Prize.
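Since the hardware and dataset are still taking shape, any network details are speculative; a convolutional classifier over fixed-length EMG windows might look something like this minimal PyTorch sketch, where the channel count, window length, vocabulary size, and layer sizes are all placeholder assumptions:

```python
# Hypothetical sketch of a 1-D CNN that maps fixed-length multichannel EMG
# windows to a small word vocabulary. Architecture and sizes are our
# assumptions, not details published by the Cerebro Voice team.
import torch
import torch.nn as nn

class EmgWordNet(nn.Module):
    def __init__(self, channels=8, n_words=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_words)

    def forward(self, x):  # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

# One second of 8-channel EMG at 1 kHz -> logits over a 10-word vocabulary.
net = EmgWordNet()
logits = net(torch.randn(1, 8, 1000))
print(logits.shape)  # torch.Size([1, 10])
```

During data collection, the microphone recordings mentioned above would supply the ground-truth word for each EMG window, giving a network like this its training labels.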