There was a time when, if you saw someone walking down the street talking to no one, they were probably crazy. Now you have to look for a Bluetooth headset. But soon they may just be quietly talking to their glasses. Cornell University researchers have developed EchoSpeech, which uses sonar-like sensors in a pair of glasses to watch your lips and mouth move. From that data, they can figure out what you are saying, even if you don't actually say it out loud. You can see a video of the glasses below.
There are a few advantages to a method like this. For one thing, you can issue commands even in places where you can't talk out loud to a microphone. There have been HAL 9000-like attempts to read lips with cameras, but cameras are power-hungry, and video tends to be data-intensive.
By comparison, EchoSpeech uses low-power speakers and transducers to silently collect a modest amount of data. Beyond convenience, this tech could be a real breakthrough for people who can't speak for some reason but can still move their lips and mouth.
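The Cornell team's actual pipeline pairs continuous near-ultrasonic transmissions with a deep learning model, but the basic sonar idea is easy to sketch. Here's a minimal Python illustration under our own assumptions; the names and parameters (make_chirp, echo_profile, the 16-20 kHz sweep) are illustrative, not from the paper. The idea: emit a short inaudible chirp, cross-correlate the microphone capture against it, and treat the resulting echo profile as one feature frame.

```python
# Minimal sketch of sonar-style acoustic sensing. This is a toy
# illustration, not EchoSpeech's real pipeline, which uses continuous
# transmit waves and a neural network on differential echo profiles.

import numpy as np

FS = 48_000               # sample rate (Hz), typical for audio codecs
CHIRP_MS = 5              # chirp duration in milliseconds
F0, F1 = 16_000, 20_000   # near-ultrasonic sweep, mostly inaudible

def make_chirp() -> np.ndarray:
    """Linear frequency sweep used as the transmitted pulse."""
    t = np.arange(int(FS * CHIRP_MS / 1000)) / FS
    phase = 2 * np.pi * (F0 * t + (F1 - F0) * t**2 / (2 * t[-1]))
    return np.sin(phase)

def echo_profile(chirp: np.ndarray, capture: np.ndarray) -> np.ndarray:
    """Cross-correlate the capture with the chirp; peaks correspond
    to reflections at different round-trip delays (i.e. distances)."""
    return np.abs(np.correlate(capture, chirp, mode="valid"))

if __name__ == "__main__":
    chirp = make_chirp()
    # Fake a capture: the chirp bounced off the lips with some delay,
    # attenuation, and noise, standing in for a real microphone buffer.
    delay = int(0.5e-3 * FS)  # ~0.5 ms round trip, a few centimeters
    capture = np.zeros(len(chirp) + 4 * delay)
    capture[delay:delay + len(chirp)] += 0.3 * chirp
    capture += 0.01 * np.random.randn(len(capture))

    profile = echo_profile(chirp, capture)
    print("strongest echo at sample", int(np.argmax(profile)),
          "(expected near", delay, ")")
```

Stacking successive profiles over time yields a 2D picture of mouth motion, which is the kind of input a small classifier can map to commands on-device.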
We've often wondered whether Star Trek-style voice commands would be a pain in a 25th-century cube farm. EchoSpeech could solve that problem since you don't actually speak out loud.
Google Glass wasn't very successful, but this might be viable for some users. Even better if it were integrated with some test equipment. These would also be much simpler to hack together than a Google Glass replacement, and we've already seen some simple head-mounted gear that was actually useful.
Why does that girl look like she’s being held against her will??
If the article had been about Google Glass, it would be obvious.
B^)
Perhaps the ST lidar arrays (also tiny, low-power, and within I2C bandwidth) would be a good compromise between this and a video camera?
So cool, thanks for sharing this.