Get Your Tweets Without Looking

Head-mounted displays range from cumbersome to glass-hole-ish. Smart watches have their niche, but they still take your eyes off whatever you are doing, like driving. Voice assistants can read to you, but they require either a speaker that everyone else in the car has to listen to or a headset that blocks out important sound. Ignoring incoming messages is out of the question, so the answer may be to use a sense other than vision. A joint project between Facebook Inc. and the Massachusetts Institute of Technology has a solution that uses the somatosensory reception of your forearm.

A similar idea came across our desk years ago and seemed promising, but it is hard to sell something that is more difficult than the current technique, even if it is advantageous in the long run. In 2013, a wearer's back was covered in vibrator motors, and the rig acted like a haptic version of a spectrum analyzer. Now the vibrators have been reduced in number to fit under a sleeve by utilizing patterns. It is being developed for people with hearing or vision impairment, but what driver isn't impaired while looking at a phone?

Patterns are what really set this version apart. Rather than relaying a discrete note to a finger, or a range of values across the back, each of the 39 English phonemes is given a unique sequence of vibrations, which is enough to encode any word. A phoneme is the smallest distinct unit of speech. The video below shows how those phonemes are translated to haptic feedback. Hopefully, we can soon send tweets without using our hands or mouths and upgrade to complete telepathy.
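The scheme boils down to a lookup table: each phoneme maps to a distinct vibration pattern, and a word becomes the concatenation of its phonemes' patterns. The project's actual table isn't public, so the phoneme names, motor indices, and timings below are purely illustrative assumptions — a minimal sketch of the idea, not the real mapping:

```python
# Hypothetical sketch of phoneme-to-haptics encoding.
# Assumptions (not from the project): a small sleeve with numbered
# motors, and made-up patterns of (motor indices, duration in ms).

PHONEME_PATTERNS = {
    # phoneme -> (motors to pulse together, pulse duration in ms)
    "K":  ((0, 2), 120),
    "AE": ((1,),   200),
    "T":  ((3,),   120),
}

def encode_word(phonemes):
    """Turn a phoneme sequence into an ordered list of vibration pulses."""
    return [PHONEME_PATTERNS[p] for p in phonemes]

# "cat" broken into phonemes K-AE-T becomes three pulses in sequence:
print(encode_word(["K", "AE", "T"]))
# [((0, 2), 120), ((1,), 200), ((3,), 120)]
```

With 39 table entries instead of three, any English word could be spelled out as a pulse train — the hard part, as the video shows, is training the wearer to decode it.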

Thank you for the tip, [Qes].

5 thoughts on “Get Your Tweets Without Looking”

  1. I don’t get it. Are the buzzes supposed to “feel” like different sounds, so you can “hear” words right off the bat, or do you have to go through training, “that buzz means AY”, “this buzz means EE”?

  2. This reminds me of a voice synthesizer that I built ages ago for the Apple ][ with a chip from Tandy, http://www.futurebots.com/spo256.pdf With 64 phonemes (allophones according to the data sheet) it could pronounce most words (examples are in the data sheet). I built a number guessing game and can still remember how it said “50 is much too low” in its completely inhuman voice ;)

