Real Life Subtitle Glasses

[Will Powell] sent in his real-time subtitle glasses project. Inspired by the ever cool Google Project Glass, he decided he would experiment with his own version.

He used two Raspberry Pis running Debian Squeeze, Vuzix glasses, microphones, a TV, an iPad, and an iPhone as the hardware components. The flow of data is kind of strange in this project. The audio first gets picked up by a Bluetooth microphone and streamed through a smart device to a server on the network. Once it’s on the server, it gets passed through Microsoft’s translation API. After that, the translated message is sent back to a Raspberry Pi, where it’s displayed as subtitles on the glasses.
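As a rough illustration of the server-side step, here is a minimal sketch of how recognized speech might be translated and wrapped into display-sized subtitle lines. Note that `translate()` is a hypothetical stand-in for the actual Microsoft translation API call, and the 32-character display width is an assumption, not a detail from the build:

```python
# Sketch of the server-side flow: recognized text comes in, gets
# translated, and is wrapped into short lines for the glasses' display.
import textwrap

DISPLAY_WIDTH = 32  # assumed character width of the glasses display


def translate(text: str, target: str = "en") -> str:
    """Hypothetical stand-in for the Microsoft translation API call."""
    return text  # a real implementation would call the web API here


def to_subtitles(recognized: str, target: str = "en") -> list[str]:
    """Translate recognized speech and wrap it into display-sized lines."""
    translated = translate(recognized, target)
    return textwrap.wrap(translated, width=DISPLAY_WIDTH)
```

Wrapping on the server rather than on the Pi keeps the glasses-side code to a simple "draw these lines" loop.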

Of course this is far from a universal translation device as seen in Star Trek. The person being translated has to speak clearly into a microphone, and the multi-device pipeline adds a huge layer of complexity. Still, as far as tech demos go it is pretty cool, and you can see him playing a game of chess using the system after the break.


15 thoughts on “Real Life Subtitle Glasses”

  1. My only real complaint is that the font needs either an outline or a drop shadow to allow the white characters to be seen against a light background. It’s why the captions on DVD players all have it.

    It’s also a little slow, but for doing STT and translation in real time on such relatively underpowered hardware, one can’t complain too much.

  2. I wonder if this system would be helpful to deaf people? I mean, I guess they get good at lip reading, but I’ve never met a person with speech/hearing difficulties.

    A similar text (or ASL) to speech system would make it quite useful. Has anyone seen a Kinect system that translates ASL to speech?

    1. As a deaf person, yes, this sort of thing would be useful. Sony has something like this for movie theaters, and Regal Cinemas currently has them in place in some markets. Turning affordable off-the-shelf parts into a solution like this is exciting. CART (Communication Access Realtime Translation) is something that currently exists, but glasses like these would expand accessibility. I imagine the speaker using a wireless headset while a CART provider (think courtroom stenographer) produces the text, which could then be sent wirelessly to the glasses. This would be fantastic for walking tours at museums and the like.

      To answer your second question: “No.” The Kinect will never translate ASL to speech; there’s not enough detail. The second-generation Kinect might be able to, but translating ASL to speech would require as much work and development as speech to text. You could perhaps do a few individual words here and there. Nearly all signs in ASL can be broken down into a finite number of handshapes, locations, and movements. However, as someone who signs as a second language, I don’t see automated ASL to text happening any time soon.

  3. A nice project indeed, with many uses for sure. It would be even more useful if it updated a bit faster.

    As a European, though, hearing a strong American accent on any other language makes me cringe.

    I’m not putting the girl down, please; accents are hard to get rid of and even harder to disguise. American accents are just extra, extra bad. Especially with French :S “Bong-joor, jay wood-ray un bieeeeeeer”? *cringe* Especially the heavy West American ones.

    I have nothing against American English, only the accents when speaking other languages xD

  4. Wouldn’t it be useful for anyone who goes to a “local” spot (not a 5-star hotel lobby) in another country, say a Korean street market, and wants to converse in a simple but helpful way?
