Smart Glasses Read Text

You normally think of smart glasses as something you wear as an accessory or, if you need a little assistance, as corrective lenses. But [akhilnagori] has a different kind of smart eyewear. These glasses scan printed text and read it aloud into the user’s ear.

This project was inspired by a blind child who enjoyed listening to stories but could not read beyond a few braille books. The glasses perform the reading using a Raspberry Pi Zero 2 W and a machine learning algorithm.

The original software development took place on a Windows machine, using WSL to simplify porting to the Linux-based Raspberry Pi board.

The frame is 3D printed, of course. Mounting the CPU, a camera, and a battery, along with a DC-to-DC converter, is fairly trivial. The real heavy lifting is in the software. The glasses snap a picture every ten seconds. It might be interesting to add a button or other means to let the user trigger a scan.

Of course, you could build something similar to run on just about any device with a camera and Python. It would be easy, for example, to put something in a hand-held format.
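The overall loop is simple enough to sketch in a few lines of Python. The helpers below are hypothetical stubs standing in for the hardware-specific pieces (a real build might use `picamera2` for capture, Tesseract for OCR, and eSpeak or `pyttsx3` for speech); what carries over is the control logic, including a button-style trigger alongside the ten-second timer:

```python
import time

SCAN_INTERVAL = 10.0  # seconds between automatic scans, as in the original build


def capture_image():
    """Stub: on real hardware this would grab a frame from the camera
    (e.g. via picamera2 on a Raspberry Pi)."""
    return b"<jpeg bytes>"


def extract_text(image):
    """Stub: a real build would hand the frame to an OCR engine
    such as Tesseract and return the recognized text."""
    return "Hello, world"


def speak(text):
    """Stub: a real build would pipe the text to a TTS engine
    (eSpeak, pyttsx3, etc.)."""
    print(text)


def should_scan(now, last_scan, button_pressed=False):
    """Scan when the user presses a button, or when the automatic
    interval has elapsed since the last scan."""
    return button_pressed or (now - last_scan) >= SCAN_INTERVAL


def run(iterations=3):
    last_scan = -SCAN_INTERVAL  # force an immediate first scan
    for _ in range(iterations):
        now = time.monotonic()
        if should_scan(now, last_scan):
            speak(extract_text(capture_image()))
            last_scan = now
        time.sleep(0.1)
```

Swapping the stubs for real camera, OCR, and TTS calls is the whole port; the same skeleton would work just as well in a hand-held gadget as in a pair of glasses.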

OCR is largely a solved problem. There are commercial smart glasses that look nice, and we wonder if any of them will get similar apps.

6 thoughts on “Smart Glasses Read Text”

  1. Hmm, there seems to be a disconnect between what’s described and what was actually made. Did the writer actually read and look at the Instructables post? Those glasses were not completed, and the image itself shows just 3D printed frames. It’s a very interesting project concept for sure, but it’s just not an accurate write-up and description by the Instructables poster or the HaD writer.

    1. Why are you saying this? Nothing in the Instructable indicates the project wasn’t finished or doesn’t work. Source code and STLs are provided.

      Just because there isn’t a picture of the completed build, you assume it’s fake?

      1. Well, it says as much in the text: for accurate picture-taking you would need a pushbutton, and there was no time to add that to the project. It’s still a nice idea, and it’s good they did a write-up, but to be useful to the intended people I guess it needs a lot of iteration to be of real practical value.

  2. Every 10 seconds? We can speed that up. This is just a prototype, but the visually impaired will see, and the deaf and people who don’t get sarcasm will eventually be cured.
