16 thoughts on “Trace Your Book Or Kindle With The FingerReader”

      1. Addendum: Mind you, you could, as the video tries to portray, find the edge of a book and then the edge of the paper, but that ignores pages where the text doesn’t run from top to bottom, and it presumes blind people have a bookcase full of books.
        And don’t e-readers already have text-to-speech for the blind?

        But I have an idea: this might also help the deaf learn to read. I hear it’s very hard for deaf people to learn to read, so if you could adapt this somehow as a teaching aid that shows the sign language version of words they point at, it might make it easier to pick up reading skills.

      2. From TFA:
        “If a user’s finger begins to stray, the FingerReader can vibrate from different areas using two motors along with an audible tone to alert them and help them find their place.”

      3. Uhm, blind people have more sense than you think. All they have to do is feel for the edge of the page and start tracing. They already know to read starting from the left and move right. They already have a keen sense of hearing, probably keener than yours. They’re already accustomed to depending on other vision than their own (i.e. cues from their dogs and other people). If I am ever blind, I want this. I’m not concerned at all that I would have trouble finding the text in a book. It might take some practice, to be sure.

      1. Stroke victims will never read? Most stroke survivors knew how to read prior to their stroke. Those whose stroke affected the right hemisphere of their brain may have difficulty tracking the lines, but they can read the words just fine. Those whose stroke affected the left hemisphere may have to deal with aphasia, a communication disorder. This may aid those suffering from aphasia, I don’t know, but I can’t see it helping those whose right-hemisphere stroke left them with tracking difficulties. I’m 21 years post stroke; luck being relative, I don’t suffer from aphasia.

  1. commendable goal, cute prototype, but ultimate effect is a big fat FAIL
    Why bother with finger tracing when you can take one photo of the whole object and read it using NLP algorithms to achieve seamless sentences and context-aware error correction?

    1. This is what I thought. The only benefit I can see to this is if it were used with text in a really odd layout, signs, labels, etc., but then you’re back to the “how do they know where to point” question. Perhaps it could work for reading shelf labels in the supermarket?

      But it could be useful for helping kids learn to read?
