You normally think of smart glasses as something you wear as either an accessory or, if you need a little assistance, with corrective lenses. But [akhilnagori] has a different kind of smart eyewear. These glasses scan printed text and read it aloud into the user’s ear.
This project was inspired by a blind child who enjoyed listening to stories but could not read beyond a few braille books. The glasses perform the reading using a Raspberry Pi Zero 2 W and a machine learning algorithm.
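To get a feel for how the pieces fit together, here is a minimal capture-then-speak sketch of the kind of loop you could run on a Pi. This is our own illustration, not [akhilnagori]’s actual pipeline: it assumes the picamera2 and pytesseract Python packages plus the espeak speech synthesizer are installed, and a plain Tesseract call stands in for whatever model the glasses really run.

```python
# Minimal capture -> OCR -> speech loop (an illustrative sketch, not the project's code).
# Assumes a Raspberry Pi camera plus the picamera2, pytesseract, and espeak packages.
import subprocess

import pytesseract
from PIL import Image
from picamera2 import Picamera2

def read_page_aloud():
    cam = Picamera2()
    cam.start()
    cam.capture_file("page.jpg")   # grab a frame of whatever the wearer is facing
    cam.stop()

    # Recognize the text in the captured frame
    text = pytesseract.image_to_string(Image.open("page.jpg"))

    if text.strip():
        # Pipe the recognized text to a text-to-speech engine
        subprocess.run(["espeak", text])

if __name__ == "__main__":
    read_page_aloud()
```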
There are several subtasks involved. First, you want to identify each book’s envelope. It wouldn’t do to click on the Joy of Cooking and get information about Remembrance of Things Past.
The next challenge is reading the title of the book. This can be tricky. Fonts differ. The book could be upside down. Some titles run across the spine, but most go vertically. The remainder of the task is fairly easy. If you know the region and the title, you can easily find a link (for Google Books, in this case) and build an SVG overlay that maps the areas for each book to the right link.
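To make that last step concrete, here is a rough sketch of how you could emit such an overlay once you have each book’s bounding box and title. The coordinates, titles, and search URLs below are purely illustrative, not taken from the project.

```python
# Sketch of building the clickable overlay: given each detected spine's bounding
# box and a title, emit one <a><rect> pair per book on top of the shelf photo.
# The box coordinates and search URLs here are hypothetical examples.
from xml.sax.saxutils import escape

books = [
    {"title": "Joy of Cooking", "x": 112, "y": 40, "w": 38, "h": 520},
    {"title": "Remembrance of Things Past", "x": 154, "y": 35, "w": 42, "h": 530},
]

def overlay_svg(books, width=1200, height=800):
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">']
    for b in books:
        url = "https://books.google.com/books?q=" + escape(b["title"]).replace(" ", "+")
        parts.append(
            f'  <a href="{url}"><rect x="{b["x"]}" y="{b["y"]}" width="{b["w"]}" '
            f'height="{b["h"]}" fill="transparent" pointer-events="all"/></a>'
        )
    parts.append("</svg>")
    return "\n".join(parts)

print(overlay_svg(books))
```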
Last time I delivered on this column, I told you about the USPS’ attempts to fully automate a post office. Of course, that’s a bit of a misnomer, since it took 1,500 employees to actually operate the place on a daily basis. Although Project Turnkey in Rhode Island and Project Gateway in California were proving grounds for all kinds of mail sorting and processing equipment, the act of actually reading addresses and routing mail to its final destination still required human intervention and hand coding.
Today, the post office processes hundreds of millions of mail pieces each day using all manner of equipment. One of the most important pieces is the OCR address reader, which manages to make sense of all kinds of chicken scratch.
Growing up, ours was a family of handwritten notes for every occasion. The majority were left on the kitchen counter next to the sink, or in a particular spot on the all-purpose table in the breakfast nook. Whether one was professing their familial love and devotion on the back of a Valpak coupon, or simply communicating an intent to be home before dinnertime, the words were generally immortalized in BiC on whatever paper was available, and timestamped for the reader’s information. You may have learned cursive in school, but I was born in it — molded by it. The ascenders and descenders betray you because they belong to me.
Both of my parents always seemed to be incapable of printing in anything other than all caps, so I actually preferred to see their cursive most of the time. As a result, I could read it quite easily from an early age. Well, I don’t think I ever had any hope of imitating Dad’s signature. But Mom’s, on the other hand — like I said in the first installment, it was important for my signature to be distinct from hers, given that we have the same name — first, middle, and last. But I could probably still bust out her signature if it came down to something going on my permanent record.
While my handwriting was sort of naturally headed towards Mom’s, I was more interested in Dad’s style and that of my older brother. He had small caps handwriting down to an art, and my attempts to copy it have always looked angry and stilted by comparison. In addition, my brother’s cursive is lovely and quick, while still being legible.
Unlike probably most people, I enjoy the act of writing by hand — but I’ve always disliked signing my name. Why is that? I think it’s because signatures are supposed to be in cursive, or else they don’t count. At least, that’s what I was taught growing up. (And I’m really not that old, I swear!)
Having the exact same name as my mother meant that it was important to adolescent me to be different, and that included making sure our signatures looked nothing alike. Whereas her gentle, looping hand spoke to her sensitive and friendly nature, my heavy-handed block print was just another way of letting out my teen angst. Sometime in the last couple of decades, my signature became K-squiggle P-squiggle, which is really just a sped-up, screw-you version of my modern handwriting, which is a combination of print and cursive.
D’Nealian print. Notice the ‘monkey tails’ on every possible lowercase letter.
D’Nealian cursive. Notice the stroke order and the ridiculous capital Q.
But let’s back up a bit. I started learning to write in kindergarten, but that of course was in print, with separate letters. Me and my fellow Xennial zeitgeistians learned a specific printing method called D’Nealian, which was designed to ease the transition from printing to cursive with its curly tails on every letter.
We practiced our D’Nealian (So fancy! So grown-up!) on something called Zaner-Bloser paper, which is still used today, and by probably second grade were making that transition from easy Zorro-like lowercase Zs to the quite mature-looking double-squiggle of the cursive version. It was as though our handwriting was moving from day to night, changing and moving as fast as we were. You’d think we would have appreciated learning a way of writing that was more like us — a blur of activity, everything connected, an oddly-modular alphabet that was supposed to serve us well in adulthood. But we didn’t. We hated it. And you probably did, too.
It’s one of those things that certainly sounds simple enough: take a picture of a receipt, run it through optical character recognition (OCR), and send the resulting information to whatever expense-tracking website or software you wish. There are companies that offer such a service, so it can’t be too difficult to replicate on your own…right?
That’s what [Marcel Robitaille] thought when he set out to create his homebrew “Receipt Ingestion” system, anyway. But in reality it took so much time to troubleshoot and implement that he says it would have been faster to just enter in all his receipts by hand. We’re happy he stuck with it though, otherwise you wouldn’t be reading about it on Hackaday, and we wouldn’t be able to learn anything from the detailed account he’s provided.
It only took an evening to hack together a rough demo, and the initial results were very promising. The code could detect the edges of the receipt, rotate the captured image appropriately, and then pull out the critical information such as date, total amount, business name, etc. He was then able to decipher the API for Splitwise, an online service for splitting bills, by capturing the data sent by his browser while adding a new bill. With this information, writing up some Python code to push his captured data into the service was trivial. So far, so good.
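As an illustration of that proof-of-concept stage (our own sketch, not [Marcel]’s actual code), the snippet below finds the receipt’s outline with OpenCV, flattens it with a perspective transform, and hands the result to Tesseract. The Canny thresholds and the fixed output size are assumptions. From there, pushing the extracted totals into Splitwise is just an authenticated request against the endpoint his browser capture revealed.

```python
# Rough sketch of the happy-path receipt scan: find the outline, deskew it,
# then run OCR. Thresholds and output dimensions are placeholder assumptions.
import cv2
import numpy as np
import pytesseract

def order_corners(pts):
    # Order the four corners as top-left, top-right, bottom-right, bottom-left
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1).ravel()
    return np.float32([pts[np.argmin(s)], pts[np.argmin(d)],
                       pts[np.argmax(s)], pts[np.argmax(d)]])

def scan_receipt(path):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # Assume the largest four-sided contour is the receipt itself
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    outline = max(contours, key=cv2.contourArea)
    quad = cv2.approxPolyDP(outline, 0.02 * cv2.arcLength(outline, True), True)
    if len(quad) != 4:
        raise ValueError("no clean receipt outline found")

    # Warp the slanted receipt into an upright rectangle before running OCR
    src = order_corners(quad.reshape(4, 2).astype(np.float32))
    dst = np.float32([[0, 0], [600, 0], [600, 1000], [0, 1000]])
    flat = cv2.warpPerspective(img, cv2.getPerspectiveTransform(src, dst), (600, 1000))
    return pytesseract.image_to_string(flat)
```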
But like so many horror films that begin with a happy family starting a new life in a beautiful home, there was a monster lurking in the shadows. It’s one thing to capture data from perfectly clean and flat receipts, but quite another to get any useful info out of one that spent half the day crumpled up in your back pocket. The promising proof of concept that worked a treat under controlled conditions failed completely in the real world, with [Marcel] reporting that only 1 in 5 receipts he tried to scan actually went through.
In the end, [Marcel] realized that the best way to handle the unreliable condition of the receipts was to focus on a different object in the image. He came up with a QR code marker that he could put on the table with the receipt to be scanned, which his software can use as a known point of reference. This greatly improves the reliability of the image rotation and transformation, which in turn makes the OCR more reliable. It also makes it much easier to tell which images need to be scanned — if there’s no QR code found, the software just skips that shot and keeps looking.
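OpenCV happens to ship a QR code detector, so the gatekeeping step could look something like the sketch below. The function and file names are illustrative rather than lifted from [Marcel]’s repository; the idea is simply that the marker’s four corners give the later transform a known reference, and images without a marker get skipped.

```python
# Sketch of the QR-marker gate: if OpenCV's detector can't find the marker,
# skip the image; otherwise its four corner points anchor the deskew step.
import cv2

def marker_corners(path):
    img = cv2.imread(path)
    detector = cv2.QRCodeDetector()
    data, corners, _ = detector.detectAndDecode(img)
    if corners is None or not data:
        return None                    # no marker found: not a receipt photo
    return corners.reshape(4, 2)       # reference points for the transform

corners = marker_corners("photo.jpg")
if corners is None:
    print("no QR marker found, skipping this shot")
```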
The unique challenges of digitizing large amounts of printed content with OCR make for some fascinating problem solving, and we’re glad [Marcel] shared this particular story with us. While there are still some edge cases that need chasing down, he’s using the software on a nearly daily basis, and has posted it up on GitHub for anyone who might wish to build on his efforts.
[Kevin Norman] got himself a smart body scale with the intention of logging data for his own analysis, but discovered that extracting data from the device was anything but easy. It turns out that the only way to access data from his scale is by viewing it in a mobile app. Screen-scraping is a time-honored method of pulling data from uncooperative systems, so [Kevin] committed to regularly taking a full-height screenshot from the app and using optical character recognition (OCR) to get the numbers, but making that work was a surprisingly long process full of dead ends.
First of all, while OCR can be reliable, it needs the right conditions. One thing that ended up being a big problem was the way the app appends units (kg, %) right after the numbers. Not only are they tucked in very close, but they’re about half the height of the numbers themselves. It turns out that mixing character heights, and snugging the characters up against one another, is tailor-made to give OCR reliability problems.
The solution for this particular issue came from an unexpected angle. [Kevin] was using an open-source OCR program called Tesseract, and joined the #tesseract IRC channel to ask for advice after exhausting his own options. The bemused members of the online community informed [Kevin] that they had nothing to do with OCR; #tesseract was actually the channel for an open-source 3D first-person shooter of the same name. But as luck would have it, one of the members actually had OCR experience and suggested the winning approach: pre-process the image with OpenCV, using cv2.findContours() to detect and create a bounding box around each element. If an element is taller than a decimal point but shorter than everything else, throw it out. With that done, there were still a few more tweaks required, but the finish line was finally in sight.
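To give a flavor of that pre-processing step, here is a rough sketch along the lines of what was suggested. The height cut-offs are placeholders, since the real values depend on the app’s font sizes, and the light-background assumption may not match the actual screenshots.

```python
# Sketch of the suggested pre-processing: box every glyph with findContours,
# then blank out anything whose height says "unit label" rather than "digit".
# The height cut-offs (10 and 40 pixels) are placeholder assumptions.
import cv2

def strip_units(path, out_path="digits_only.png"):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Taller than a decimal point, shorter than the digits: unit text, erase it
        if 10 < h < 40:
            img[y:y + h, x:x + w] = 255   # paint over it (assuming a light background)

    cv2.imwrite(out_path, img)
    return out_path
```

Tesseract then gets fed the cleaned-up image instead of the raw screenshot.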
Now [Kevin] can use the scale in the morning, take a screenshot, and in less than half a minute the results are imported into a database and visualizations generated. The resulting workflow might look like something Rube Goldberg would approve of, but it works!