Extracting Data From Smart Scale Gives Rube Goldberg A Run For His Money

[Kevin Norman] got himself a smart body scale with the intention of logging data for his own analysis, but discovered that extracting data from the device was anything but easy. It turns out that the only way to access data from his scale is by viewing it in a mobile app. Screen-scraping is a time-honored method of pulling data from uncooperative systems, so [Kevin] committed to regularly taking a full-height screenshot from the app and using optical character recognition (OCR) to get the numbers, but making that work was a surprisingly long process full of dead ends.

First of all, while OCR can be reliable, it needs the right conditions. One big problem turned out to be the way the app appends units (kg, %) after the numbers. Not only are they tucked in very close, but they’re about half the height of the numbers themselves. It turns out that mixing character heights, and snugging the characters right up against one another, is tailor-made to give OCR reliability problems.

The solution for this particular issue came from an unexpected angle. [Kevin] was using an open-source OCR program called Tesseract, and after exhausting his own options he joined the IRC channel #tesseract to ask for advice. The bemused members of the channel informed [Kevin] that they had nothing to do with OCR; #tesseract is actually the community for an open-source first-person shooter of the same name. But as luck would have it, one of the members actually had OCR experience and suggested the winning approach: pre-process the image with OpenCV, using cv2.findContours() to detect and put a bounding box around each element. If an element is taller than a decimal point but shorter than everything else, throw it out. With that done, there were still a few more tweaks required, but the finish line was finally in sight.
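To get a feel for what that filtering looks like in practice, here is a minimal OpenCV sketch of the idea. It is not [Kevin]’s actual code, and the file name and the 0.8 height cutoff are just illustrative; it assumes a screenshot crop with dark digits and units on a light background.

```python
# Minimal sketch: erase mid-height glyphs (the unit labels) before handing
# the image to Tesseract. Assumes dark text on a light background.
import cv2

img = cv2.imread("reading.png", cv2.IMREAD_GRAYSCALE)
# Invert so glyphs become white blobs on black, then binarise with Otsu.
_, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours]   # (x, y, w, h) per glyph

tallest = max(h for _, _, _, h in boxes)          # a full-height digit
dot = min(h for _, _, _, h in boxes)              # the decimal point

cleaned = img.copy()
for x, y, w, h in boxes:
    # Taller than the decimal point but shorter than the digits: a unit label.
    if dot < h < 0.8 * tallest:
        cleaned[y:y + h, x:x + w] = 255           # paint it background-white

cv2.imwrite("cleaned.png", cleaned)               # this goes to Tesseract
```

With the unit labels painted out, the OCR engine only ever sees uniformly sized digits plus a decimal point, which is much friendlier input.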

Now [Kevin] can use the scale in the morning, take a screenshot, and in less than half a minute the results are imported into a database and visualizations are generated. The resulting workflow might look like something Rube Goldberg would approve of, but it works!

Raspberry Pi Reads What It Sees, Delights Children

[Geyes30]’s Raspberry Pi project does one thing: it finds arbitrary text in the camera’s view and reads it out loud. Does it do so flawlessly? Not really. Was it at least effortless to put together? Also no, but it does wonderfully illustrate the process of gluing together different bits of functionality to make something new. Also, [geyes30]’s kids find it fascinating, and that’s a win all on its own.

The device is made from a Raspberry Pi and camera and works by sending a still image from the camera to an optical character recognition (OCR) program, which converts any visible text in the image to its ASCII representation. The recognized text is then piped to the espeak engine and spoken aloud. Getting all the tools to play nicely took a bit of work, but [geyes30] documented everything so well that even a novice should be able to get the project up and running in an afternoon.
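As a rough sketch of how those pieces can be glued together — not [geyes30]’s actual scripts, and the specific tools and options here are just one plausible combination — the whole loop fits in a few subprocess calls:

```python
# Capture -> OCR -> speech, glued together with external tools.
# Assumes libcamera-still, tesseract, and espeak are installed on the Pi.
import subprocess

# Grab a still frame from the Pi camera.
subprocess.run(["libcamera-still", "-o", "frame.jpg", "--nopreview"], check=True)

# Run Tesseract; it writes any recognised text to frame.txt.
subprocess.run(["tesseract", "frame.jpg", "frame"], check=True)

with open("frame.txt") as f:
    text = f.read().strip()

# Hand whatever it found to espeak to be spoken aloud.
if text:
    subprocess.run(["espeak", text], check=True)
```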

Sometimes a function like text-to-speech is an end in and of itself. That was also true of a similar project: Magic Mirror, whose purpose was to tirelessly indulge children’s curiosity about language.

Seeing other projects come to life and learning about new tools is a great way to get new ideas, and documenting them helps cross-pollinate among creative types. Did something inspire you recently, or have you documented your own project? We want to hear about it and so do others, so let us know via the tips line!


Soil Sensor Shows Flip-Dots Aren’t Just For Signs

Soil sensors are handy things, but while sensing moisture is what they do, how they handle that data is what makes them useful. Ensuring usefulness is what led [Maakbaas] to design and create an ESP32-based soil moisture sensor with wireless connectivity, deep sleep, data logging, and the ability to indicate that the host plant needs watering both visually, and with a push notification to a mobile phone.

A small flip-dot indicator makes a nifty one-dot display that requires no power when idle.

The visual notification part is pretty nifty, because [Maakbaas] uses a small flip-dot indicator made by Alfa-Zeta. This electromechanical indicator uses two small coils to flip a colored disk between red and green, and it uses no power when idle, which is a useful feature for a device that spends most of its time in a power-saving deep sleep. When all is well the indicator is green, but when the plant needs water, the indicator flips to red.

The sensor wakes up once per hour to take a measurement, which it stores in a local buffer and uploads to a database every 24 measurements. This reduces the number of times the device needs to power up and connect via WiFi, but if the sensor ever determines that the plant requires water, that gets handled immediately.
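The wake-buffer-upload strategy might look something like the MicroPython-flavored sketch below. To be clear, this is purely illustrative: [Maakbaas]’s real firmware, the pin assignment, the dryness threshold, and the upload step are all placeholders here.

```python
# Illustrative MicroPython sketch of the hourly wake / daily upload strategy.
# Pin, threshold, and upload details are made up; the real firmware differs.
import machine
import ujson

BUFFER_FILE = "buffer.json"    # survives deep-sleep cycles in flash
DRY_THRESHOLD = 1200           # raw ADC reading treated as "needs water"
SLEEP_MS = 60 * 60 * 1000      # wake once per hour

def read_moisture():
    # Hypothetical capacitive probe on GPIO34.
    return machine.ADC(machine.Pin(34)).read()

def load_buffer():
    try:
        with open(BUFFER_FILE) as f:
            return ujson.load(f)
    except OSError:
        return []

samples = load_buffer()
value = read_moisture()
samples.append(value)

if value < DRY_THRESHOLD:
    pass  # flip the dot to red and fire off the push notification (not shown)

if len(samples) >= 24:
    # Once a day: bring up WiFi, POST the buffered samples to the database,
    # then start a fresh buffer (connection code not shown).
    samples = []

with open(BUFFER_FILE, "w") as f:
    ujson.dump(samples, f)

machine.deepsleep(SLEEP_MS)    # back to sleep for another hour
```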

The sensor looks great, and a 3D-printed enclosure helps keep it clean while giving the device a bit of personality. Interested in rolling your own sensor? The project also has a page on Hackaday.io and we’ve previously covered in-depth details about how these devices work. Whether you are designing your own solution or using existing hardware, just remember to stay away from cheap probes that aren’t worth their weight in potting soil.

Eye-Tracking Device Is A Tiny Movie Theatre For Jumping Spiders

The eyes are windows into the mind, and this research into what jumping spiders look at (and why) required a clever device that performs eye tracking, but for spiders. The eyesight of these fascinating creatures in some ways has a lot in common with our own: we both perceive a wide-angle region of lower visual fidelity, but are capable of directing our attention to areas of interest within it to see greater detail. Researchers have been able to perform eye-tracking on jumping spiders, literally showing exactly where they are looking in real time, with the help of a custom device that works a little bit like a miniature movie theatre.

A harmless temporary adhesive on top (and a foam ball for a perch) holds a spider in front of a micro movie projector and IR camera. Spiders were not harmed in the research.

To do this, researchers had to get clever. The unblinking lenses of a spider’s two front-facing primary eyes do not move. Instead, to look at different things, the cone-shaped inside of each eye is shifted around by muscles, which effectively pulls the retina around to point at different areas of interest. Each primary eye has a boomerang-shaped retina, and together the pair forms an X-shaped region of higher-resolution vision that the spider directs as needed.

So how does the spider eye tracker work? The spider perches on a tiny foam ball and is attached — with the help of a harmless and temporary adhesive based on beeswax — to a small bristle. In this way, the spider is held stably in front of a video screen without otherwise being restrained. The spider is shown home movies while an IR camera picks up the reflection of IR light off the retinas inside the spider’s two primary eyes. By superimposing that IR reflection onto the displayed video, it becomes possible to literally see exactly where the spider is looking at any given moment. This is similar in some ways to how eye tracking is done for humans, which also uses IR but watches the position of the pupil.
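The overlay step is conceptually simple. The OpenCV sketch below — emphatically not the researchers’ software, and assuming the IR frame and the stimulus frame are already aligned and the same resolution — shows the general idea of finding the bright retinal reflections and drawing them over the video being shown:

```python
# Sketch: find bright IR reflections and overlay them on the stimulus video.
# Camera index, file name, and threshold are assumptions for illustration.
import cv2

ir_cap = cv2.VideoCapture(0)             # IR camera watching the retinas
stim = cv2.VideoCapture("stimulus.mp4")  # the video the spider is watching

while True:
    ok_ir, ir = ir_cap.read()
    ok_st, frame = stim.read()
    if not (ok_ir and ok_st):
        break

    gray = cv2.cvtColor(ir, cv2.COLOR_BGR2GRAY)
    # Retinal reflections show up as bright blobs in the IR image.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Mark the gaze location on the displayed frame.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

    cv2.imshow("gaze overlay", frame)
    if cv2.waitKey(30) & 0xFF == 27:      # Esc to quit
        break
```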

In the short video embedded below, if you look closely you can see the two retinas make an X shape in a faintly lighter color than the rest of the background. Watch the spider find and focus on the silhouette of a tasty cricket; then, when a dark oval appears and grows larger (as it would if it were getting closer), the spider’s gaze quickly snaps over to the potential threat.

Feel a need to know more about jumping spiders? This eye-tracking research was featured as part of a larger Science News article highlighting the deep sensory spectrum these fascinating creatures inhabit, most of which is completely inaccessible to humans.


RC car without a top, showing electronics inside.

Fast Indoor Robot Watches Ceiling Lights, Instead Of The Road

[Andy]’s robot is an autonomous RC car, and he shares the localization algorithm he developed to help the car keep track of itself while it zips crazily around an indoor racetrack. Since a robot like this is perfectly capable of driving faster than it can sense, his localization method is the secret to pouring on additional speed without worrying about the car losing itself.

The regular pattern of ceiling lights makes a good foundation for the system to localize itself.

To pull this off, [Andy] uses a camera with a fisheye lens aimed up towards the ceiling, and the video is processed on a Raspberry Pi 3. His implementation is slick enough that it only takes about 1 millisecond to do a localization update, netting a precision on the order of a few centimeters. It’s sort of like a fast indoor GPS, using math to infer position based on the movement of ceiling lights.
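As a taste of the first stage of such a system — spotting the ceiling lights in an upward-facing frame — here is a short OpenCV sketch. It is not [Andy]’s algorithm (the real math is explained on the project page); it just shows how bright-blob centroids might be extracted before any localization happens:

```python
# Sketch: find ceiling-light centroids in frames from an upward-facing camera.
# Camera index and brightness threshold are assumptions for illustration.
import cv2

cap = cv2.VideoCapture(0)    # fisheye camera pointed at the ceiling

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The lights are by far the brightest thing in view.
    _, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    lights = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            lights.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))  # centroid

    # Matching these centroids against the known light map (and against the
    # previous frame) is where the actual localization math happens.
    print(lights)
```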

To be useful for racing, this localization method needs to be combined with a map of the racetrack itself, which [Andy] cleverly builds by manually driving the car around the track while recording the localization data. Once that is in place, the car has all it needs to autonomously zip around.

Interested in the nitty-gritty details? You’re in luck, because all of the math behind [Andy]’s algorithm is explained on the project page linked above, and the GitHub repository for [Andy]’s autonomous car has all the implementation details.

The system is location-dependent, but it works so well that [Andy] considers track localization a solved problem. Watch the system in action in the two videos embedded below.


Oculus Go VR Headset Gets Root Access, No Jailbreak Needed

The Oculus Go, Facebook’s first-generation standalone VR headset, hit the market back in 2018, but it’s taken until now for owners to get an official unlocked OS build. The release was hinted at by former Oculus CTO John Carmack in a recent tweet as something he had been pushing for years. This opens the hardware completely, allowing root access without the need for an unofficial jailbreak.

Oculus Go headset [image: Wikimedia Commons]
The Oculus Go is Android-based and has specifications that are not exactly cutting edge by VR standards, especially since head tracking is limited to three degrees of freedom (DoF). This makes it best suited to seated applications like media consumption. That said, it’s still a remarkable amount of integrated hardware that can be had for a low price on the secondary market. Official support for the Go ended in December 2020, and the ability to completely unlock the device is a positive step towards rescuing the hardware from semi-hoarded tech junk piles where it might otherwise simply gather dust.

When phone-based VR went the way of the dodo, millions of empty headsets went obsolete with it for a variety of reasons, but at least this way perfectly good (if dated) hardware might still get some use in clever projects. Credit where credit is due: opening up root access to old but still perfectly functional hardware is the right thing to do, and it’s nice to see it happening.

POLF: Retro 3D Game Uses Only A Character Display

Got a retrocomputing itch? So does [David Given], and luckily for us all he indulged it by writing POLF: a first-person 3D game for the Commodore PET that uses only the system’s 40×25 text mode character display for visuals. It’s a fantastic achievement, considering that the 80s-era computer boasts 32 kB of memory and doesn’t even have a graphical display.

Each level has an 8×8 layout.

Each level in POLF is a small, maze-like room in which one’s goal is to play a sort of cross between billiards and golf, aiming to move the round “ball” object into the square “hole” object. The 3D view is rendered using raycasting, which is a way of efficiently drawing a workable 3D perspective using limited resources. Raycasting can only do so much, but as a method it works fantastically within its limitations, and there are useful tutorials out there that lay the process bare.
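For the curious, here is a minimal, generic raycaster in Python — nothing like the 6502 code actually running on the PET, but it shows the core trick of turning a 2D map into per-column wall heights on a 40×25 character grid:

```python
# Toy raycaster: march a ray per screen column until it hits a wall, then draw
# a column whose height (and shading character) depends on the hit distance.
import math

MAP = ["########",
       "#..#...#",
       "#..#.#.#",
       "#....#.#",
       "########"]              # tiny 8x5 grid; '#' is a wall
W, H = 40, 25                   # the PET's 40x25 character screen
px, py, heading = 1.5, 1.5, 0.3 # player position and view direction (radians)
FOV = math.pi / 3

screen = [[" "] * W for _ in range(H)]
for col in range(W):
    ray = heading - FOV / 2 + FOV * col / W
    dist = 0.0
    while dist < 16:                       # march until a wall is hit
        dist += 0.05
        x = px + math.cos(ray) * dist
        y = py + math.sin(ray) * dist
        if MAP[int(y)][int(x)] == "#":
            break
    wall = min(H, int(H / (dist + 0.01)))  # nearer walls draw taller columns
    shade = "#" if dist < 3 else ("+" if dist < 7 else ".")
    for row in range((H - wall) // 2, (H + wall) // 2):
        screen[row][col] = shade

print("\n".join("".join(row) for row in screen))
```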

The GitHub repository for the project is here, and it should run on any 40-column screen PET with at least 16 kB of RAM. Watch it in action in the video embedded below. (Hint: the little bar graphs under the compass headings at the bottom of the screen represent the player’s proximity to the ball and hole objects.)
