This $0 Filament Drybox Needs Nearly No Parts

All 3D printer filament benefits from being kept as dry as possible, but some types are more sensitive to humidity than others. The best solution is a drybox: a sealed filament container, usually with some desiccant inside. But in a pinch, [Spacefan]’s quick and dirty $0 drybox solution is at least inspiring in terms of simplicity.

The only added part is this 3D-printed fitting.

[Spacefan]’s solution uses a filament roll’s own packing materials and a single 3D-printed part to create a sealed environment for a single roll. The roll lives inside a plastic bag (potentially the same one it was sealed in) and filament exits through a small hole and 3D-printed fitting that also uses a bit of spare PTFE tubing. The box doubles as a convenient container for it all. It doesn’t have as much to offer as this other DIY drybox solution, but sure is simple.

While we appreciate the idea, this design is sure to put a lot of friction on the spool itself. Pulling filament off a spool that has to turn inside a bag, inside a box, takes a lot of extra work, and that work falls to the 3D printer’s extruder, a part that should ideally be working as little as possible. The re-use of materials is a great idea, but it does look to us like the design could use some improvement.

What do you think? Useful in a pinch, or needs changes? Would adding a spindle to support the spool help? Let us know what you think in the comments.

Hardware Project Becomes Successful Product For Solo Developer

[Michael Lynch] has been a solo developer for over three years now, and has been carefully cataloguing his attempts at generating revenue ever since making the jump to being self-employed. Success is not just hard work; it is partly knowing when to pull the plug on an idea, and [Michael] has been very open about his adventures in this area. He shares the good news about a DIY project of his that ended up becoming a successful product, complete with dollar amounts and frank observations.

About a year ago, we covered a project he shared called TinyPilot, an effective KVM-over-IP device, accessible over the web, that could be built with about $100 worth of parts. [Michael] found it to be a fun and useful project, and decided to see if he could sell kits. However, he admits he didn’t have high expectations, and his thoughts are probably pretty familiar to most hardware types:

I questioned whether there was a market for this. Why would anyone buy this device from me? It was just a collection of widely available hardware components.

Well, it turns out that he was onto something, and the demand for his device became immediately clear. He’s since given TinyPilot more features, an attractive case, and even provides a support plan for commercial customers. This is an excellent reminder that sometimes, what is being sold isn’t the collection of parts itself. Sometimes, what’s being sold is a solution to a problem people have, and those people are time-poor and willing to pay for something that just works.

It’s great to see [Michael] find some success as a solo developer, but his yearly wrap-up covers much more than just the success of TinyPilot as a product, so be sure to check it out if you’re at all interested in the journey of working for yourself.

An Emulator For OBP, The Spaceflight Computer From The 1960s

[David Given] frequently dives into retrocomputing, and we don’t just mean he refurbishes old computers. We mean things like creating a simulator and assembler for the OBP spaceflight computer, which was used in the OAO-3 Copernicus space telescope, pictured above. Far from being a niche and forgotten piece of technology, the On-Board Processor (OBP) was used in several spacecraft and succeeded by the Advanced On-board Processor (AOP), which in turn led to the NASA Standard Spaceflight Computer (NSSC-1), used in the Hubble Space Telescope. The OBP was also created entirely from NOR gates, which is pretty neat.

One thing [David] learned in the process is that while this vintage piece of design has its idiosyncrasies, in general, the architecture has many useful features and is pleasant to work with. It is a bit slow, however. It runs at a mere 250 kHz and many instructions take several cycles to complete.

Sample of the natural-language-looking programming syntax for the assembler. (Example from page 68 of the instruction set manual for the OBP.)

One curious thing about the original assembler is that its documentation shows it was intended to be programmed in a natural-language-looking syntax, an example of which is shown here. To process this, the assembler simply mapped key phrases to specific assembly instructions. As [David] points out, this is an idea that seems to come and go (and indeed the documentation for the OBP’s successor, the AOP, makes no mention whatsoever of it, so clearly it “went”). Since a programmer must adhere to a very rigid syntax and structure anyway to make anything work, one might as well just skip dealing with it and write assembly instructions directly, which at least have the benefit of being utterly unambiguous.
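The phrase-mapping idea is simple enough to sketch. Here is a minimal, hypothetical illustration (the phrases and mnemonics below are made up for demonstration and are not the real OBP instruction set): each recognized key phrase at the start of a line is swapped for a fixed assembly mnemonic, and everything after the phrase becomes the operand.

```python
# Hypothetical phrase table: maps natural-language-looking prefixes to
# assembly mnemonics. These are NOT the actual OBP phrases or opcodes.
PHRASE_TABLE = {
    "LOAD THE VALUE OF": "LDA",
    "ADD THE CONTENTS OF": "ADD",
    "STORE THE RESULT IN": "STO",
}

def translate(line: str) -> str:
    """Map a natural-language-looking source line to an assembly instruction."""
    for phrase, mnemonic in PHRASE_TABLE.items():
        if line.upper().startswith(phrase):
            operand = line[len(phrase):].strip()
            return f"{mnemonic} {operand}"
    raise ValueError(f"unrecognized phrase: {line!r}")
```

The rigidity [David] describes is visible even in this toy: any deviation from the exact phrase wording is a hard error, so the programmer is really writing assembly with extra typing.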

We’re not sure who’s up for this level of detail, but embedded below is a video of [David] coding the assembler and OBP emulator, just in case anyone has both an insatiable vintage thirst and a spare eight-and-a-half hours. If you’d prefer just the files, check out the project’s GitHub repository.


Extracting Data From Smart Scale Gives Rube Goldberg A Run For His Money

[Kevin Norman] got himself a smart body scale with the intention of logging data for his own analysis, but discovered that extracting data from the device was anything but easy. It turns out that the only way to access data from his scale is by viewing it in a mobile app. Screen-scraping is a time-honored method of pulling data from uncooperative systems, so [Kevin] committed to regularly taking a full-height screenshot from the app and using optical character recognition (OCR) to get the numbers, but making that work was a surprisingly long process full of dead ends.

First of all, while OCR can be reliable, it needs the right conditions. One thing that ended up being a big problem was the way the app appends units (kg, %) after the numbers. Not only are they tucked in very close, but they’re about half the height of the numbers themselves. It turns out that mixing and matching character height, in addition to snugging them up against one another, is something tailor-made to give OCR reliability problems.

The solution for this particular issue came from an unexpected angle. [Kevin] was using an open-source OCR program called Tesseract, and joined the #tesseract IRC channel to ask for advice after exhausting his own options. The bemused members of the online community informed [Kevin] that they had nothing to do with OCR; #tesseract was actually a community for an open-source first-person shooter of the same name. But as luck would have it, one of the members actually had OCR experience and suggested the winning approach: pre-process the image with OpenCV, using cv2.findContours() to detect and create a bounding box around each element. If an element is taller than a decimal point but shorter than everything else, throw it out. With that done, there were still a few more tweaks required, but the finish line was finally in sight.
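The height-filtering trick can be sketched in a few lines. This is not [Kevin]’s actual code; it is a pure-Python illustration that operates on (x, y, w, h) bounding boxes like those you would get from cv2.findContours() followed by cv2.boundingRect(), with made-up threshold values:

```python
def filter_boxes(boxes, dot_height, digit_height):
    """Keep decimal points and full-height digits; drop the in-between
    elements (such as the small 'kg' / '%' unit text) that confuse OCR.

    boxes        -- list of (x, y, w, h) bounding boxes
    dot_height   -- anything this tall or shorter is a decimal point
    digit_height -- anything this tall or taller is a digit
    """
    kept = []
    for (x, y, w, h) in boxes:
        if h <= dot_height:            # decimal point: keep
            kept.append((x, y, w, h))
        elif h >= digit_height:        # full-height digit: keep
            kept.append((x, y, w, h))
        # anything between the two thresholds is unit text: drop it
    return kept
```

With the unit text masked out of the image, the OCR engine only ever sees uniform-height digits and decimal points, which is exactly the kind of input it handles well.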

Now [Kevin] can use the scale in the morning, take a screenshot, and in less than half a minute the results are imported into a database and visualizations generated. The resulting workflow might look like something Rube Goldberg would approve of, but it works!

Raspberry Pi Reads What It Sees, Delights Children

[Geyes30]’s Raspberry Pi project does one thing: it finds arbitrary text in the camera’s view and reads it out loud. Does it do so flawlessly? Not really. Was it at least effortless to put together? Also no, but it does wonderfully illustrate the process of gluing together different bits of functionality to make something new. Also, [geyes30]’s kids find it fascinating, and that’s a win all on its own.

The device is made from a Raspberry Pi and camera and works by sending a still image from the camera to an optical character recognition (OCR) program, which converts any visible text in the image to its ASCII representation. The recognized text is then piped to the espeak engine and spoken aloud. Getting all the tools to play nicely took a bit of work, but [geyes30] documented everything so well that even a novice should be able to get the project up and running in an afternoon.
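The overall glue logic is a short pipeline. Here is a rough sketch of that flow (assuming the common command-line tools raspistill, tesseract, and espeak; [geyes30]’s exact commands and flags may differ), with the command runner injected so the logic can be exercised without a camera attached:

```python
import subprocess

def capture_and_speak(run=subprocess.run):
    """Grab a still image, OCR it, then speak any recognized text aloud."""
    run(["raspistill", "-o", "frame.jpg"], check=True)   # capture a still
    run(["tesseract", "frame.jpg", "out"], check=True)   # OCR -> out.txt
    with open("out.txt") as f:
        text = f.read().strip()
    if text:
        run(["espeak", text], check=True)                # speak the result
    return text
```

Injecting `run` is just a convenience for testing the glue; on the actual device the default subprocess.run shells out to the real tools.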

Sometimes a function like text-to-speech is an end result in and of itself. This was also true of another similar project: Magic Mirror, whose purpose was to tirelessly indulge children’s curiosity about language.

Seeing other projects come to life and learning about new tools is a great way to get new ideas, and documenting them helps cross-pollinate among creative types. Did something inspire you recently, or have you documented your own project? We want to hear about it and so do others, so let us know via the tips line!


Soil Sensor Shows Flip-Dots Aren’t Just For Signs

Soil sensors are handy things, but while sensing moisture is what they do, how they handle that data is what makes them useful. Ensuring usefulness is what led [Maakbaas] to design and create an ESP32-based soil moisture sensor with wireless connectivity, deep sleep, data logging, and the ability to indicate that the host plant needs watering both visually and with a push notification to a mobile phone.

A small flip-dot indicator makes a nifty one-dot display that requires no power when idle.

The visual notification part is pretty nifty, because [Maakbaas] uses a small flip-dot indicator made by Alfa-Zeta. This electromechanical indicator works by using two small coils to flip a colored disk between red or green. It uses no power when idle, which is a useful feature for a device that spends most of its time in a power-saving deep sleep. When all is well the indicator is green, but when the plant needs water, the indicator flips to red.

The sensor wakes once per hour to take a measurement, which it stores in a local buffer and uploads to a database every 24 measurements. This reduces the number of times the device needs to power up and connect via WiFi, but if the sensor ever determines that the plant requires water, that gets handled immediately.
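The buffer-then-upload logic described above can be sketched simply. The names and thresholds below are made up for illustration (the real firmware runs on an ESP32, not in Python): each hourly wake-up appends a reading, a dry reading triggers an immediate notification, and a full day’s worth of readings triggers one batched upload.

```python
DRY_THRESHOLD = 30   # hypothetical moisture percentage
BATCH_SIZE = 24      # one upload per day of hourly readings

buffer = []

def on_wake(moisture, upload, notify):
    """Handle one hourly wake-up with a fresh moisture reading."""
    buffer.append(moisture)
    if moisture < DRY_THRESHOLD:
        notify("plant needs water")   # urgent: connect and push immediately
    if len(buffer) >= BATCH_SIZE:
        upload(list(buffer))          # daily batched database upload
        buffer.clear()
```

Batching the uploads is the key power-saving choice here: the WiFi radio, the most expensive part of each wake-up, only has to come on once a day in the normal case.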

The sensor looks great, and a 3D-printed enclosure helps keep it clean while giving the device a bit of personality. Interested in rolling your own sensor? The project also has a page on Hackaday.io and we’ve previously covered in-depth details about how these devices work. Whether you are designing your own solution or using existing hardware, just remember to stay away from cheap probes that aren’t worth their weight in potting soil.

Eye-Tracking Device Is A Tiny Movie Theatre For Jumping Spiders

The eyes are windows into the mind, and this research into what jumping spiders look at and why required a clever eye-tracking device built especially for them. The eyesight of these fascinating creatures in some ways has a lot in common with humans. We both perceive a wide-angle region of lower visual fidelity, but are capable of directing our attention to areas of interest within it to see greater detail. Researchers have been able to perform eye tracking on jumping spiders, literally showing exactly where they are looking in real time, with the help of a custom device that works a little bit like a miniature movie theatre.

A harmless temporary adhesive on top (and a foam ball for a perch) holds a spider in front of a micro movie projector and IR camera. Spiders were not harmed in the research.

To do this, researchers had to get clever. The unblinking lenses of a spider’s two front-facing primary eyes do not move. Instead, to look at different things, the cone-shaped inside of the eye is shifted around by muscles. This effectively pulls the retina around to point towards different areas of interest. Spiders, whose primary eyes have boomerang-shaped retinas, have an X-shaped region of higher-resolution vision that the spider directs as needed.

So how does the spider eye tracker work? The spider perches on a tiny foam ball and is attached — with the help of a harmless, temporary beeswax-based adhesive — to a small bristle. In this way, the spider is held stably in front of a video screen without otherwise being restrained. The spider is shown home movies while an IR camera picks up the reflection of IR off the retinas inside the spider’s two primary eyes. By superimposing the IR reflection onto the displayed video, it becomes possible to literally see exactly where the spider is looking at any given moment. This is similar in some ways to how eye tracking is done for humans, which also uses IR, but watches the position of the pupil.

In the short video embedded below, if you look closely you can see the two retinas make an X-shape of a faintly lighter color than the rest of the background. Watch the spider find and focus on the silhouette of a tasty cricket, but when a dark oval appears and grows larger (as it would look if it were getting closer) the spider’s gaze quickly snaps over to the potential threat.

Feel a need to know more about jumping spiders? This eye-tracking research was featured as part of a larger Science News article highlighting the deep sensory spectrum these fascinating creatures inhabit, most of which is completely inaccessible to humans.
