Soil Sensor Shows Flip-Dots Aren’t Just For Signs

Soil sensors are handy things, but while sensing moisture is what they do, how they handle that data is what makes them useful. Ensuring usefulness is what led [Maakbaas] to design and build an ESP32-based soil moisture sensor with wireless connectivity, deep sleep, data logging, and the ability to indicate that the host plant needs watering both visually and with a push notification to a mobile phone.

A small flip-dot indicator makes a nifty one-dot display that requires no power when idle.

The visual notification part is pretty nifty, because [Maakbaas] uses a small flip-dot indicator made by Alfa-Zeta. This electromechanical indicator uses two small coils to flip a colored disk between red and green. It uses no power when idle, which is a useful feature for a device that spends most of its time in a power-saving deep sleep. When all is well the indicator shows green, but when the plant needs water, it flips to red.
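
For anyone curious about driving such an indicator from a microcontroller, the basic idea is to pulse one of the two coils briefly (typically through an H-bridge or similar driver); the disk latches magnetically, so no holding current is needed. Below is a minimal MicroPython sketch of that idea. The pin numbers, pulse length, and the assumption of a simple two-channel coil driver are illustrative guesses, not details from [Maakbaas]'s design.

```python
# Minimal sketch: latching a flip-dot by briefly pulsing one of its two coils.
# Pin numbers, pulse width, and the driver wiring are assumptions for
# illustration only, not details taken from the project itself.
from machine import Pin
import time

COIL_RED = Pin(25, Pin.OUT)    # hypothetical: energizes the coil that flips the disk to red
COIL_GREEN = Pin(26, Pin.OUT)  # hypothetical: energizes the coil that flips it back to green
PULSE_MS = 30                  # a short pulse is enough; the disk latches, so no holding current

def flip(to_red):
    coil = COIL_RED if to_red else COIL_GREEN
    coil.value(1)
    time.sleep_ms(PULSE_MS)
    coil.value(0)

flip(to_red=True)   # plant is thirsty: show red
flip(to_red=False)  # watered again: back to green
```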

The sensor wakes up once per hour to take a measurement, which it stores in a local buffer and uploads to a database every 24 measurements. This reduces how often the device needs to power up and connect via WiFi, but if the sensor ever determines that the plant needs water, that gets handled immediately.
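
The wake, measure, buffer, upload cycle is easy to picture in code. Here is a rough MicroPython sketch of the general pattern: stash readings in RTC memory so they survive deep sleep, only fire up WiFi on every 24th wake, and act immediately if a reading crosses a dryness threshold. The ADC pin, the threshold, and the upload/alert helpers are placeholders, not [Maakbaas]'s actual firmware.

```python
# Sketch of the hourly wake / buffered upload pattern (MicroPython on an ESP32).
# The ADC pin, threshold, and the upload()/alert() stubs are illustrative
# assumptions, not the project's real firmware.
import machine
import struct

HOUR_MS = 60 * 60 * 1000
DRY_THRESHOLD = 1200   # hypothetical raw ADC value meaning "too dry"
BATCH_SIZE = 24        # one reading per hour, one upload per day

rtc = machine.RTC()
adc = machine.ADC(machine.Pin(34))   # hypothetical sensor pin

def upload(readings):
    pass   # placeholder: connect to WiFi and POST the batch to a database

def alert():
    pass   # placeholder: send the push notification and flip the dot to red

# RTC memory survives deep sleep, so it makes a handy buffer between wakes.
stored = rtc.memory()
readings = list(struct.unpack("%dH" % (len(stored) // 2), stored)) if stored else []

value = adc.read()
readings.append(value)

if value < DRY_THRESHOLD:    # (comparison direction depends on the sensor) act on dryness right away
    alert()

if len(readings) >= BATCH_SIZE:
    upload(readings)
    readings = []

rtc.memory(struct.pack("%dH" % len(readings), *readings))
machine.deepsleep(HOUR_MS)   # the script runs again from the top on the next wake
```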

The sensor looks great, and a 3D-printed enclosure helps keep it clean while giving the device a bit of personality. Interested in rolling your own sensor? The project also has a page on Hackaday.io and we’ve previously covered in-depth details about how these devices work. Whether you are designing your own solution or using existing hardware, just remember to stay away from cheap probes that aren’t worth their weight in potting soil.

Eye-Tracking Device Is A Tiny Movie Theatre For Jumping Spiders

The eyes are windows into the mind, and this research into what jumping spiders look at (and why) required a clever device that performs eye tracking for spiders. The eyesight of these fascinating creatures has a lot in common with our own: we both perceive a wide-angle region of lower visual fidelity, but can direct our attention to areas of interest within it to see greater detail. Researchers have been able to perform eye tracking on jumping spiders, literally showing exactly where they are looking in real time, with the help of a custom device that works a little like a miniature movie theatre.

A harmless temporary adhesive on top (and a foam ball for a perch) holds a spider in front of a micro movie projector and IR camera. Spiders were not harmed in the research.

To do this, researchers had to get clever. The unblinking lenses of a spider’s two front-facing primary eyes do not move. Instead, to look at different things, the cone-shaped inside of each eye is shifted around by muscles, which effectively pulls the retina around to point towards different areas of interest. Each primary eye has a boomerang-shaped retina, and together the pair gives the spider an X-shaped region of higher-resolution vision that it can direct as needed.

So how does the spider eye tracker work? The spider perches on a tiny foam ball and is attached to a small bristle with the help of a harmless, temporary beeswax-based adhesive. In this way, the spider is held stably in front of a video screen without otherwise being restrained. The spider is shown home movies while an IR camera picks up the reflection of IR light off the retinas inside the spider’s two primary eyes. By superimposing that IR reflection onto the displayed video, it becomes possible to literally see exactly where the spider is looking at any given moment. This is similar in some ways to how eye tracking is done for humans, which also uses IR, but watches the position of the pupil.
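
The superimposing step itself is conceptually simple: find the bright IR reflections in the camera frame and paint them over the corresponding frame of the stimulus video. Here is a minimal OpenCV sketch of that idea in Python. It assumes the IR camera and the projected video are already spatially aligned and the same resolution, which glosses over most of the real optical work, and it is only an illustration rather than the researchers' actual pipeline.

```python
# Toy illustration of overlaying bright IR retina reflections onto a stimulus
# frame. Assumes both images are aligned and share the same resolution, which
# is a big simplification of the real setup.
import cv2
import numpy as np

def overlay_gaze(stimulus_bgr, ir_gray, thresh=200):
    # The retinas show up as bright spots in the IR image; threshold them out.
    _, mask = cv2.threshold(ir_gray, thresh, 255, cv2.THRESH_BINARY)
    # Tint the masked regions red on top of the stimulus so the gaze is visible.
    tinted = stimulus_bgr.copy()
    tinted[mask > 0] = (0, 0, 255)
    return cv2.addWeighted(stimulus_bgr, 0.6, tinted, 0.4, 0)

# Hypothetical usage with two video sources (stimulus file and IR camera):
# stim, ir = cv2.VideoCapture("stimulus.mp4"), cv2.VideoCapture(1)
```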

In the short video embedded below, if you look closely you can see the two retinas as an X shape in a faintly lighter color than the rest of the background. Watch the spider find and focus on the silhouette of a tasty cricket; then, when a dark oval appears and grows larger (as it would if it were getting closer), the spider’s gaze quickly snaps over to the potential threat.

Feel a need to know more about jumping spiders? This eye-tracking research was featured as part of a larger Science News article highlighting the deep sensory spectrum these fascinating creatures inhabit, most of which is completely inaccessible to humans.

Fast Indoor Robot Watches Ceiling Lights, Instead Of The Road

[Andy]’s robot is an autonomous RC car, and he shares the localization algorithm he developed to help the car keep track of itself while it zips crazily around an indoor racetrack. Since a robot like this is perfectly capable of driving faster than it can sense, his localization method is the secret to pouring on additional speed without worrying about the car losing itself.

The regular pattern of ceiling lights makes a good foundation for the system to localize itself.

To pull this off, [Andy] uses a camera with a fisheye lens aimed up towards the ceiling, and the video is processed on a Raspberry Pi 3. His implementation is slick enough that it only takes about 1 millisecond to do a localization update, netting a precision on the order of a few centimeters. It’s sort of like a fast indoor GPS, using math to infer position based on the movement of ceiling lights.
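
As a very rough sketch of the idea (and not [Andy]'s actual algorithm, which localizes against a prebuilt map and accounts for the fisheye geometry), the core ingredients are easy to show: threshold the upward-facing camera frame, find the centroids of the bright ceiling lights, and infer motion from how those centroids move. Something like this toy frame-to-frame version in Python with OpenCV captures the flavour:

```python
# Toy version of ceiling-light odometry: threshold the frame, find light
# centroids, and estimate frame-to-frame motion by nearest-neighbour matching.
# This is only meant to show the flavour of the approach, not the real system.
import cv2
import numpy as np

def light_centroids(gray):
    # Bright ceiling lights stand out against a dark ceiling; threshold them.
    _, mask = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    keep = stats[1:, cv2.CC_STAT_AREA] > 20   # skip label 0 (background), drop tiny blobs
    return centroids[1:][keep]

def estimate_translation(prev_pts, cur_pts):
    # Match each previous light to its nearest neighbour and average the shifts.
    if len(prev_pts) == 0 or len(cur_pts) == 0:
        return np.zeros(2)
    deltas = [cur_pts[np.argmin(np.linalg.norm(cur_pts - p, axis=1))] - p for p in prev_pts]
    # The lights' apparent motion is roughly the negative of the camera's motion,
    # ignoring rotation and the fisheye distortion the real system accounts for.
    return np.mean(deltas, axis=0)
```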

To be useful for racing, this localization method needs to be combined with a map of the racetrack itself, which [Andy] cleverly creates by manually driving the car around the track while recording the localization data. Once that is in place, the car has all it needs to autonomously zip around.

Interested in the nitty-gritty details? You’re in luck, because all of the math behind [Andy]’s algorithm is explained on the project page linked above, and the GitHub repository for [Andy]’s autonomous car has all the implementation details.

The system is location-dependent, but it works so well that [Andy] considers track localization a solved problem. Watch the system in action in the two videos embedded below.

Oculus Go VR Headset Gets Root Access, No Jailbreak Needed

The Oculus Go, Facebook’s first-generation standalone VR headset, hit the market back in 2018, but it’s taken until now for owners to get an official unlocked OS build. The release was hinted at by former Oculus CTO John Carmack in a recent tweet as something he had been pushing for years. This opens up the hardware completely, allowing root access without the need for an unofficial jailbreak.

Oculus Go headset. [Image: Wikimedia Commons]

The Oculus Go is Android-based and has specifications that are not exactly cutting edge by VR standards, especially since head tracking is limited to three degrees of freedom (DoF). This makes it best suited to seated applications like media consumption. That said, it’s still a remarkable amount of integrated hardware that can be had for a low price on the secondary market. Official support for the Go ended in December 2020, and the ability to completely unlock the device is a positive step towards rescuing the hardware from semi-hoarded tech junk piles where it might otherwise simply gather dust.

When phone-based VR went the way of the dodo, millions of empty headsets went obsolete with it, but at least this way the Go’s perfectly good (if dated) hardware might still get some use in clever projects. Credit where credit is due: opening up root access to old but still perfectly functional hardware is the right thing to do, and it’s nice to see it happening.

POLF: Retro 3D Game Uses Only A Character Display

Got a retrocomputing itch? So does [David Given], and luckily for us all he indulged it by writing POLF: a first-person 3D game for the Commodore PET that uses only the system’s 40×25 text mode character display for visuals. It’s a fantastic achievement, considering that the 80s-era computer boasts 32 kB of memory and doesn’t even have a graphical display.

Each level has an 8×8 layout.

Each level in POLF is a small, maze-like room in which one’s goal is to play a sort of cross between billiards and golf, aiming to move the round “ball” object into the square “hole” object. The 3D view is rendered using raycasting, which is a way of efficiently drawing a workable 3D perspective using limited resources. Raycasting can only do so much, but as a method it works fantastically within its limitations, and there are useful tutorials out there that lay the process bare.
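
For anyone who has never written one, the heart of a raycaster fits in a few dozen lines. The toy Python version below is not [David Given]'s 6502 code, but it illustrates the technique POLF relies on: cast one ray per screen column across a small grid map, march it until it hits a wall, and draw a column of characters whose height and shading fall off with distance, all on a 40×25 character "screen".

```python
# A tiny text-mode raycaster in Python, in the spirit of POLF's renderer.
# This is an illustration of the raycasting technique, not [David Given]'s
# actual implementation.
import math

MAP = [
    "########",
    "#......#",
    "#..##..#",
    "#......#",
    "#..#...#",
    "#......#",
    "#......#",
    "########",
]
W, H = 40, 25                 # the PET's 40x25 character screen
FOV = math.pi / 3             # 60-degree field of view

def render(px, py, heading):
    screen = [[" "] * W for _ in range(H)]
    for col in range(W):
        ray = heading - FOV / 2 + FOV * col / W
        dist = 0.0
        while dist < 16:      # march the ray in small steps until it hits a wall
            dist += 0.05
            if MAP[int(py + math.sin(ray) * dist)][int(px + math.cos(ray) * dist)] == "#":
                break
        height = min(H, int(H / dist))                         # nearer walls draw taller columns
        shade = "#" if dist < 3 else "+" if dist < 7 else "."  # crude distance shading
        for row in range((H - height) // 2, (H + height) // 2):
            screen[row][col] = shade
    return "\n".join("".join(r) for r in screen)

print(render(3.5, 3.5, 0.3))  # stand near the middle of the room, looking roughly east
```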

The GitHub repository for the project is here, and it should run on any 40-column screen PET with at least 16 kB of RAM. Watch it in action in the video embedded below. (Hint: the little bar graphs under the compass headings at the bottom of the screen represent the player’s proximity to the ball and hole objects.)

OAK-D Depth Sensing AI Camera Gets Smaller And Lighter

The OAK-D is an open-source, full-color depth sensing camera with embedded AI capabilities, and there is now a crowdfunding campaign for a newer, lighter version called the OAK-D Lite. The new model does everything the previous one could do, combining machine vision with stereo depth sensing and an ability to run highly complex image processing tasks all on-board, freeing the host from any of the overhead involved.

An example of real-time feature tracking, now in 3D thanks to integrated depth sensing.

The OAK-D Lite camera is actually several elements combined in one package: a full-color 4K camera, two greyscale cameras for stereo depth sensing, and onboard AI machine vision processing with Intel’s Movidius Myriad X processor. Tying it all together is an open-source software platform called DepthAI that wraps the camera’s functions and capabilities into a unified whole.

The goal is to give embedded systems access to human-like visual perception in real-time, which at its core means detecting things, and identifying where they are in physical space. It does this with a combination of traditional machine vision functions (like edge detection and perspective correction), depth sensing, and the ability to plug in pre-trained convolutional neural network (CNN) models for complex tasks like object classification, pose estimation, or hand tracking in real-time.

So how is it used? Practically speaking, the OAK-D Lite is a USB device intended to be plugged into a host (running any OS), and the team has put a lot of work into making it as easy as possible. With the help of a downloadable application, the hardware can be up and running with examples in about half a minute. Integrating the device into other projects or products can be done in Python with the help of the DepthAI SDK, which provides functionality with minimal coding and configuration (and for more advanced users, there is also a full API for low-level access). Since the vision processing is all done on-board, even a Raspberry Pi Zero can be used effectively as a host.
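
To give a feel for what that looks like, here is a minimal sketch along the lines of the DepthAI Python examples: define a pipeline containing a color camera node, hand it to the device, and read preview frames back on the host. The stream name and preview size are arbitrary choices, and a real application would add stereo depth and neural network nodes to the same pipeline.

```python
# Minimal DepthAI-style "hello world": stream color preview frames to the host.
# Follows the patterns from the depthai-python v2 examples; stream name and
# preview size are arbitrary, and real projects would add NN and depth nodes.
import cv2
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)           # small RGB preview, a common size for NN input
cam.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("rgb")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:   # the pipeline itself runs on the camera
    q = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
    while True:
        frame = q.get().getCvFrame()   # the host just receives finished frames
        cv2.imshow("OAK preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```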

There’s one more thing that improves the ease-of-use situation, and that’s the fact that support for the OAK-D Lite (as well as the previous OAK-D) has been added to a software suite called the Cortic Edge Platform (CEP). CEP is a block-based visual coding system that runs on a Raspberry Pi, and is aimed at anyone who wants to rapidly prototype with AI tools in a primarily visual interface, providing yet another way to glue a project together.

Earlier this year we saw the OAK-D used in a system to visually identify weeds and estimate biomass in agriculture, and it’s exciting to see a new model being released. If you’re interested, the OAK-D Lite is available at a considerable discount during the Kickstarter campaign.

3D Print A Custom T-Shirt Design, Step-by-Step

Want to make a t-shirt with a custom design printed on it? It’s possible to use a 3D printer, and Prusa Research have a well-documented blog post and video detailing two different ways to use 3D printing to create colorful t-shirt designs. One method uses a thin 3D print as an iron-on, the other prints directly onto the fabric. It turns out that a very thin PLA print makes a dandy iron-on that can survive a few washes before peeling, but printing flexible filament directly onto the fabric — while more complicated — yields a much more permanent result. Not sure how to turn a graphic into a 3D printable model in the first place? No problem, they cover that as well.

Making an iron-on is fairly straightforward, and the method can be adapted to just about any printer type. One simply secures a sheet of baking paper (better known as parchment paper in North America) to the print bed with some binder clips, then applies glue stick so that the print can adhere. A one- or two-layer thick 3D print will stick to the sheet, which can then be laid print-side down onto a t-shirt and transferred to the fabric by ironing it at maximum temperature. PLA seems to work best for iron-ons, as it preserves details better. The results look good, and the method is fairly simple.

Direct printing to the fabric with flexible filament can yield much better (and more permanent) results, but the process is more involved and requires 3D printing a raised bed adapter for a Prusa printer, plus fiddling with quite a few print settings. But the results speak for themselves: printed designs look sharp and won’t come loose even after multiple washings. So be certain to have a few old shirts around for practice, because mistakes can’t be undone.

That 3D printers can be used to embed designs directly into fabric is something many have known for years, but it’s always nice to see a process not just demonstrated as a concept, but documented as a step-by-step workflow. Everything, from turning a graphic into a 3D model to printing on a t-shirt with both methods, is shown in the short video embedded below, so give it a watch.
