Keeping Badgers At Bay With TensorFlow

Human-animal conflict is always a contentious issue, and finding ways to prevent damage without causing harm to the animals often requires creative solutions. [James Milward] needed a humane method to stop badgers and foxes from uprooting his garden, leading him to create the Furbinator 3000, a system that combines computer vision with audio deterrents.

[James] initially tried using scent repellents (which were ignored) and blocking access to his garden (which only resulted in more digging), but found some success with commercial ultrasonic audio repellent devices. However, these had to be switched off manually during the day so that [James] and his family wouldn’t constantly trigger the PIR motion sensors, and the integrated solar panels couldn’t keep up with the load.

This presented a good opportunity to try his hand at practical machine vision. He already had a substantial number of sample images from the Ring cameras in his garden, which he turned into a functional TensorFlow Lite model with about 2.5 hours of training. He linked it with event-activated RTSP streams from his Ring cameras using the ring-mqtt library. To minimize false positives on stationary objects, he incorporated a motion filter into the processing pipeline. When it identifies a fox or badger with reasonable confidence, it generates an MQTT event.
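The glue between the model and the deterrents is fairly thin. Here's a minimal Python sketch of that inference-to-MQTT step, treating the model as a simple image classifier; the topic names, labels, and threshold are placeholders rather than [James]'s actual code, and his real pipeline also handles the Ring event triggers and motion filtering:

```python
# Minimal sketch of the inference-to-MQTT step (placeholder names throughout;
# not [James]'s actual code). Assumes a float-input classifier with softmax output.
import cv2
import numpy as np
import paho.mqtt.client as mqtt
from tflite_runtime.interpreter import Interpreter

LABELS = {0: "badger", 1: "fox", 2: "nothing"}           # assumed class mapping
interpreter = Interpreter(model_path="garden_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

client = mqtt.Client()
client.connect("localhost", 1883)                        # same broker ring-mqtt talks to

def classify(frame):
    h, w = inp["shape"][1], inp["shape"][2]
    img = cv2.resize(frame, (w, h)).astype(np.float32) / 255.0
    interpreter.set_tensor(inp["index"], np.expand_dims(img, 0))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    best = int(np.argmax(scores))
    return LABELS.get(best), float(scores[best])

# Grab a frame from the event-activated RTSP stream exposed via ring-mqtt
cap = cv2.VideoCapture("rtsp://ring-bridge:8554/garden_live")
ok, frame = cap.read()
if ok:
    label, score = classify(frame)
    if label in ("badger", "fox") and score > 0.7:
        client.publish("garden/intruder", label)          # the ESP8266 listens for this
```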

[James] modified the ultrasonic devices so they would react to these events using an ESP8266-based WeMos D1 Mini Pro development board and added an external 5 V power supply for sustained operation. All development was performed in a Docker container which simplified deployment on a Raspberry Pi 4.
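On the receiving end, the firmware can be as small as an MQTT subscriber that pulls a trigger pin when a detection comes in. Here's a rough MicroPython-flavored sketch of that idea; the pin, broker address, and topic are hypothetical, and [James]'s own firmware for the D1 Mini may well be written differently:

```python
# MicroPython-flavored sketch for the ESP8266 side (hypothetical pin, broker
# and topic; [James]'s actual D1 Mini firmware may be written differently).
import time
from machine import Pin
from umqtt.simple import MQTTClient

TRIGGER = Pin(5, Pin.OUT)              # assumed GPIO wired to the repeller's trigger

def on_message(topic, msg):
    if msg in (b"badger", b"fox"):
        TRIGGER.on()                   # fire the ultrasonic deterrent
        time.sleep(10)                 # keep it going long enough to matter
        TRIGGER.off()

client = MQTTClient("furbinator-node", "192.168.1.10")
client.set_callback(on_message)
client.connect()
client.subscribe(b"garden/intruder")

while True:
    client.wait_msg()                  # block until the Pi publishes a detection
```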

After implementing the system, [James] woke up to the satisfying sight of his garden remaining untouched overnight, a victory that even earned him some coverage by the BBC.

Thanks for the tip, [Laurent]!

A wooden robot with a large Fresnel lens in a sunny garden

Gardening Robot Uses Sunlight To Incinerate Weeds

Removing weeds is a chore few gardeners enjoy, as it typically involves long sessions of kneeling in the dirt and digging around for anything you don’t remember planting. Herbicides also work, but spraying poison all over your garden comes with its own problems. Luckily, there’s now a third option: [NathanBuildsDIY] designed and built a robot to help him get rid of unwanted plants without getting his hands dirty.

Constructed mostly from scrap pieces of wood and riding on a pair of old bicycle wheels, the robot has a pretty low-tech look to it. But it is in fact a very advanced piece of engineering that uses multiple sensors and actuators while running on a sophisticated software platform. The heart of the system is a Raspberry Pi, which drives a pair of DC motors to move the whole system along [Nathan]’s garden while scanning the ground below through a camera.

Machine vision software identifying a weed in a picture of garden soil

The Pi runs the camera’s pictures through a TensorFlow Lite model that can identify weeds. [Nathan] built this model himself by taking hundreds of pictures of his garden and manually sorting them into categories like “soil”, “plant” and “weed”. Once a weed has been detected, the robot proceeds to destroy it by concentrating sunlight onto it through a large Fresnel lens. The lens is mounted in a frame that can be moved in three dimensions through a set of servos. A movable lens cover turns the incinerator beam on or off.
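The training side of a project like this can be surprisingly compact. A rough sketch of how such a three-class model could be trained and converted for the Pi is shown below; the folder layout, image size, and network shape are assumptions, not [Nathan]'s actual code:

```python
# Rough sketch of training a three-class soil/plant/weed model (folder layout,
# image size and network shape are assumptions, not [Nathan]'s actual code).
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "garden_photos", image_size=(96, 96), batch_size=32)  # soil/, plant/, weed/

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(3),                              # soil, plant, weed
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)

# Convert for the TensorFlow Lite runtime on the Raspberry Pi
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("weed_model.tflite", "wb") as f:
    f.write(tflite_model)
```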

Sunlight is focused onto the weed through a simple but clever two-step procedure. First, the rough position of the lens relative to the sun is adjusted with the help of a sun tracker made from four light sensors arranged around a cross-shaped cardboard structure. Then, the shadow cast by the lens cover onto the ground is observed by the Pi’s camera and the lens is focused by adjusting its position in such a way that the image formed by four holes in the lens cover ends up right on top of the target.
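In rough code terms, the coarse tracking step boils down to comparing opposing sensors and nudging the servos until the readings balance out. The helper functions in this sketch are hypothetical, not [Nathan]'s actual code:

```python
# Sketch of the four-sensor coarse tracking step. read_ldr, nudge_pan and
# nudge_tilt are hypothetical helpers, not [Nathan]'s actual code.
def track_sun(read_ldr, nudge_pan, nudge_tilt, deadband=20):
    # The sensors sit in the four quadrants around a cross-shaped shade;
    # when the lens points straight at the sun, all four see equal light.
    top, bottom, left, right = (read_ldr(i) for i in range(4))
    if abs(top - bottom) > deadband:
        nudge_tilt(+1 if top > bottom else -1)   # tilt towards the brighter side
    if abs(left - right) > deadband:
        nudge_pan(+1 if left > right else -1)    # pan towards the brighter side
```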

Once the focus is correct, the lens cover is removed and the weed is burned to a crisp by the concentrated sunlight. It’s pretty neat to see how well this works, although [Nathan] recommends you keep an eye on the robot while it’s working and don’t let it near any flammable materials. He describes the build process in full detail in his video (embedded below), hopefully enabling other gardeners to make their own, improved weed burner robots. Agricultural engineers have long been working on automatic weed removal, often using similar machine vision systems with various extermination methods like lasers or flamethrowers.

Continue reading “Gardening Robot Uses Sunlight To Incinerate Weeds”

Hackaday Prize 2023: EyeBREAK Could Be A Breakthrough

For those with strokes or other debilitating conditions, control over one’s eyelid can be one of the last remaining motor functions. Inspired by [Jeremiah Denton] blinking in Morse code on a televised interview, [MBW] designed an ESP32-based device to decode blinks into words.

While an ESP32 offers Bluetooth for simulating a keyboard and has a relatively low power draw, getting a proper blink detection system to run at 20 frames per second in a constrained environment is challenging. Earlier attempts used facial landmarks to try and determine, based on ratios, whether an eye was open or closed. A cascade detector combined with an XGBoost classifier offered excellent performance but struggled when the eye wasn’t centered. Ultimately a 50×50, 4-layer CNN in TensorFlow Lite processes the camera frames, producing a single output, eye open or closed. For debugging purposes, it streams camera frames over Wi-Fi with annotations via OpenCV, though getting OpenCV to compile for ESP32 was also nontrivial.
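A network that small is only a handful of lines in Keras. Here's one plausible shape for it; beyond the 50×50 input, single output, and rough layer count that [MBW] describes, the filter sizes and layer choices are guesses:

```python
# One plausible shape for the 50x50 eye-state CNN; beyond the input size,
# single output and rough layer count, these choices are guesses.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(50, 50, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # 1 = eye open, 0 = closed
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```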

[MBW] trained the model using the MRL dataset and then quantized it to int8. Getting the Bluetooth and Wi-Fi stacks to run concurrently was a bit of a pain, as was managing RAM. After exhausting SRAM and IRAM, [MBW] had to move to PSRAM. The entire system is built into some lightweight goggles and makes for a fairly comfortable experience.
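The int8 quantization step is standard TensorFlow Lite fare: hand the converter a representative sample of inputs so it can pick scaling factors. A generic sketch follows, with the model path and MRL data loading as placeholders:

```python
# Typical full-integer quantization step; the model path and MRL sample
# loading here are placeholders, and [MBW]'s exact pipeline may differ.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("eye_state_model")    # trained 50x50 CNN

def representative_data():
    samples = np.load("mrl_eyes_sample.npy")              # a few hundred eye crops
    for img in samples[:200]:
        yield [img.reshape(1, 50, 50, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("eye_state_int8.tflite", "wb") as f:
    f.write(converter.convert())
```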

While TensorFlow and microcontrollers might seem like a bit of an odd couple, at the end of the day, the inference engine is just doing some math on an array of inputs with some weights. We’ve even seen TensorFlow Lite on a Commodore 64. If you don’t know about [Admiral Jeremiah Denton], we can shed some light on it for you.

Continue reading “Hackaday Prize 2023: EyeBREAK Could Be A Breakthrough”

How To Roll Your Own Custom Object Detection Neural Network

Real-time object detection, which uses neural networks and deep learning to rapidly identify and tag objects of interest in a video feed, is a handy feature with great hacker potential. Happily, it’s also possible to make customized CNNs (convolutional neural networks) tailored for one’s own needs, and that process just got easier thanks to some new documentation for the Vizy “AI camera” by Charmed Labs.

Raspberry Pi-based Vizy camera

Charmed Labs has been making hacker-friendly machine vision devices for a long time, and the Vizy camera impressed us mightily when we checked it out last year. Out of the box, Vizy has a perfectly functional object detector application that runs locally on the device, and can detect and tag many common everyday objects in real time. But what if that default application doesn’t quite meet one’s project needs? Good news, because it’s possible to create a custom-trained CNN, and that process got a lot more accessible thanks to step-by-step examples of training a model to recognize hands doing rock-paper-scissors.

Person and cat with machine-generated tags identifying them
Default object detection works well, but sometimes one needs custom results.

The basic process is this: start with a variety of images that show the item of interest, then identify and label that item in each photo. These photos (a “training set”) are uploaded to Google Colab, which is used to train a neural network. The resulting CNN model can then be downloaded and tested to see how well it performs.
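For a sense of scale, this kind of Colab workflow often boils down to a few calls to TensorFlow Lite Model Maker. The sketch below is a generic example rather than Charmed Labs' exact notebook, with placeholder paths and the rock-paper-scissors labels filled in:

```python
# Generic Colab-style training run with TensorFlow Lite Model Maker (not
# Charmed Labs' exact notebook; paths and label numbering are placeholders).
from tflite_model_maker import object_detector

labels = {1: "rock", 2: "paper", 3: "scissors"}
train_data = object_detector.DataLoader.from_pascal_voc(
    "images/train", "annotations/train", label_map=labels)
val_data = object_detector.DataLoader.from_pascal_voc(
    "images/val", "annotations/val", label_map=labels)

model = object_detector.create(train_data,
                               model_spec=object_detector.EfficientDetLite0Spec(),
                               validation_data=val_data,
                               epochs=50, batch_size=8)
model.export(export_dir=".")   # writes model.tflite, ready to download
```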

Of course things rarely work perfectly the first time around, so at this point it’s pretty common for some refinement to be needed to increase accuracy. Luckily there are a number of tools to help do this without creating a new model from scratch, so it’s just a matter of tweaking until things perform acceptably.

Google Colab is free and the resulting CNNs are implemented in the TensorFlow Lite framework, meaning it’s possible to use them elsewhere. So if custom object detection has been holding up a project idea of yours, this might be what gets you over that hump.

Machine Learning Gives Cats One More Way To Control Their Humans

For those who choose to let their cats live a more or less free-range life, there are usually two choices. One, you can adopt the role of servant and run for the door whenever the cat wants to get back inside from their latest bird-murdering jaunt. Or two, install a cat door and let them come and go as they please, sometimes with a “present” for you in their mouth. Heads you win, tails you lose.

There’s another way, though: just let the cat ask to be let back in. That’s the approach that [Tennis Smith] took with this machine-learning kitty doorbell. It’s based on a Raspberry Pi 4, which lives inside the house, and a USB microphone that’s outside the front door. The Pi uses TensorFlow Lite to classify the sounds it picks up outside, and when one of those sounds fits the model of a cat’s meow, a message is dispatched to AWS Lambda. From there, a text message is sent to alert [Tennis] that the cat is ready to come back in.
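The listen-classify-notify loop is conceptually simple. The sketch below shows one way it could be wired up, assuming a YAMNet-style TensorFlow Lite audio classifier and a hypothetical Lambda function name; it's an illustration of the approach, not [Tennis]'s actual code:

```python
# Sketch of the listen-classify-notify loop (not [Tennis]'s actual code).
# Assumes a YAMNet-style TensorFlow Lite audio classifier and a Lambda
# function named "cat-doorbell"; both are stand-ins.
import json
import boto3
import numpy as np
import sounddevice as sd
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="yamnet.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
labels = open("yamnet_labels.txt").read().splitlines()
lam = boto3.client("lambda")

while True:
    # Record roughly one second of audio from the USB microphone
    audio = sd.rec(15600, samplerate=16000, channels=1, dtype="float32")
    sd.wait()
    interpreter.set_tensor(inp["index"], audio.flatten())
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"]).reshape(-1, len(labels)).mean(axis=0)
    if labels[int(np.argmax(scores))] in ("Cat", "Meow"):
        lam.invoke(FunctionName="cat-doorbell",
                   Payload=json.dumps({"event": "meow"}))   # Lambda sends the text
```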

There’s a ton of useful information included in the repo for this project, including step-by-step instructions for getting Amazon Web Services working on the Pi. If you’re a dog person, fear not: changing from meows to barks is as simple as tweaking a single line of code. And if you’d rather not be at the beck and call of a cat but still want to avoid the evidence of a prey event on your carpet, machine learning can help with that too.

[via Tom’s Hardware]

TensorFlow Lite – On A Commodore 64

TensorFlow is a machine learning and AI library that has enabled so much and brought AI within the reach of most developers. But it’s fair to say that it isn’t aimed at less powerful computers. For those there’s TensorFlow Lite, in which a model is created on a larger machine and exported for use on a microcontroller or similarly resource-constrained device. [Nick Bild] has probably taken this to its extreme though, by achieving the feat on a Commodore 64. Not just that, but he’s also done it using Commodore BASIC.

TensorFlow Lite normally works by exporting the model as a C array, which is then parsed and run by an interpreter on the microcontroller. This is a little beyond the capabilities of the mighty 64, so he has instead created a Python script that does the job of the interpreter and produces Commodore BASIC code that can run on the 64. The trusty Commodore was one of the more powerful home computers of its day, but we’re fairly certain that its designers never in their wildest dreams expected it to be capable of this!
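To get a feel for the approach, here's a toy example of that kind of code generation: a Python snippet that takes the weights of one small dense layer and spits out Commodore BASIC, with the weights baked in as DATA statements. It's purely illustrative and much simpler than [Nick Bild]'s actual script:

```python
# Toy illustration of the code-generation idea, nowhere near [Nick Bild]'s
# actual script: emit Commodore BASIC that evaluates one small dense layer,
# with the weights baked in as DATA statements.
import numpy as np

W = np.round(np.random.rand(3, 2), 2)   # stand-in for weights pulled from a model

basic = [
    "10 DIM X(2):DIM Y(1):DIM W(2,1)",
    "20 FOR I=0 TO 2:FOR J=0 TO 1:READ W(I,J):NEXT J:NEXT I",
    "30 INPUT X(0),X(1),X(2)",
    "40 FOR J=0 TO 1:Y(J)=0:FOR I=0 TO 2:Y(J)=Y(J)+X(I)*W(I,J):NEXT I:NEXT J",
    "50 PRINT Y(0),Y(1)",
    "60 DATA " + ",".join(str(w) for w in W.flatten()),
]
print("\n".join(basic))                  # paste (or transfer) the result to the C64
```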

If you’d like to know more about TensorFlow Lite, we’ve covered it in the past.

Header: MOS6502, CC BY-SA 3.0.

Edging Ahead When Learning On The Edge

“With the power of edge AI in the palm of your hand, your business will be unstoppable.”

That’s what the marketing from artificial intelligence companies seems to read like. Everyone seems to have cloud-scale AI-powered business intelligence analytics at the edge. While it sounds impressive, we’re not convinced that the marketing mumbo jumbo means anything. But what does AI on edge devices look like these days?

Being on the edge just means that the actual AI evaluation and maybe even fine-tuning runs locally on a user’s device rather than in some cloud environment. This is a double win, both for the business and for the user. Privacy can more easily be preserved as less information is transmitted back to a central location. Additionally, the AI can work in scenarios where a server somewhere might not be accessible or provide a response quickly enough.

Google and Apple have their own AI libraries, ML Kit and Core ML, respectively. There are tools to convert TensorFlow, PyTorch, XGBoost, and LibSVM models into formats that Core ML and ML Kit understand. But other solutions try to provide a platform-agnostic layer for training and evaluation. We’ve also previously covered TensorFlow Lite (TFL), a trimmed-down version of TensorFlow, which has matured considerably since 2017.
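As an example of what those conversion tools look like in practice, here's a sketch of exporting a PyTorch model to Core ML with coremltools; the model and input shape are placeholders:

```python
# Sketch of one such conversion: a PyTorch model exported to Core ML with
# coremltools (the model and input shape here are placeholders).
import torch
import torchvision
import coremltools as ct

model = torchvision.models.mobilenet_v2(pretrained=True).eval()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)        # Core ML conversion wants a traced graph

mlmodel = ct.convert(traced, inputs=[ct.TensorType(shape=example.shape)])
mlmodel.save("MobileNetV2.mlmodel")             # drop into an Xcode project
```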

For this article, we’ll be looking at PyTorch Live (PTL), a slimmed-down framework for adding PyTorch models to smartphones. Unlike TFL (which can run on a Raspberry Pi and in a browser), PTL is focused entirely on Android and iOS and offers tight integration. It uses a React Native-backed environment, which means it is heavily geared towards the Node.js world.

Continue reading “Edging Ahead When Learning On The Edge”