Dog Poop Drone Cleans Up The Yard So You Don’t Have To

Sometimes you instantly know who’s behind a project from the subject matter alone. So when we saw this “aerial dog poop removal system” show up in the tips line, we knew it had to be the work of [Caleb Olson].

If you’re unfamiliar with [Caleb]’s oeuvre, let us refresh your memory. [Caleb] has been on a bit of a dog poop journey, starting with a machine-learning system that analyzed security camera footage to detect when the adorable [Twinkie] dropped a deuce in the yard. Not content with just knowing when a poop event has occurred, he automated the task of locating the packages with a poop-pointing robot laser. Removal of the poop remained a manual task, one which [Caleb] was keen to outsource, hence the current work.

The video below, from a lightning talk at a conference, is pretty much all we have to go on, and the quality is a bit potato-esque. And while [Caleb]’s PoopCopter is clearly still a prototype, it’s easy to get the gist. Combining data from the previous poop-adjacent efforts, [Caleb] has built a quadcopter that can (or will, someday) be guided to the approximate location of the offending package, home in on it using a downward-looking camera, and autonomously whisk it away.
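There’s no code published yet, but the homing step [Caleb] describes is a classic visual-servoing problem: find the target in the downward-looking camera, measure how far it sits from the image center, and nudge the drone to cancel that error. A minimal Python sketch of the idea might look like this; the detector callback, gain, and frame handling are placeholders for illustration, not anything from the actual PoopCopter:

```python
import numpy as np

K_P = 0.002   # proportional gain mapping pixel error to velocity (made-up value)

def homing_command(frame: np.ndarray, detect_target):
    """Compute x/y velocity commands that center the drone over the target.

    `detect_target` stands in for whatever detector the drone uses and should
    return a bounding box (x, y, w, h) in pixels, or None if nothing is found."""
    height, width = frame.shape[:2]
    box = detect_target(frame)
    if box is None:
        return None                      # nothing seen; hold position
    x, y, w, h = box
    err_x = (x + w / 2) - width / 2      # pixel offset from image center
    err_y = (y + h / 2) - height / 2
    return (K_P * err_x, K_P * err_y)    # simple proportional controller
```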

The retrieval mechanism is the high point for us; rather than a complicated, servo-laden “sky scoop” or something similar, the drone has a bell-shaped container on its belly with a series of geared leaves on the open end. The leaves are open when the drone descends onto the payload, and then close as the drone does a quick rotation around the yaw axis. And, as [Caleb] gleefully notes, the leaves can also open in midair with a high-torque yaw move in the opposite direction; the potential for neighborly hijinx is staggering.

All jokes and puns aside, this looks fantastic, and we can’t wait for more information and a better video. And lest you think [Caleb] only works on “Number Two” problems, never fear — he’s also put considerable work into automating his offspring and taking the awkwardness out of social interactions.


Mothbox Watches Bugs, So You — Or Your Grad Students — Don’t Have To

To the extent that one has strong feelings about insects, they tend toward the extremes of a spectrum that runs from complete fascination with their diversity and the specializations they’ve evolved to exploit unique and ultra-narrow ecological niches, to “Eww, ick! Kill it!” It’s pretty clear that [Dr. Andy Quitmeyer] and his team tend toward the former, and while they love their bugs, spending all night watching them is a tough enough gig that they came up with Mothbox, the automated insect monitor.

Insect censuses are valuable tools for assessing the state of an ecosystem, especially given insects’ vast numbers, short lifespans, and proximity to the base of the food chain. Mothbox is designed to be deployed in insect-rich environments and automatically recognize and tally the moths it sees. It uses an Arducam and Raspberry Pi for image capture, plus an array of UV and visible LEDs, all in a weatherproof enclosure. The moths are attracted to the light and fly between the camera and a plain white background, where an image is captured. YOLO v8 locates all the moths in the image, crops them out, and sends them to BioCLIP, a vision model for organismal biology that appears similar to something we’ve seen before. The model automatically sorts the moths by taxonomic features and keeps a running tally of which species it sees.
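Mothbox’s own code lives in its repository, but the detect-crop-classify-tally loop described above is straightforward to sketch. Here’s a hedged Python approximation using the ultralytics YOLOv8 API; the weights file and the `classify_crop` callback (standing in for the BioCLIP step) are placeholders rather than the project’s actual models:

```python
from collections import Counter
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")     # stand-in weights, not Mothbox's trained model
species_tally = Counter()

def process_capture(image_path, classify_crop):
    """Detect every moth in one capture, classify each crop, update the tally.

    `classify_crop` stands in for the BioCLIP call and should return a
    species (or higher taxon) name for a cropped image."""
    result = detector(image_path)[0]
    for box in result.boxes.xyxy.tolist():
        x1, y1, x2, y2 = map(int, box)
        crop = result.orig_img[y1:y2, x1:x2]    # cut the moth out of the frame
        species_tally[classify_crop(crop)] += 1
    return species_tally
```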

Mothbox is open source and the site has a ton of build information if you’re keen to start bug hunting, plus plenty of pictures of actual deployments, which should serve as nightmare fuel to the insectophobes out there.


Olympic Sprint Decided By 40,000 FPS Photo Finish

Advanced technology played a crucial role in determining the winner of the men’s 100-meter final at the Paris 2024 Olympics. In a historically close race, American sprinter Noah Lyles narrowly edged out Jamaica’s Kishane Thompson by just five-thousandths of a second. The final decision relied on an image captured by an Omega photo finish camera that shoots an astonishing 40,000 frames per second.

This cutting-edge technology, originally reported by PetaPixel, ensured the accuracy of the result in a race where both athletes recorded an official time of 9.79 seconds. If SmartThings’ shot pourer from the 2012 Olympics were still around, it could once again fulfill its intended role of celebrating US medals.

Omega, the Olympics’ official timekeeper for decades, has continually innovated to enhance performance measurement. The Omega Scan ‘O’ Vision Ultimate, the camera used for this photo finish, is a significant upgrade from its 10,000 frames per second predecessor. The new system captures four times as many frames per second and offers higher resolution, providing a detailed view of the moment each runner’s torso touches the finish line. This level of detail was crucial in determining that Lyles’ torso touched the line first, securing his gold medal.

This camera is part of Omega’s broader technological advancements for the Paris 2024 Olympics, which include advanced computer vision systems that use AI and high-definition cameras to track athletes in real time. For a closer look at how technology decided this historic race, watch the video by Eurosport that captured the event.


Try Image Classification Running In Your Browser, Thanks To WebGPU

When something does zero-shot image classification, that means it’s able to make judgments about the contents of an image without the user needing to train the system beforehand on what to look for. Watch it in action with this online demo, which uses WebGPU to implement CLIP (Contrastive Language–Image Pre-training) running in one’s browser, using the input from an attached camera.

Given some natural-language visual concept labels (such as ‘person’ or ‘cat’) that fit a hypothetical template for the image content, the system outputs, in real time, its judgement of how well each label fits what the camera sees. Again, all of this runs locally.

It’s maybe a little bit unintuitive, but what’s happening in the demo is that the system is deciding which of the user-provided labels (“a photo of a cat” vs “a photo of a bald man”, for example) is most appropriate to what the camera sees. The more a particular label is judged a good fit for the image, the higher the number beside it.
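The demo itself runs CLIP in the browser on top of WebGPU, but the same zero-shot scoring is easy to reproduce in Python with the Hugging Face implementation of CLIP. The snippet below illustrates the idea rather than reproducing the demo’s code; the label list and image file are placeholders:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a cat", "a photo of a bald man"]
image = Image.open("webcam_frame.jpg")   # stand-in for a live camera frame

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
# Higher score = better fit between that label and the image
probs = outputs.logits_per_image.softmax(dim=1)[0]
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```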

This kind of process benefits greatly from shoveling the hard parts of the computation onto compatible graphics cards, which is exactly what WebGPU provides by allowing the browser access to a local GPU. WebGPU is relatively recent, but we’ve already seen it used to run LLMs (Large Language Models) directly in the browser.

Wondering what makes GPUs so very useful for AI-type applications? It’s all about their ability to work with enormous amounts of data very quickly.

The Aimbot V3 Aims To Track & Terminate You

Some projects we cover are simple, while some descend into the sort of obsessive, rabbit-hole-digging-into-wonderland madness that hackers everywhere will recognize. That’s precisely where [Excessive Overload] has gone with the Aimbot V3, a target-tracking BB gun that uses three cameras, two industrial servos, and an indeterminate amount of computing power to track objects and fire up to 40 BBs a second at them.

The whole project is overkill, made of CNC-machined metal, epoxy-cast gears, and a chain-driven pan-tilt system that looks like it would take off a finger or two before you even get to the shooty bit. That’s driven by input from the three cameras: a wide-angle one that finds the target, and a stereo pair that zooms in and determines the target’s distance from the gun using several hundred frames per second of video. This data is then used to aim the gun itself, built around a Polarstar mechanism that fires up to 40 BBs a second, fed by a customized feeder that uses spring wire.
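We don’t have [Excessive Overload]’s ranging code, but stereo distance estimation generally comes down to the standard disparity relation, depth = focal length × baseline / disparity. A toy Python version with made-up calibration numbers:

```python
# Standard pinhole stereo ranging; all numbers below are illustrative,
# not the Aimbot's actual calibration.
FOCAL_LENGTH_PX = 1200.0   # focal length expressed in pixels
BASELINE_M = 0.12          # distance between the two cameras, in meters

def target_distance(x_left_px, x_right_px):
    """Distance to the target from the horizontal pixel positions of the
    same feature in the left and right camera images."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("target must appear further left in the left image")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity

print(target_distance(640.0, 610.0))   # ~4.8 m for these made-up numbers
```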

The whole thing comes together to form a huge gun that will automatically track the target. It even uses motion tracking to distinguish between a static object like a person and a dart fired by a toy gun, picking the dart out of the air at least some of the time.

The downside is that it only works on targets sporting a retroreflective patch: a 15-watt IR LED on the front of the gun lights up the scene, and the camera picks out the bright reflection and uses it to track the target. So all you have to do to avoid this particular Terminator is make sure you aren’t wearing anything too shiny.
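Again, the actual source isn’t public, but picking a retroreflective patch out of an IR-lit frame is a textbook bright-blob problem. A hedged OpenCV sketch, with an arbitrarily chosen threshold:

```python
import cv2

def find_retroreflector(frame_gray, threshold=240):
    """Return the (x, y) centroid of the brightest blob, or None.

    The IR-illuminated retroreflective patch shows up as a near-saturated spot,
    so a simple intensity threshold is usually enough to isolate it."""
    _, mask = cv2.threshold(frame_gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)     # keep the largest bright spot
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```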


Keeping Badgers At Bay With Tensorflow

Human-animal conflict is always a contentious issue, and finding ways to prevent damage without causing harm to the animals often requires creative solutions. [James Milward] needed a humane method to stop badgers and foxes from uprooting his garden, leading him to create the Furbinator 3000, a system that combines computer vision with audio deterrents.

[James] initially tried using scent repellents (which were ignored) and blocking access to his garden (which just resulted in more digging), but found some success with commercial ultrasonic audio repellent devices. However, these had to be manually turned off during the day to keep [James] and his family from constantly triggering the PIR motion sensors, and the integrated solar panels couldn’t keep up with the load.

This presented a good opportunity to try his hand at practical machine vision. He already had a substantial number of sample images from the Ring cameras in his garden, which he turned into a functional TensorFlow Lite model with about 2.5 hours of training. He linked it with event-activated RTSP streams from his Ring cameras using the ring-mqtt library. To minimize false positives on stationary objects, he incorporated a motion filter into the processing pipeline. When it identifies a fox or badger with reasonable accuracy, it generates an MQTT event.
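[James]’s pipeline glues ring-mqtt, the TensorFlow Lite model, and an MQTT broker together; the inference-and-publish step might look something like the sketch below. The model filename, label map, output-tensor order (assumed to match a common SSD-style export), and MQTT topic are all assumptions for illustration, not taken from the Furbinator code:

```python
import numpy as np
import paho.mqtt.publish as publish
from tflite_runtime.interpreter import Interpreter   # or tensorflow.lite.Interpreter

LABELS = {0: "fox", 1: "badger"}      # assumed label map
CONFIDENCE = 0.6                      # "reasonable accuracy" threshold (assumed)

interpreter = Interpreter(model_path="garden_critters.tflite")  # stand-in filename
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()

def check_frame(frame):
    """Run one (already resized) camera frame through the detector and publish
    an MQTT event when a fox or badger shows up with decent confidence."""
    interpreter.set_tensor(inp["index"], np.expand_dims(frame, 0))
    interpreter.invoke()
    # Output order assumed to follow the common SSD export: boxes, classes, scores, count
    classes = interpreter.get_tensor(out[1]["index"])[0]
    scores = interpreter.get_tensor(out[2]["index"])[0]
    for cls, score in zip(classes, scores):
        label = LABELS.get(int(cls))
        if label and score >= CONFIDENCE:
            publish.single(f"furbinator/detected/{label}", payload=f"{score:.2f}",
                           hostname="localhost")      # assumed topic and broker
```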

[James] modified the ultrasonic devices so they would react to these events using an ESP8266-based WeMos D1 Mini Pro development board and added an external 5 V power supply for sustained operation. All development was performed in a Docker container which simplified deployment on a Raspberry Pi 4.

After implementing the system, [James] woke up to the satisfying sight of his garden remaining untouched overnight, a victory that even earned him some coverage by the BBC.

Thanks for the tip, [Laurent]!

Autonomous Wheelchair Lets Jetson Do The Driving

Compared to their manual counterparts, electric wheelchairs are far less demanding to operate, as the user doesn’t need the upper body strength normally required to turn the wheels. But even a motorized wheelchair needs some kind of input from the user to control it, which may still pose a considerable challenge depending on the individual’s specific abilities.

Hoping to improve on the situation, [Kabilan KB] has developed a self-driving electric wheelchair that can navigate around obstacles by feeding the output of an Intel RealSense Depth Camera and LiDAR module into a Jetson Nano Developer Kit running OpenCV. To control the actual motors, the Jetson is connected to an Arduino which in turn is wired into a common L298N motor driver board.
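[Kabilan] hasn’t shared his exact control loop, but a depth-camera obstacle avoider of this sort usually reduces to “check the clearance ahead, steer toward the most open region, and tell the motor controller what to do.” Here’s a hedged Python sketch; the serial port, the single-character command protocol to the Arduino, and the distance threshold are all invented for illustration, not the project’s actual firmware:

```python
import numpy as np
import serial

arduino = serial.Serial("/dev/ttyUSB0", 9600)   # assumed port and baud rate
STOP_DISTANCE_MM = 800                          # illustrative clearance threshold

def nearest_distance(region):
    """10th-percentile depth in a region, ignoring invalid zero-depth pixels."""
    valid = region[region > 0]
    return np.percentile(valid, 10) if valid.size else 0

def steer_from_depth(depth_mm: np.ndarray):
    """Choose a drive command from a depth image (in millimeters) and send it.

    'F' = forward, 'L'/'R' = turn, 'S' = stop; this one-byte protocol is an
    assumption about how the Jetson might talk to the Arduino/L298N."""
    h, _ = depth_mm.shape
    left, center, right = np.array_split(depth_mm[h // 2:], 3, axis=1)
    if nearest_distance(center) > STOP_DISTANCE_MM:
        cmd = b"F"
    elif max(nearest_distance(left), nearest_distance(right)) > STOP_DISTANCE_MM:
        cmd = b"L" if nearest_distance(left) > nearest_distance(right) else b"R"
    else:
        cmd = b"S"
    arduino.write(cmd)
    return cmd
```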

As [Kabilan] explains on the NVIDIA Blog, he specifically chose off-the-shelf components and the most affordable electric wheelchair he could find to bring the total cost of the project as low as possible. An undergraduate from the Karunya Institute of Technology and Sciences in Coimbatore, India, he notes that this sort of assistive technology is usually only available to more affluent patients. With his cost-saving measures, he hopes to address that imbalance.

While automatic obstacle avoidance would already be a big help for many users, [Kabilan] imagines improved software taking things a step further. For example, a user could simply press a button to indicate which room of the house they want to move to, and the chair could drive itself there automatically. With increasingly powerful single-board computers and the state of open source self-driving technology steadily improving, it’s not hard to imagine a future where this kind of technology is commonplace.