Custom Drone Software Searches, Rescues

When a new technology first arrives in people’s hands, it often takes a while before its full capabilities are realized. In much the same way that many early Internet users simply treated it as a replacement for snail mail, and early smartphones served mostly as more convenient ways to message and call than their flip-phone cousins, autonomous drones were at first used as drop-in replacements for things like aerial photography. A group of mountain rescue volunteers in the United Kingdom realized they could be put to work in more efficient ways suited to their unique abilities, and that group has been behind a bit of a revolution in the search-and-rescue community.

The first search-and-rescue groups using drones generally flew them the way a helicopter would have been used in the past, only at less expense; the effort involved was still the same, though, since a human still had to do the searching. The UK group devised an improved system that takes the human effort out of the equation: a drone flies autonomously over a piece of mountainous terrain and photographs the ground in such a way that any one spot appears in many individual images. The drone then flies back to its base station, where an operator downloads the images and runs them through a computer program that analyzes them for outliers in the colors of individual pixels. Humans tend to stand out against their backgrounds in ways that computers are good at spotting but that human searchers might not notice at all, and in the group’s first effort to find a missing person, the system turned them up almost immediately.
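
The group’s actual detection code isn’t reproduced in the write-up, but the core idea, flagging pixels whose color is statistically unusual for the scene, can be sketched in a few lines of Python with NumPy and OpenCV. The color space, threshold, and blob size below are assumptions for illustration, not the volunteers’ real settings:

import cv2
import numpy as np

def find_color_outliers(image_path, z_thresh=4.0, min_blob_px=20):
    """Flag pixels whose color deviates strongly from the frame's overall
    color statistics, a rough stand-in for the outlier search described
    above (threshold and blob size are guesses, not the group's settings)."""
    img = cv2.imread(image_path)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)

    # Per-channel mean and standard deviation over the whole frame
    mean = lab.reshape(-1, 3).mean(axis=0)
    std = lab.reshape(-1, 3).std(axis=0) + 1e-6

    # How many standard deviations is each pixel from the typical terrain color?
    z = np.linalg.norm((lab - mean) / std, axis=2)
    mask = (z > z_thresh).astype(np.uint8)

    # Group outlier pixels into blobs and keep only those big enough to matter
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_blob_px]

print(find_color_outliers("survey_0001.jpg"))   # (x, y) spots worth a second look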

Although it is built on mapping data somewhat unique to the UK, the group has not attempted to commercialize the system; MR Maps, the software underpinning this new feature, is free for anyone who wants to use it. For those just starting out in this field, it’s also worth pointing out that the location services offered by modern devices can be misleading in rugged terrain like this, and aren’t as straightforward a solution to the problem as one might think.

Drive For Show, Putt For Dough

Any golfer will attest that the most impressive-looking part of the game, the long drive, isn’t where the game is won. To really lower one’s handicap, the most important skills to develop are in the short game, especially putting. Even a two-inch putt to close out a hole counts the same as the longest drive, so these skills are not only difficult to master but incredibly valuable. To shortcut some of the skill development, though, [Sparks and Code] broke most of the rules around golf club design to construct this robotic putter.

The putter’s goal is to help the golfer with some of the finesse required to master the short game. It varies its striking force by lifting the club face a set amount with an electromagnet, depending on the distance needed to sink the putt. Two servos lift the electromagnet and club; when the appropriate height is reached, the electromagnet turns off and the club swings down to strike the ball. The two servos can also oppose each other to skew the face and help aim the ball, letting the club strike at an angle rather than straight on. The club also has built-in rangefinding and a computer vision system, so it can identify the hole automatically and determine exactly how it should hit the ball. The only thing the user needs to do is press a button on the shaft of the club.
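
The write-up doesn’t spell out the control math, but the problem boils down to two numbers: how high the servos lift the club before the electromagnet lets go, and how much the two servos disagree to skew the face for aim. Here’s a toy model in Python; every constant is a made-up placeholder rather than anything from the actual build:

import math

G = 9.81           # m/s^2
GREEN_DECEL = 0.6  # m/s^2, assumed rolling deceleration of a ball on a green
ENERGY_XFER = 0.8  # assumed fraction of club-head speed passed to the ball

def drop_height_for_putt(distance_m):
    """Pick how high to lift the club so the released swing gives the ball
    just enough speed to roll distance_m and stop (simple energy balance)."""
    ball_speed = math.sqrt(2 * GREEN_DECEL * distance_m)  # v^2 = 2*a*d
    club_speed = ball_speed / ENERGY_XFER
    return club_speed ** 2 / (2 * G)                      # h = v^2 / (2*g)

def servo_angles(drop_height_m, aim_deg, lift_deg_per_m=400):
    """Equal lift on both servos sets the power; a small differential skews
    the face to steer left or right. The gains here are placeholders."""
    base = drop_height_m * lift_deg_per_m
    return base + aim_deg, base - aim_deg   # (left servo, right servo)

h = drop_height_for_putt(3.0)               # a 3 m putt
print(round(h, 3), servo_angles(h, aim_deg=2.0))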

Even the most famous golfers have trouble putting from time to time, so if you’re willing to skirt the rules a bit, the club might be useful to have around. If not, it’s at least a fun project to show off on the golf course and build some credibility among fellow robotics enthusiasts who also happen to be golfers. If you’re looking for something that acts more as a coach or aide than an outright cheat, though, this golf club helps analyze and perfect your swing instead of doing everything for you.


Dog Poop Drone Cleans Up The Yard So You Don’t Have To

Sometimes you instantly know who’s behind a project from the subject matter alone. So when we saw this “aerial dog poop removal system” show up in the tips line, we knew it had to be the work of [Caleb Olson].

If you’re unfamiliar with [Caleb]’s oeuvre, let us refresh your memory. [Caleb] has been on a bit of a dog poop journey, starting with a machine-learning system that analyzed security camera footage to detect when the adorable [Twinkie] dropped a deuce in the yard. Not content with just knowing when a poop event has occurred, he automated the task of locating the packages with a poop-pointing robot laser. Removal of the poop remained a manual task, one which [Caleb] was keen to outsource, hence the current work.

The video below, from a lightning talk at a conference, is pretty much all we have to go on, and the quality is a bit potato-esque. And while [Caleb]’s PoopCopter is clearly still a prototype, it’s easy to get the gist. Combining data from the previous poop-adjacent efforts, [Caleb] has built a quadcopter that can (or will, someday) be guided to the approximate location of the offending package, home in on it using a downward-looking camera, and autonomously whisk it away.

The retrieval mechanism is the high point for us; rather than a complicated, servo-laden “sky scoop” or something similar, the drone has a bell-shaped container on its belly with a series of geared leaves on the open end. The leaves are open when the drone descends onto the payload, and then close as the drone does a quick rotation around the yaw axis. And, as [Caleb] gleefully notes, the leaves can also open in midair with a high-torque yaw move in the opposite direction; the potential for neighborly hijinx is staggering.
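
[Caleb] hasn’t published flight code yet, so the following is purely a hypothetical sketch of that capture twist, assuming a MAVLink-speaking flight controller driven through pymavlink. The command is the standard MAV_CMD_CONDITION_YAW, but the angles, speeds, and connection string are guesses:

from pymavlink import mavutil

def spin_collector(conn, degrees=180, deg_per_s=180, clockwise=1):
    """Command a relative yaw rotation: the quick twist that screws the geared
    leaves shut (or, spun hard the other way, flings them open mid-air)."""
    conn.mav.command_long_send(
        conn.target_system, conn.target_component,
        mavutil.mavlink.MAV_CMD_CONDITION_YAW, 0,
        degrees,     # param1: angle to turn through
        deg_per_s,   # param2: angular speed
        clockwise,   # param3: 1 = clockwise, -1 = counter-clockwise
        1,           # param4: 1 = relative to current heading
        0, 0, 0)

conn = mavutil.mavlink_connection("udpin:0.0.0.0:14550")  # placeholder link
conn.wait_heartbeat()
spin_collector(conn, clockwise=1)                   # descend, then close the leaves
spin_collector(conn, deg_per_s=360, clockwise=-1)   # or the high-torque release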

All jokes and puns aside, this looks fantastic, and we can’t wait for more information and a better video. And lest you think [Caleb] only works on “Number Two” problems, never fear — he’s also put considerable work into automating his offspring and taking the awkwardness out of social interactions.


Mothbox Watches Bugs, So You — Or Your Grad Students — Don’t Have To

To the extent that one has strong feelings about insects, they tend toward the extremes of a spectrum that runs from complete fascination with their diversity and the specializations they’ve evolved to exploit unique and ultra-narrow ecological niches, to “Eww, ick! Kill it!” It’s pretty clear that [Dr. Andy Quitmeyer] and his team tend toward the former, and while they love their bugs, spending all night watching them is a tough enough gig that they came up with Mothbox, the automated insect monitor.

Insect censuses are valuable tools for assessing the state of an ecosystem, especially given insects’ vast numbers, short lifespans, and proximity to the base of the food chain. Mothbox is designed to be deployed in insect-rich environments and automatically recognize and tally the moths it sees. It uses an Arducam and Raspberry Pi for image capture, plus an array of UV and visible LEDs, all in a weatherproof enclosure. The moths are attracted to the light and fly between the camera and a plain white background, where an image is captured. YOLO v8 locates all the moths in the image, crops them out, and sends them to BioCLIP, a vision model for organismal biology that appears similar to something we’ve seen before. The model automatically sorts the moths by taxonomic features and keeps a running tally of which species it sees.
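
Mothbox’s own code isn’t quoted here, but the detect-crop-classify pipeline can be sketched with the off-the-shelf pieces named above: Ultralytics’ YOLOv8 for detection and the BioCLIP checkpoint (published for use with the open_clip library) for zero-shot species scoring. The detector weights, candidate species list, and prompt template below are stand-ins, not the project’s actual configuration:

from collections import Counter
import torch
import open_clip
from PIL import Image
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")   # stand-in weights; Mothbox trains its own moth detector
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:imageomics/bioclip")
tokenizer = open_clip.get_tokenizer("hf-hub:imageomics/bioclip")

species = ["Actias luna", "Hyles lineata", "Automeris io"]   # hypothetical shortlist
text_tokens = tokenizer([f"a photo of {s}" for s in species])
counts = Counter()

def tally(frame_path):
    """Detect every moth in a frame, crop it out, and zero-shot classify the crop."""
    frame = Image.open(frame_path).convert("RGB")
    for box in detector(frame_path)[0].boxes.xyxy:            # one box per moth
        crop = preprocess(frame.crop(tuple(box.tolist()))).unsqueeze(0)
        with torch.no_grad():
            img_f = model.encode_image(crop)
            txt_f = model.encode_text(text_tokens)
            img_f = img_f / img_f.norm(dim=-1, keepdim=True)
            txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)
            probs = (100.0 * img_f @ txt_f.T).softmax(dim=-1)[0]
        counts[species[int(probs.argmax())]] += 1             # keep the running tally
    return counts

print(tally("mothbox_frame.jpg"))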

Mothbox is open source and the site has a ton of build information if you’re keen to start bug hunting, plus plenty of pictures of actual deployments, which should serve as nightmare fuel to the insectophobes out there.


Olympic Sprint Decided By 40,000 FPS Photo Finish

Advanced technology played a crucial role in determining the winner of the men’s 100-meter final at the Paris 2024 Olympics. In a historically close race, American sprinter Noah Lyles narrowly edged out Jamaica’s Kishane Thompson by just five-thousandths of a second. The final decision relied on an image captured by an Omega photo finish camera that shoots an astonishing 40,000 frames per second.

This cutting-edge technology, originally reported by PetaPixel, ensured the accuracy of the result in a race where both athletes recorded a time of 9.78 seconds. If SmartThings’ shot pourer from the 2012 Olympics were still around, it could once again fulfill its intended role of celebrating US medals.

Omega, the Olympics’ official timekeeper for decades, has continually innovated to enhance performance measurement. The Omega Scan ‘O’ Vision Ultimate, the camera used for this photo finish, is a significant upgrade from its 10,000 frames per second predecessor. The new system captures four times as many frames per second and offers higher resolution, providing a detailed view of the moment each runner’s torso touches the finish line. This level of detail was crucial in determining that Lyles’ torso touched the line first, securing his gold medal.
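
Some quick back-of-the-envelope arithmetic shows why the extra frames matter; the sprinter speed below is an assumed round number, while the rest comes straight from the figures above:

margin_s = 0.005     # Lyles' winning margin: five thousandths of a second
fps_new  = 40_000    # Scan 'O' Vision Ultimate frame rate
fps_old  = 10_000    # previous-generation camera
sprint_v = 12.0      # m/s, rough top speed of an elite sprinter (assumed)

print(margin_s * fps_new)    # 200.0 frames separate the two torsos
print(margin_s * fps_old)    # 50.0 frames on the older camera
print(margin_s * sprint_v)   # 0.06 m, roughly six centimetres at the line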

This camera is part of Omega’s broader technological advancements for the Paris 2024 Olympics, which include computer vision systems that use AI and high-definition cameras to track athletes in real time. For a closer look at how technology decided this historic race, watch the video by Eurosport that captured the event.


Try Image Classification Running In Your Browser, Thanks To WebGPU

When something does zero-shot image classification, that means it’s able to make judgments about the contents of an image without the user needing to train the system beforehand on what to look for. Watch it in action with this online demo, which uses WebGPU to implement CLIP (Contrastive Language–Image Pre-training) running in one’s browser, using the input from an attached camera.

Give the program some natural-language visual concept labels (such as ‘person’ or ‘cat’) that fit a hypothetical template for the image content, and the system will output, in real time, its judgment of how well each label fits what the camera sees. Again, all of this runs locally.

It’s maybe a little bit unintuitive, but what’s happening in the demo is that the system is deciding which of the user-provided labels (“a photo of a cat” vs “a photo of a bald man”, for example) is most appropriate to what the camera sees. The more a particular label is judged a good fit for the image, the higher the number beside it.
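
The demo itself runs in the browser on WebGPU, but the scoring it performs is ordinary CLIP zero-shot classification, which looks roughly like this in Python using Hugging Face’s transformers library (the checkpoint and labels here are examples, not necessarily what the demo loads):

from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a cat", "a photo of a bald man"]
image = Image.open("webcam_frame.jpg")

# Embed the image and every label in the same space, then score each label
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]

for label, p in zip(labels, probs.tolist()):
    print(f"{p:.2f}  {label}")   # a higher number means a better fit, as in the demo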

This kind of process benefits greatly from shoveling the hard parts of the computation onto compatible graphics cards, which is exactly what WebGPU provides by allowing the browser access to a local GPU. WebGPU is relatively recent, but we’ve already seen it used to run LLMs (Large Language Models) directly in the browser.

Wondering what makes GPUs so very useful for AI-type applications? It’s all about their ability to work with enormous amounts of data very quickly.

The Aimbot V3 Aims To Track & Terminate You

Some projects we cover are simple, while some descend into the sort of obsessive, rabbit-hole-digging-into-wonderland madness that hackers everywhere will recognize. That’s precisely where [Excessive Overload] has gone with the AimBot V3, a target-tracking BB-gun that uses three cameras, two industrial servos, and an indeterminate amount of computing power to track objects and fire up to 40 BB gun pellets a second at them.

The whole project is overkill, made of CNC-machined metal, epoxy-cast gears, and a chain-driven pan-tilt system that looks like it would take off a finger or two before you even get to the shooty bit. That’s driven by input from the three cameras: a wide-angle one that finds the target, and a stereo pair that zooms in and determines the target’s distance from the gun using several hundred frames per second of video. This is then used to aim the BB gun itself, a Polarstar mechanism that fires up to 40 pellets a second, fed by a customized feeder that uses spring wire.
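
The project doesn’t give its ranging math, but distance from a calibrated stereo pair comes down to the classic disparity relation, depth = focal length × baseline / disparity. A minimal sketch, with the focal length and camera spacing assumed rather than taken from the build:

def stereo_depth_m(disparity_px, focal_px=1400.0, baseline_m=0.12):
    """Pinhole stereo relation Z = f * B / d; the focal length and camera
    spacing are placeholders, not the AimBot's actual optics."""
    return focal_px * baseline_m / disparity_px

# A target whose image shifts 40 pixels between the two cameras:
print(stereo_depth_m(40))   # 4.2 metres from the gun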

The whole thing comes together to form a huge gun that automatically tracks its target. It even uses motion tracking to distinguish a relatively static target, like a person, from a dart fired by a toy gun, picking the dart out of the air at least some of the time.

The downside is that it only works on targets sporting a retroreflective patch: a 15-watt IR LED on the front of the gun floods the scene, and the camera detects the bright reflection and uses it to track the target. So all you have to do to avoid this particular Terminator is make sure you aren’t wearing anything too shiny.
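
Tracking a retroreflector lit by an IR flood is about the easiest computer vision job there is: threshold the brightest pixels and take their centroid. A minimal OpenCV sketch along those lines (the threshold and camera index are assumptions, not details from the project):

import cv2

def find_retroreflector(frame_gray, thresh=240):
    """Return the (x, y) centre of the bright reflection, or None if nothing
    shiny is in view. With a strong IR flood and a retroreflective patch, the
    return is so much brighter than the scene that a fixed threshold works."""
    _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

cap = cv2.VideoCapture(0)           # placeholder camera index
ok, frame = cap.read()
if ok:
    print(find_retroreflector(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))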
