Computer Vision Extracts Lightning From Footage

Lightning is one of the more mysterious and fascinating phenomena on the planet. It’s extremely powerful, yet each strike on average carries only enough energy to power an incandescent bulb for an hour. The exact mechanism that starts a lightning strike is still not well understood, yet it happens 45 times per second somewhere on the planet. While we may not gain a deeper scientific understanding of lightning anytime soon, we can at least photograph it, thanks to this project which uses machine-learning computer vision to pull the best lightning frames out of video footage.

The project’s creator, [Liam], built this as a tool for storm chasers and photographers so they can film for hours at a stretch without having to comb back through the footage manually to find the frames with lightning strikes. The project borrows from a similar one, but this version adds Python 3 support and runs on a tiny netbook for easier field deployment. It uses OpenCV for object recognition, takes video files as its source data, and features different modes to recognize different types of lightning.
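The detection pipeline itself isn’t spelled out here, but the core trick of pulling bright, fast transients out of video is easy to sketch with OpenCV. The Python snippet below is our own rough illustration, not [Liam]’s code, and the brightness threshold is an arbitrary placeholder:

```python
import cv2

def extract_lightning_frames(video_path, brightness_jump=40.0):
    """Yield (frame_index, frame) for frames whose mean brightness jumps
    sharply relative to the previous frame, a crude proxy for a flash."""
    cap = cv2.VideoCapture(video_path)
    prev_mean, index = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mean = float(gray.mean())
        if prev_mean is not None and mean - prev_mean > brightness_jump:
            yield index, frame
        prev_mean, index = mean, index + 1
    cap.release()

# Save every candidate frame to disk for later review
for i, frame in extract_lightning_frames("storm.mp4"):
    cv2.imwrite(f"lightning_{i:06d}.png", frame)
```

A real tool would also want some debouncing so a single multi-frame flash doesn’t dump dozens of near-identical stills.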

The software is free and open source, and releases are available for both Windows and Linux. So far, [Liam] has been able to capture all kinds of atmospheric electrical phenomena with it, including lightning, red sprites, and elves. We don’t see too many projects involving lightning around here, partly because humans can only generate a fraction of the voltage potential needed for the average lightning strike.

Food Irradiation Detector Doesn’t Use Banana For Scale

How do the potatoes in that sack keep from sprouting on their long trip from the field to the produce section? Why don’t the apples spoil? To an extent, the answer lies in varying amounts of irradiation. Though it sounds awful, irradiation reduces microbial contamination, which improves shelf life. Most people can choose to take it or leave it, but in some countries, they aren’t overly concerned about the irradiation dosages found in, say, animal feed. So where does that leave non-vegetarians?

If that line of thinking makes you want to Hulk out, you’re not alone. [kutluhan_aktar] decided to build an IoT food irradiation detector in an effort to help small businesses make educated choices about the feed they give to their animals. The device predicts irradiation dosage level using a combination of the food’s weight, color, and emitted ionizing radiation after being exposed to sunlight for an appreciable amount of time. Using this information, [kutluhan_aktar] trained a neural network running on a Beetle ESP32-C3 to detect the dosage and display relevant info on a transparent OLED screen. Primarily, the device predicts whether the dosage falls into the Regulated, Unsafe, or just plain Hazardous category.
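The write-up doesn’t reproduce the training code here, but the shape of the problem (three input features mapped to three dosage categories) is simple to sketch. The snippet below is a hypothetical stand-in using scikit-learn with made-up numbers, not the network that actually runs on the ESP32-C3:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical labelled samples: [weight_g, color_index, radiation_cpm]
X = np.array([
    [250.0, 0.62, 18.0],
    [245.0, 0.55, 42.0],
    [260.0, 0.48, 95.0],
])
y = np.array(["Regulated", "Unsafe", "Hazardous"])

# Small multilayer perceptron as a stand-in for the on-device network
clf = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=2000, random_state=0)
clf.fit(X, y)

sample = np.array([[248.0, 0.50, 60.0]])
print(clf.predict(sample))  # e.g. ['Unsafe']
```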

[kutluhan_aktar] lets this baby loose on some uncooked pasta in the short demo video after the break. The macaroni is spread across a load cell to detect the weight, while [kutluhan_aktar] uses a handheld sensor to determine the color.

This isn’t the first time we’ve seen AI on the Hackaday menu. Remember when we tried those AI-created recipes?

Machine Learning Does Its Civic Duty By Spotting Roadside Litter

If there’s one thing that never seems to suffer from supply chain problems, it’s litter. It’s everywhere, easy to spot and — you’d think — pick up. Sadly, most of us seem to treat litter as somebody else’s problem, but with something like this machine vision litter mapper, you can at least be part of the solution.

For the civic-minded [Nathaniel Felleke], the litter problem in his native San Diego was getting to be too much. He reasoned that a map of where the trash is located could help municipal crews with cleanup, so he set about building a system to search for trash automatically. Using Edge Impulse and a collection of roadside images captured from a variety of sources, he built a model for recognizing trash. To find the garbage, a window-mounted webcam captures images as the car drives around, and a Raspberry Pi 4 runs the model on each frame. When roadside litter is found, the Pi uses a Blues Wireless Notecard to send the GPS location of the rubbish to a cloud database via the Notecard’s cellular modem.
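None of [Nathaniel]’s source is reproduced here, but the main loop is conceptually simple: grab a frame, run the classifier, and if litter is found, queue a note with the current GPS fix. The Python sketch below is our own guess at that loop; run_litter_model() is a hypothetical stand-in for the Edge Impulse classifier, and the Notecard calls follow the general Transaction style of the note-python library:

```python
import cv2
import serial
import notecard  # Blues Wireless note-python library

def run_litter_model(frame):
    """Hypothetical stand-in for the Edge Impulse litter classifier;
    replace with real inference. Returns True when trash is detected."""
    return False  # placeholder

port = serial.Serial("/dev/ttyACM0", 9600)  # assumed Notecard serial port
card = notecard.OpenSerial(port)

cap = cv2.VideoCapture(0)  # car-window webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if run_litter_model(frame):
        # Ask the Notecard for its current GPS fix, then queue an outbound note
        loc = card.Transaction({"req": "card.location"})
        card.Transaction({
            "req": "note.add",
            "file": "litter.qo",
            "body": {"lat": loc.get("lat"), "lon": loc.get("lon")},
        })
```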

Cruising around the streets of San Diego, [Nathaniel]’s system builds up a database of garbage hotspots. From there, it’s pretty straightforward to pull the data and overlay it on Google Maps to create a heatmap of where the garbage lies. The video below shows his system in action.
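[Nathaniel] overlays his data on Google Maps; if you wanted to do something similar without touching the Maps API, the folium library will render a heatmap from a plain list of coordinates. The points below are made up:

```python
import folium
from folium.plugins import HeatMap

# Hypothetical (lat, lon) points pulled from the cloud database
hotspots = [
    (32.7157, -117.1611),
    (32.7160, -117.1605),
    (32.7200, -117.1650),
]

m = folium.Map(location=[32.7157, -117.1611], zoom_start=13)
HeatMap(hotspots).add_to(m)
m.save("litter_heatmap.html")
```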

Yes, driving around a personal vehicle specifically to spot litter is just adding more waste to the mix, but you’d imagine putting something like this on municipal vehicles that are already driving around cities anyway. Either way, we picked up some neat tips, especially those wireless IoT cards. We’ve seen them used before, but [Nathaniel]’s project gives us a path forward on some ideas we’ve had kicking around for a while.

Continue reading “Machine Learning Does Its Civic Duty By Spotting Roadside Litter”

The Ethics Of When Machine Learning Gets Weird: Deadbots

Everyone knows what a chatbot is, but how about a deadbot? A deadbot is a chatbot whose training data — that which shapes how and what it communicates — is data based on a deceased person. Now let’s consider the case of a fellow named Joshua Barbeau, who created a chatbot to simulate conversation with his deceased fiancee. Add to this the fact that OpenAI, providers of the GPT-3 API that ultimately powered the project, had a problem with this as their terms explicitly forbid use of their API for (among other things) “amorous” purposes.

[Sara Suárez-Gonzalo], a postdoctoral researcher, observed that this story’s facts were getting covered well enough, but nobody was looking at it from any other perspective. We all certainly have ideas about what flavor of right or wrong saturates the different elements of the case, but can we explain exactly why it would be either good or bad to develop a deadbot?

That’s precisely what [Sara] set out to do. Her writeup is a fascinating and nuanced read that provides concrete guidance on the topic. Is harm possible? How does consent figure into something like this? Who takes responsibility for bad outcomes? If you’re at all interested in these kinds of questions, take the time to check out her article.

[Sara] makes the case that creating a deadbot could be done ethically, under certain conditions. Briefly, the key points are that both the person being mimicked and the person developing and interacting with the deadbot should have given their consent, complete with as detailed a description as possible of the scope, design, and intended uses of the system. (Such a statement is important because machine learning in general changes rapidly. What if the system or its capabilities someday no longer resemble what one originally imagined?) Responsibility for any potential negative outcomes should be shared by those who develop the system and those who profit from it.

[Sara] points out that this case is a perfect example of why the ethics of machine learning really do matter, and without attention being paid to such things, we can expect awkward problems to continue to crop up.

Automatic Water Turret Keeps Grass Watered

Summer is rapidly approaching (at least for those of us living in the Northern Hemisphere), and if you have a lawn to maintain at home, now is the time to be thinking about irrigation. Plenty of people have built-in sprinkler systems to care for their turf, but those are little (if any) fun for children who might like to play in the spray. This sprinkler solves that problem, functioning as an automatic water gun turret that targets anyone passing by.

This project was less a specific sprinkler build and more a way to reuse some Khadas VIM3 single-board computers that the project’s creator, [Neil], wanted to put to work on something other than mining crypto. The boards have a neural processing unit (NPU) in them, which makes them ideal for computer vision projects like this. The camera input is fed into the NPU, which then directs the turret to the correct position using yaw and pitch drivers. It’s built mostly out of aluminum extrusion and 3D printed parts, and the project’s page goes into great detail about all of the parts needed if you are interested in replicating the build.
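The write-up covers the mechanics in depth; the aiming math is the part that’s easy to sketch in a few lines. Assuming the camera rides on the turret and its field of view is known, the offset of a detection’s bounding-box centre from the image centre maps roughly to yaw and pitch corrections. The function below is our own illustration with placeholder field-of-view numbers, not [Neil]’s code:

```python
def aim_correction(bbox, frame_w, frame_h, fov_h_deg=62.0, fov_v_deg=49.0):
    """Map a detection bounding box (x, y, w, h) in pixels to approximate
    yaw/pitch corrections in degrees, assuming the camera looks along the
    turret's bore axis."""
    x, y, w, h = bbox
    cx, cy = x + w / 2, y + h / 2
    dx = (cx - frame_w / 2) / frame_w   # normalised horizontal offset
    dy = (cy - frame_h / 2) / frame_h   # normalised vertical offset
    yaw = dx * fov_h_deg                # positive -> turn right
    pitch = -dy * fov_v_deg             # positive -> tilt up (image y grows downward)
    return yaw, pitch

# A target detected left of centre in a 1280x720 frame
print(aim_correction((200, 300, 100, 200), 1280, 720))
```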

[Neil] is also actively working on improving the project, especially around the turret’s ability to identify and track objects using OpenCV. We certainly look forward to more versions of this build in the future, and in the meantime be sure to check out some other automated sprinkler builds we’ve seen which solve different problems.

Continue reading “Automatic Water Turret Keeps Grass Watered”

Point Out Pup’s Packages With This Poop-Shooting Laser

When you’re lucky enough to have a dog in your life, you tend to overlook some of the more one-sided aspects of the relationship. While you are severely restrained with regard to where you eliminate your waste, your furry friend is free to roam the yard and dispense his or her nuggets pretty much at will, and fully expect you to follow along on cleanup duty. See what we did there?

And so dog people sometimes rebel at this lopsided power structure, by leaving the cleanup till later — often much, much later, when locating the offending piles can be a bit difficult. So naturally, we now have this poop-shooting laser turret to helpfully guide you through your backyard cleanup sessions. It comes to us from [Caleb Olson], who leveraged his recent poop-posture monitor as the source of data for where exactly in the yard each deposit is located. To point them out, he attached a laser pointer to a cheap robot arm, and used OpenCV to help line up the bright green spot on each poop.
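How the closed loop works isn’t spelled out, but a typical OpenCV approach is to threshold for the bright green dot, find its centroid, and nudge the arm until the spot lands on the target. The sketch below is our own guess at that loop; the HSV thresholds and the move_arm() helper are hypothetical:

```python
import cv2

def find_green_spot(frame):
    """Return the (x, y) centroid of the bright green laser blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (45, 100, 200), (85, 255, 255))  # bright green
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def steer_laser(frame, target_xy, move_arm, gain=0.01):
    """One servo-loop step: move the arm proportionally to the error between
    the laser spot and the target location in image coordinates."""
    spot = find_green_spot(frame)
    if spot is None:
        return
    move_arm(gain * (target_xy[0] - spot[0]),
             gain * (target_xy[1] - spot[1]))  # hypothetical arm interface
```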

But wait, there’s more. [Caleb]’s code also optimizes his poop patrol route, minimizing the amount of pesky walking he has to do to visit each pile. And, the same pose estimation algorithm that watches the adorable [Twinkie] make her deposits keeps track of which ones [Caleb] stoops by, removing each from the worklist in turn. So now instead of having a dog control his life, he’s got a dog and a computer running the show. Perfect.
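We don’t know how [Caleb]’s route optimizer actually works, but even a greedy nearest-neighbour ordering of the piles cuts out most of the backtracking. A minimal sketch:

```python
import math

def patrol_order(piles, start=(0.0, 0.0)):
    """Greedy nearest-neighbour ordering of (x, y) pile locations,
    starting from the given position. Not optimal, but simple."""
    remaining, route, here = list(piles), [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(here, p))
        remaining.remove(nxt)
        route.append(nxt)
        here = nxt
    return route

print(patrol_order([(3, 4), (1, 1), (5, 0)]))  # [(1, 1), (3, 4), (5, 0)]
```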

We joke, because poop, but really, this is a pretty neat exercise in machine learning. It does seem like the robot arm was a bit overkill, though — we’d have thought a simple two-servo turret would have been pretty easy to whip up.

Continue reading “Point Out Pup’s Packages With This Poop-Shooting Laser”

TapType: AI-Assisted Hand Motion Tracking Using Only Accelerometers

The team from the Sensing, Interaction & Perception Lab at ETH Zürich, Switzerland has come up with TapType, an interesting text input method that relies purely on a pair of wrist-worn devices that sense acceleration values when the wearer types on any old surface. By feeding the acceleration values from a pair of sensors on each wrist into a Bayesian-inference classification neural network, which in turn feeds a traditional probabilistic language model (predictive text, to you and me), the resulting text can be input at up to 19 WPM with 0.6% average error. Expert TapTypers report speeds of up to 25 WPM, which could be quite usable.
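The paper has the real model, but the flavour of the classifier-plus-language-model combination is easy to show: for each candidate character, multiply (or, in log space, add) the probability the motion classifier assigns to it with the probability the language model gives it following the text typed so far, then pick the winner. A toy illustration with made-up numbers:

```python
import math

def decode_tap(tap_likelihoods, lm_probs):
    """Combine per-character likelihoods from the tap classifier with
    language-model probabilities for the next character (both dicts of
    char -> probability) and return the most likely character."""
    chars = set(tap_likelihoods) & set(lm_probs)
    return max(chars, key=lambda c: math.log(tap_likelihoods[c]) + math.log(lm_probs[c]))

# The accelerometer classifier is unsure between 'e' and 'r', but after
# "th" the language model strongly prefers 'e'.
tap = {"e": 0.45, "r": 0.40, "t": 0.15}
lm = {"e": 0.70, "r": 0.05, "t": 0.02}
print(decode_tap(tap, lm))  # -> 'e'
```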

Details are a little scarce (it is a research project, after all), but the actual hardware seems simple enough, based around the Dialog DA14695, which is a nice Cortex-M33 based Bluetooth Low Energy SoC. This is an interesting device in its own right, containing a “sensor node controller” block that is capable of handling sensor devices connected to its interfaces, independent of the main CPU. The sensor device used is the Bosch BMA456 3-axis accelerometer, which is notable for its low power consumption of a mere 150 μA.

Users can “type” on any convenient surface.

The wristband units themselves appear to be a combination of a main PCB hosting the BLE chip and supporting circuit, connected to a flex PCB with a pair of the accelerometer devices at each end. The assembly was then slipped into a flexible wristband, likely constructed from 3D printed TPU, but we’re just guessing really, as the progression from the first embedded platform to the wearable prototype is unclear.

What is clear is that the wristband itself is just a dumb data-streaming device, and all the clever processing is performed on the connected device. Training of the system (and subsequent selection of the most accurate classifier architecture) was performed by recording volunteers “typing” on an A3 sized keyboard image, with finger movements tracked with a motion tracking camera, whilst recording the acceleration data streams from both wrists. There are a few more details in the published paper for those interested in digging into this research a little deeper.

The eagle-eyed may remember something similar from last year, from the same team, which correlated bone-conduction sensing with VR type hand tracking to generate input events inside a VR environment.

Continue reading “TapType: AI-Assisted Hand Motion Tracking Using Only Accelerometers”