Render Yourself Invisible To AI With This Adversarial Sweater Of Doom

Ugly sweater season is rapidly approaching, at least here in the Northern Hemisphere. We’ve always been a bit baffled by the tradition of paying top dollar for a loud, obnoxious sweater that gets worn to exactly one social event a year. We don’t judge, of course, but that’s not to say we wouldn’t look a little more favorably on someone’s fashion choice if it were more like this AI-defeating adversarial ugly sweater.

The idea behind this research from the University of Maryland is not, of course, to inform fashion trends, nor is it to create a practical invisibility cloak. It’s really to probe machine learning systems for vulnerabilities by making small changes to the input while watching for changes in the output. In this case, the ML system was a YOLO-based vision system that has little trouble finding humans in an arbitrary image. The adversarial pattern was generated using a large set of training images, some of which contain the objects of interest — in this case, humans. Each time a human is detected, the current candidate pattern is rendered over the detection, and the image is run through the detector again to see how much the pattern lowers the person’s confidence score. Over many such iterations, the pattern improves to the point where it mostly prevents humans from being recognized. Much more detail is available in the research paper (PDF) if you want to dig into the guts of this.
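For a feel of how such a pattern gets optimized, here’s a minimal sketch of that loop in PyTorch. Everything detector-related is a stand-in (person_score() wraps a toy convolution rather than a real YOLO network), but the structure is the one described above: paste the patch over each detected person, score the image, and nudge the patch to push the score down.

```python
import torch

# Stand-in for the real detector: any differentiable model that returns a
# per-image "person" confidence would slot in here in place of this toy conv.
detector = torch.nn.Conv2d(3, 1, kernel_size=5)
for p in detector.parameters():
    p.requires_grad_(False)

def person_score(images):
    return torch.sigmoid(detector(images)).amax(dim=(1, 2, 3))

patch = torch.rand(3, 32, 32, requires_grad=True)  # the adversarial pattern
optimizer = torch.optim.Adam([patch], lr=0.01)

def apply_patch(image, y, x):
    """Render the current patch over the region where a person was found."""
    patched = image.clone()
    patched[:, y:y + 32, x:x + 32] = patch
    return patched

# Training images plus the locations where "people" were detected in them
# (random stand-in data here; the real attack uses a large labeled set).
images = torch.rand(16, 3, 128, 128)
locations = [(48, 48)] * 16

for _ in range(200):
    batch = torch.stack([apply_patch(img, y, x)
                         for img, (y, x) in zip(images, locations)])
    loss = person_score(batch).mean()  # detector confidence on patched images
    optimizer.zero_grad()
    loss.backward()                    # lower the person score...
    optimizer.step()
    patch.data.clamp_(0, 1)            # ...while keeping pixel values printable
```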

The pattern, which looks a little like a bad impressionist painting of people buying pumpkins at a market and bears some resemblance to one we’ve seen before in similar work, is said to hold up better across different viewing angles than previous attempts. It also makes a spiffy pullover, especially if you’d rather blend in at that Christmas party.

Laser Zaps Cockroaches Over One Meter

You may have missed this month’s issue of Oriental Insects, in which a project by [Ildar Rakhmatulin] of Heriot-Watt University in Edinburgh caught our attention. [Ildar] led a team of researchers in the development of an AI-controlled laser that neutralizes moving cockroaches at distances of up to 1.2 meters. Noting the various problems with using chemical pesticides for pest control, his team sought out a non-conventional approach.

The heart of the pest controller is a Jetson Nano, which runs OpenCV and YOLO object detection to find the cockroaches and drives galvanometers to steer the laser beam onto them. Three different lasers were used for testing, allowing the team to evaluate a range of wavelengths, power levels, and spot sizes. Unsurprisingly, the higher-power 1.6 W laser proved the most effective and the quickest.
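For a rough idea of what that pipeline looks like, here’s a hedged sketch in Python. The model filenames and the send_to_galvos() call are placeholders rather than anything from the actual project, and the pixel-to-angle mapping assumes the camera and galvos are roughly aligned; the real rig would need a proper calibration between the two.

```python
import cv2

# Detect-then-steer loop. Filenames and send_to_galvos() are placeholders.
net = cv2.dnn_DetectionModel(
    cv2.dnn.readNetFromDarknet("roach.cfg", "roach.weights"))
net.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

def pixel_to_galvo(cx, cy, width, height, span_deg=20.0):
    """Map an image coordinate to X/Y galvo deflection angles in degrees,
    assuming a crude linear camera-to-galvo correspondence."""
    return ((cx / width - 0.5) * span_deg,
            (cy / height - 0.5) * span_deg)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, scores, boxes = net.detect(frame, confThreshold=0.5)
    for (x, y, w, h) in boxes:
        gx, gy = pixel_to_galvo(x + w / 2, y + h / 2,
                                frame.shape[1], frame.shape[0])
        # send_to_galvos(gx, gy)  # placeholder: aim, then pulse the laser
```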

The project is on GitHub (here) and the cockroach machine learning image set is available here. But as [Ildar] points out in the conclusion of the report, this is dangerous: it’s suitable for academic research, but it’s not quite ready for general use, as it lacks any safety features. The report is also full of cockroach trivia, such as the fact that the average cockroach runs at 4.8 km/h, and that they run much faster when being zapped. If you want to experiment with cockroaches yourself, a link is provided to a pet store that sells the German cockroach, Blattella germanica, the target of this report.

If this project sounds familiar, that’s because it’s an improvement on a previous project we wrote about last year, which used similar techniques to zap mosquitoes.

Continue reading “Laser Zaps Cockroaches Over One Meter”

Teaching A Machine To Be Worse At A Video Game Than You Are

Is it really cheating if the aimbot you’ve built plays the game worse than you do?

We vote no, and while we take a dim view of cheating in general, there are still some interesting hacks in this AI-powered bot for Valorant, a team-based first-person shooter with plenty of action and a Counter-Strike vibe. As [River] points out, most cheat bots have direct access to the memory of the computer that’s playing the game, which gives them an unfair advantage over human players, who have to visually process the game field and make their moves in meatspace. To make the Valorant bot more of a challenge, he decided to feed video of the game from one computer to another over an HDMI-to-USB capture device.

The second machine runs a YOLOv5 model trained on two hours of gameplay, enough to identify friend from foe — most of the time. Navigation around the map was done by analyzing the game’s on-screen minimap with OpenCV and doing some rudimentary path-finding. Actually controlling the player on the game machine was particularly hacky; rather than rely on an API to send keyboard sequences, [River] used a wireless mouse dongle on the game machine driven by a USB transmitter on the second machine.
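The capture-side inference might look something like this sketch, assuming the dongle enumerates as an ordinary webcam and using the stock ultralytics YOLOv5 hub loader; the valorant.pt weights file is a hypothetical stand-in for [River]’s trained model.

```python
import cv2
import torch

# The HDMI-to-USB dongle shows up as a normal camera device; "valorant.pt"
# is a hypothetical weights file, not the project's actual model.
model = torch.hub.load("ultralytics/yolov5", "custom", path="valorant.pt")
cap = cv2.VideoCapture(1)  # 0 is usually the built-in webcam; 1 the dongle

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        if conf < 0.5:
            continue
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # aim point for the mouse hack
        # move_mouse_toward(cx, cy)  # placeholder: the wireless-dongle trick
```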

The results are iffy, to say the least. The system tends to get the player stuck in corners, and it doesn’t recognize enemies that pop up at close range. The former is a function of the low-res minimap, while the latter comes down to the training data set — most human players engage enemies at a distance, so there’s a dearth of “bad breath range” encounters to train on. Still, we’re impressed that it’s possible to train a machine to play a complex FPS game at all, let alone this well.


Hackaday Links: December 29, 2019

The retrocomputing crowd will go to great lengths to recreate the computers of yesteryear, and no matter which species of computer is being restored, getting it just right is a badge of honor in the community. The case and keyboard obviously play a big part in that look, so when a crowdfunding campaign to create new keycaps for the C64 was announced, Commodore fans jumped to fund it. Sadly, more than four years later, the promised keycaps haven’t been delivered. One disappointed backer, [Jim Drew], decided he was sick of waiting, so he delved into the world of keycap injection molding and started his own competing campaign. [Jim] details his adventures in the campaign write-up, which makes for good reading even if you’re not into Commodore refurbishment. Here’s hoping [Jim] has better luck than the competition did.

Looking for anonymity in our increasingly surveilled world? You’re not alone, and in fact, we predict facial recognition spoofing products and methods will be a growth industry in the new decade. Aside from the obvious – and often illegal – approach of wearing a mask that blocks most of the features machine learning algorithms use to quantify your face, one now has another option, in the form of a colorful pattern that makes you invisible to the YOLOv2 algorithm. The pattern, which looks like a soft-focus crowd scene rendered in Mardi Gras colors, won’t make the algorithm think you’re someone else, but it will prevent you from being classified as a person. It won’t work with any other AI algorithm, but it’s still an interesting phenomenon.

We saw a great hack come in this week about using an RTL-SDR to track down a water leak. [Clayton]’s water bill suddenly skyrocketed, and he wanted to track down the source. Luckily, his water meter uses the Encoder Receiver Transmitter (ERT) protocol on the 900 MHz ISM band to report his usage, so he threw an SDR dongle and rtlamr at the problem. After logging his data, massaging it a bit with some Python code, and graphing water consumption over time, he found that water was being used even when nobody was home. That helped him find the culprit – leaky flap valves in the toilets causing a slow drip that ran up the bill. There were probably other ways to attack the problem, but we like this approach just fine.
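The logging-and-graphing step might look something like this sketch. It assumes rtlamr was left running with CSV output (something like rtlamr -format=csv > meter.csv) and that the last field of each row is the meter’s cumulative consumption counter; column layouts vary by meter type, so check your own capture before trusting the indices.

```python
import csv
from datetime import datetime

import matplotlib.pyplot as plt

# Assumes the last CSV field is the cumulative consumption counter and the
# first is a timestamp; verify against your own rtlamr capture first.
times, readings = [], []
with open("meter.csv") as f:
    for row in csv.reader(f):
        times.append(datetime.fromisoformat(row[0][:19]))  # trim sub-seconds
        readings.append(int(row[-1]))

# The counter only ever increases, so successive differences give usage per
# reporting interval; a nonzero floor in the middle of the night is a leak.
usage = [b - a for a, b in zip(readings, readings[1:])]

plt.plot(times[1:], usage)
plt.xlabel("time")
plt.ylabel("consumption per interval")
plt.show()
```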

Are your flex PCBs making you cry? Friend of Hackaday Drew Fustini sent us a tip on teardrop pads to reduce the mechanical stress on traces when the board flexes. The trouble is that KiCad can’t natively create teardrop pads. Thankfully an action plugin makes teardrops a snap. Drew goes into a bit of detail on how the plugin works and shows the results of some test PCBs he made with them. It’s a nice trick to keep in mind for your flexible design work.

Keep Pesky Cats At Bay With A Machine-Learning Turret Gun

It doesn’t take long after getting a cat in your life to learn who’s really in charge. Cats do pretty much what they want to do, when they want to do it, and for exactly as long as it suits them. Any correlation with your wants and needs is strictly coincidental, and subject to change without notice, because cats.

[Alvaro Ferrán Cifuentes] almost learned this the hard way, when his cat developed a habit of exploring the countertops in his kitchen and nearly turned on the cooktop while he was away. To modulate this behavior, [Alvaro] built this AI Nerf turret gun. The business end of the system is just a Nerf gun mounted on a pan-tilt base made from 3D-printed parts and a pair of hobby servos. A webcam rides atop the gun and feeds into a PC running software that implements the YOLOv3 object detection algorithm. The program finds the cat, tracks its centroid, and swivels the gun to match it. If the cat stays in the no-go zone above the countertop for three seconds, he gets a dart fired in his general direction. [Alvaro] found that the noise of the gun tracking him was enough to send the cat scampering, proving that cats are capable of learning as long as it suits them.
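The “three seconds in the no-go zone” rule boils down to a dwell timer on the tracked centroid. Here’s a minimal sketch of that logic; detect_cat(), aim_at(), and fire() are stubs standing in for the detection, servo, and trigger code, and the zone coordinates are made up.

```python
import time

def detect_cat():
    """Stand-in for the YOLO step: return the cat's (x, y) centroid or None."""
    return None

def aim_at(cx, cy):
    """Stand-in: convert the centroid to pan/tilt angles for the servos."""

def fire():
    """Stand-in: trigger the dart gun."""

NO_GO = (100, 0, 540, 200)   # x1, y1, x2, y2 of the countertop (made up)
DWELL_SECONDS = 3.0

def in_zone(cx, cy):
    x1, y1, x2, y2 = NO_GO
    return x1 <= cx <= x2 and y1 <= cy <= y2

entered = None
while True:
    centroid = detect_cat()
    if centroid and in_zone(*centroid):
        entered = entered or time.time()   # start the clock on first entry
        aim_at(*centroid)                  # keep the gun tracking the cat
        if time.time() - entered >= DWELL_SECONDS:
            fire()
            entered = None                 # one dart per offense
    else:
        entered = None                     # left the zone; reset the timer
    time.sleep(0.05)
```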

We like this build and appreciate any attempt to bring order to the chaos a cat can bring to a household. It also puts us in mind of [Matthias Wandel]’s recent attempt to keep warm in his shop, although his detection algorithm was much simpler.

Continue reading “Keep Pesky Cats At Bay With A Machine-Learning Turret Gun”

Let The Cards Fall Where They May, With A Robotic Rain Man

Finally, a useful application for machine vision! Forget all that self-driving nonsense and facial recognition stuff – we’ve got an AI that can count cards at the blackjack table.

The system that [Edje Electronics] has built, dubbed “Rain Man 2.0” in homage to the character [Dustin Hoffman] played in the 1988 film, aims to tilt the odds at the blackjack table away from the house by counting cards. He explains one such strategy, the hi-lo count, in the video below, which Rain Man 2.0 implements with the help of a webcam and YOLO for real-time object detection. Cards are detected in any orientation based on their suit and rank thanks to an extensive training set of card images, which [Edje] generated synthetically via some trickery with OpenCV. A script automated the process and yielded a rich training set of 50,000 images for YOLO, and a Python program wraps the trained model in a real-time card-counting application.
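The hi-lo count itself fits in a few lines of Python. This sketch is just the counting arithmetic, with a hand-written list of ranks standing in for the YOLO detections.

```python
# The hi-lo count: low cards (2-6) are +1, tens and aces are -1, and 7-9
# are neutral. A positive running count means the shoe is rich in high
# cards, which favors the player.
HI_LO = {rank: +1 for rank in ("2", "3", "4", "5", "6")}
HI_LO.update({rank: -1 for rank in ("10", "J", "Q", "K", "A")})

def running_count(seen_ranks):
    """Sum hi-lo values over every card rank seen so far this shoe."""
    return sum(HI_LO.get(rank, 0) for rank in seen_ranks)

# In the real system the ranks would come from the detector; here we just
# feed in a hand-written sequence: +1 -1 +0 +1 -1 = 0.
print(running_count(["2", "K", "7", "5", "A"]))
```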

Rain Man 2.0 is an improvement over [Edje]’s earlier TensorFlow card counter, but it still has limitations. It can’t count into a six-deck shoe as the fictional [Rain Man] could, at least not yet. And even though cheater’s justice probably isn’t all cattle prods and hammers these days, the hardware needed for this hack isn’t likely to slip past casino security. So [Edje] has wisely limited its use to practicing his card-counting skills. Eventually, he wants to turn Rain Man into a complete AI blackjack player, and to explore its potential for other games and for helping the visually impaired.

Continue reading “Let The Cards Fall Where They May, With A Robotic Rain Man”

Project Shows How To Use Machine Learning To Detect Pedestrians

Most people are familiar with the idea that machine learning can be used to detect things like objects or people, but anyone who’s not clear on how that process actually works should check out [Kurokesu]’s example project for detecting pedestrians. It goes into detail on exactly what software is used, how it’s configured, and how to train it with a dataset.

The application uses a USB camera, and the back-end work is done with Darknet, an open source framework for neural networks. Running on that framework is the YOLO (You Only Look Once) real-time object detection system. To get useful results, the system must be trained on large amounts of sample data. [Kurokesu] explains that while pre-trained networks can be used, it’s still necessary to fine-tune the system with a dataset that more closely models the intended application. Training is itself a bit of a balancing act: a system that has been overly trained on a model dataset (or trained on too small a dataset) will suffer from overfitting, a condition in which it ends up too picky and unable to usefully generalize. In terms of pedestrian detection, this results in false negatives — pedestrians that don’t get flagged because the system has too strict an idea of what a pedestrian should look like.
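One concrete way to spot that failure mode is to compare recall on the training images against recall on held-out images; a large gap means the model has memorized its training set rather than generalized. Here’s a sketch with the actual Darknet inference call stubbed out.

```python
def detect(image_path):
    """Stand-in for the Darknet/YOLO inference call: return predicted
    pedestrian boxes as (x1, y1, x2, y2) tuples."""
    return []

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def recall(samples):
    """Fraction of labeled pedestrians the detector actually finds."""
    found = labeled = 0
    for image_path, truth_boxes in samples:
        predictions = detect(image_path)
        labeled += len(truth_boxes)
        found += sum(any(iou(t, p) > 0.5 for p in predictions)
                     for t in truth_boxes)
    return found / max(labeled, 1)

# samples are (image_path, [ground-truth boxes]) pairs; an overfit model
# shows high recall on its training samples but poor recall on held-out ones.
```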

[Kurokesu]’s walkthrough on pedestrian detection is great, but for those interested in taking a step further back and rolling their own projects, this fork of Darknet contains YOLO for Linux and Windows and includes practical notes and guides on installing, using, and training from a more general perspective. Interested in learning more about machine learning basics? Don’t forget Google has a free online crash course to get you up to speed.