Keep Pesky Cats At Bay With A Machine-Learning Turret Gun

It doesn’t take long after getting a cat in your life to learn who’s really in charge. Cats do pretty much what they want to do, when they want to do it, and for exactly as long as it suits them. Any correlation with your wants and needs is strictly coincidental, and subject to change without notice, because cats.

[Alvaro Ferrán Cifuentes] almost learned this the hard way, when his cat developed a habit of exploring the countertops in his kitchen and nearly turned on the cooktop while he was away. To modulate this behavior, [Alvaro] built this AI Nerf turret gun. The business end of the system is just a gun mounted on a pan-tilt base made from 3D-printed parts and a pair of hobby servos. A webcam rides atop the gun and feeds into a PC running software that implements the YOLOv3 localization algorithm. The program finds the cat, tracks his centroid, and swivels the gun to match it. If the cat stays in the no-go zone above the countertop for three seconds, he gets a dart in his general direction. [Alvaro] found that the noise of the gun tracking him was enough to send the cat scampering, proving that cats are capable of learning as long as it suits them.
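For the curious, the detect-track-fire loop is simple enough to sketch. Below is a minimal Python example using OpenCV’s DNN module with stock YOLOv3 COCO weights; the servo and trigger helpers are stand-ins for whatever link your own pan-tilt hardware uses, and the no-go-zone geometry is made up. Think of it as a starting point, not [Alvaro]’s actual code.

```python
# Detect a cat with YOLOv3, track its centroid, fire after a 3 s dwell.
# Assumes stock yolov3.cfg/yolov3.weights; servo/trigger helpers are stubs.
import time
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
out_layers = net.getUnconnectedOutLayersNames()
CAT_CLASS = 15        # "cat" in the 80-class COCO label list
NO_GO_Y = 0.4         # normalized height of the countertop line (made up)
DWELL_S = 3.0         # seconds in the zone before firing

def set_servos(pan_deg, tilt_deg):  # stand-in: drive your pan/tilt servos
    pass

def fire():                         # stand-in: pulse the trigger mechanism
    pass

def cat_centroid(frame):
    """Return the normalized (cx, cy) of the highest-confidence cat, or None."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    best = None
    for out in net.forward(out_layers):
        for det in out:             # det = [cx, cy, w, h, objness, scores...]
            scores = det[5:]
            if np.argmax(scores) == CAT_CLASS and scores[CAT_CLASS] > 0.5:
                if best is None or scores[CAT_CLASS] > best[0]:
                    best = (scores[CAT_CLASS], det[0], det[1])
    return None if best is None else (best[1], best[2])

cap = cv2.VideoCapture(0)
entered = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cat = cat_centroid(frame)
    if cat is None:
        entered = None
        continue
    cx, cy = cat
    set_servos(cx * 180, cy * 180)  # crude linear pixel-to-angle mapping
    if cy < NO_GO_Y:                # low y = high in frame = on the counter
        entered = entered or time.monotonic()
        if time.monotonic() - entered > DWELL_S:
            fire()
            entered = None
    else:
        entered = None
```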

We like this build and appreciate any attempt to bring order to the chaos a cat can bring to a household. It also puts us in mind of [Matthias Wandel]’s recent attempt to keep warm in his shop, although his detection algorithm was much simpler.


AI Recognizes And Locks Out Murder Cats

Anyone with a cat knows that the little purring ball of fluff in your lap is one tiny step away from turning into a bloodthirsty serial killer. Give kitty half a chance and something small and defenseless is going to meet a slow, painful end. And your little killer is as likely as not to show off its handiwork by bringing home its victim – “Look what I did for you, human! Are you not proud?”

As useful as a murder-cat can be, dragging the bodies home for you to deal with can be – inconvenient. To thwart his adorable serial killer [Metric], Amazon engineer [Ben Hamm] turned to an AI system to lock his prey-laden cat out of the house. [Metric] comes and goes as he pleases through a cat flap which, thanks to a solenoid and an Arduino, is now lockable. The decision to block entrance to [Metric] is based on an AWS DeepLens AI camera, which watches the approach to the cat flap. [Ben] trained three models: one to determine whether [Metric] is in the scene, one to determine whether he’s coming or going, and one to see if he’s alone or accompanied by a lifeless friend, in which case he’s locked out for 15 minutes and an automatic donation is made to the Audubon Society – that last bit is pure genius. The video below is a brief but hilarious summary of the project, delivered to a Seattle audience that seems quite amused by the whole thing.
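The clever part is the decision cascade, which is easy enough to sketch in Python. The classifier stubs below stand in for [Ben]’s three DeepLens models, and the serial LOCK/UNLOCK protocol to the Arduino is our own invention for illustration; nothing here is his actual code.

```python
# Chain three verdicts: cat in frame? entering? carrying prey?
# Only a prey-laden entry trips the lockout (and the donation).
import time
import serial  # pyserial; assumed serial link to the flap's Arduino

arduino = serial.Serial("/dev/ttyUSB0", 9600)
LOCKOUT_S = 15 * 60            # lock the flap for 15 minutes

def sees_metric(frame):        # stand-in for model 1: is the cat in frame?
    return False

def is_entering(frame):        # stand-in for model 2: coming or going?
    return False

def has_prey(frame):           # stand-in for model 3: alone or plus-one?
    return False

def donate_to_audubon():       # stand-in for the automatic-donation hook
    pass

def handle_frame(frame):
    if sees_metric(frame) and is_entering(frame) and has_prey(frame):
        arduino.write(b"LOCK\n")    # hypothetical firmware command
        donate_to_audubon()
        time.sleep(LOCKOUT_S)
        arduino.write(b"UNLOCK\n")
```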

So your cat isn’t quite the murder fiend that [Metric] is? An RFID-based cat door might suit your needs better.


Blisteringly Fast Machine Learning On An Arduino Uno

Even though machine learning, AKA ‘deep learning’ or ‘artificial intelligence’, has been around for several decades now, it’s only recently that computing power has become fast enough to do anything useful with it.

However, to fully understand how a neural network (NN) works, [Dimitris Tassopoulos] has stripped the concept down to pretty much the simplest example possible – a 3-input, 1-output network – and run inference on a number of MCUs, including the humble Arduino Uno. Miraculously, the Uno turned in an impressively quick prediction time of 114.4 μs!
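A 3-input, 1-output network is really just a single neuron: a dot product, a bias, and an activation function. Here’s the forward pass in a few lines of Python with made-up weights (the trained values live in [Dimitris]’s repo); this is essentially the arithmetic the Uno crunches in those 114.4 μs.

```python
# One-neuron inference: y = sigmoid(w . x + b). Weights are illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

weights = np.array([9.67, -0.21, -4.63])  # made-up "trained" weights
bias = -4.0                               # made-up bias

def predict(x):
    return sigmoid(np.dot(weights, x) + bias)

print(predict(np.array([1, 0, 0])))  # ~0.997 for this toy weight set
```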

While we didn’t test the code on an MCU, we just happened to have Jupyter Notebook installed, so we ran the same code on a Raspberry Pi directly from [Dimitris]’s Bitbucket repo.

He explains on the project pages that, now that the hype around AI has died down a bit, it’s the right time for engineers to get into the nitty-gritty of the theory and start using some of the ‘tools’ such as Keras, which have now matured into something fairly useful.

In part 2 of the project, we get to see the guts of a more complicated NN with 3 inputs, a hidden layer of 32 nodes, and 1 output, which runs on an Uno with a much slower prediction time of 5600 μs.
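In Keras terms, that part-2 architecture might be defined something like the snippet below. We’re assuming a standard dense/ReLU/sigmoid stack here; the exact activations and training details are in [Dimitris]’s repo.

```python
# A 3-32-1 dense network: (3*32 + 32) + (32*1 + 1) = 161 trainable parameters.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(3,)),  # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),                  # single output
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```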

This exploration of ML in the embedded world is NOT ‘high level’ research stuff that tends to be inaccessible and hard to understand. We have covered Machine Learning On Tiny Platforms Like Raspberry Pi And Arduino before, but not with such an easy and thoroughly practical example.

Little Lamp To Learn Longer Leaps

Reinforcement learning is a subset of machine learning in which the machine is scored on its performance by an evaluation function. Over the course of a training session, behavior that improves the final score is positively reinforced, gradually building towards an optimal solution. [Dheera Venkatraman] thought it would be fun to use reinforcement learning to make a little robot lamp move. But before that can happen, he had to build the hardware and prove its basic functionality with a manual test script.
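To make the idea concrete, here’s a bare-bones Python illustration of reinforcement in its crudest random-search form: perturb the control parameters, keep the change when the score improves. The evaluation function below is a fake quadratic bowl so the example runs on its own; the lamp would report real hop distance instead, and serious training would use a proper RL algorithm.

```python
# Random-search "reinforcement": keep parameter tweaks that raise the score.
import numpy as np

rng = np.random.default_rng(0)

def run_episode(params):
    """Stand-in evaluation function; the real one would score an actual hop."""
    target = np.array([0.3, -0.7, 0.5])     # pretend optimum
    return -np.sum((params - target) ** 2)  # higher is better

params = np.zeros(3)   # e.g. servo timing/amplitude knobs
best = run_episode(params)
for step in range(200):
    candidate = params + rng.normal(scale=0.1, size=3)
    score = run_episode(candidate)
    if score > best:   # positive reinforcement: keep what worked
        params, best = candidate, score
print(params, best)
```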

The project was inspired by the hopping lamp of the Pixar Animation Studios logo, and this particular form of locomotion has a few counterparts in the natural world. But hoppers of the natural world don’t take the shape of a Luxo lamp, making this project an interesting challenge. [Dheera] published all of his OpenSCAD files for this 3D-printed lamp so others can join in the fun. Inside the lamp head is an LED ring to illuminate where we’d expect a light bulb, while also leaving room in the center for a camera. The articulation servos are driven by a PCA9685 I2C PWM driver board, and he has written and released code to interface such boards with the Robot Operating System (ROS), which orchestrates the lamp’s features. This completes the underlying hardware components and associated software foundations for this robot lamp.
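If you want to wiggle a servo on one of those PCA9685 boards without pulling in ROS, a few lines of Python with Adafruit’s CircuitPython libraries will do. The channel assignment and angles below are made up for illustration; [Dheera]’s actual interface lives in his ROS driver code.

```python
# Sweep one servo channel on a PCA9685 as a quick smoke test.
import time
import board
import busio
from adafruit_pca9685 import PCA9685
from adafruit_motor import servo

i2c = busio.I2C(board.SCL, board.SDA)
pca = PCA9685(i2c)
pca.frequency = 50                  # standard 50 Hz hobby-servo signal

hip = servo.Servo(pca.channels[0])  # assumed: one lamp joint on channel 0
for angle in (60, 120, 90):         # a tentative "hello world" wiggle
    hip.angle = angle
    time.sleep(0.5)
pca.deinit()
```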

Once all the parts had been printed, the electronics wired, and everything assembled, [Dheera] hacked together a simple “Hello World” script to verify that his mechanical design was good enough to get started. The video embedded after the break was taken at OSH Park’s Bring-A-Hack afterparty for Maker Faire Bay Area 2019. The motion sequence was frantically hand-coded in 15 minutes, but these tentative baby hops will serve as a great baseline. The future hopping performance of control algorithms trained by reinforcement learning will show how far the lamp has come from this humble “Hello World” hop.

[Dheera] had previously created the shadow clock and is no stranger to ROS, having created the ROS topic text visualization tool for debugging. We will be watching to see how robot Luxo evolves; hopefully it doesn’t find a way to cheat! Want to play with reinforcement learning, but prefer wheeled robots? Here are a few options.


Nvidia Teaching Robots To Master IKEA Kitchens

The current wave of excitement around machine learning kicked off when graphics processors were repurposed to make training deep neural networks practical. Nvidia found itself the engine of a new revolution and seized the opportunity to help push the frontiers of research. Its research lab in Seattle will focus on one such field: making robots smart enough to work alongside humans in an IKEA kitchen.

Today’s robots are mostly industrial machines that require workspaces designed for robots. They run day and night, performing repetitive tasks, usually inside cages to keep squishy humans out of harm’s way. Robots will need to be a lot smarter about their surroundings before we can safely dismantle those cages. While some industrial robots are making a start in this arena, they have a hard time justifying their price premium. (See, for example, the financial difficulties of Rethink Robotics, maker of the Baxter and Sawyer robots.)

So there’s a lot of room for improvement in this field, and the evolution will need a training environment offering tasks of varying difficulty for robots – anywhere from the rigorously structured environments where robots work well today to the dynamic, unstructured environments where robots are hopelessly lost. Lab lead Dr. Dieter Fox explained why a kitchen is ideal: a meticulously cleaned and organized kitchen is very similar to an industrial setting, and from there we can gradually make the kitchen more challenging for a robot. For example, today’s robots can easily pick up a can, with its rigid, regular shape, but what about a half-full bag of flour? And from there, they can learn to pick up a piece of fresh fruit without bruising it. These tasks share challenges with many tasks outside of a kitchen.

This isn’t about building a must-have home cooking robot; it’s about working through the range of challenges shared with common kitchen tasks. The lab has a lot of neat hardware, but its success will be measured by the software, and like all research, published results should be reproducible by other labs. You don’t have a high-end robotics lab in your house, but you do have a kitchen. That’s why it’s not just any kitchen, but an IKEA kitchen: they’re standardized, affordable, and available around the world for other robot researchers to benchmark against.

Most of us can experiment in a kitchen, IKEA or not. We have access to all the other tools we need: affordable AI hardware from Google, from BeagleBone, and from Nvidia. And we certainly have no shortage of robot arms and manipulators on these pages, ranging from a small laser-cut MeArm to our 2018 Hackaday Prize winner Dexter.

AI At The Edge Hack Chat

Join us Wednesday at noon Pacific time for the AI at the Edge Hack Chat with John Welsh from NVIDIA!

Machine learning was once the business of big iron like IBM’s Watson or the nearly limitless computing power of the cloud. But the power in AI is moving away from data centers to the edge, where IoT devices are doing things once unheard of. Embedded systems capable of running modern AI workloads are now cheap enough for almost any hacker to afford, opening the door to applications and capabilities that were once only science fiction dreams.

John Welsh is a Developer Technology Engineer with NVIDIA, a leading company in the edge computing space. He’ll be dropping by the Hack Chat to discuss NVIDIA’s edge offerings, like the Jetson Nano we recently reviewed. Join us as we discuss NVIDIA’s complete Jetson embedded AI product lineup, getting started with edge AI, and where edge AI is headed.


Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, May 1 at noon Pacific time. If time zones have got you down, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.

Finding Plastic Spaghetti With Machine Learning

Among 3D printer owners, “spaghetti” is the common term for the tangled mess of stringy plastic that’s often the result of a failed print. Fear of their print bed turning into a hot plate of PLA spaghetti is enough to keep many users from leaving their machines operating overnight or while they’re out of the house. Accordingly, we’ve seen a number of methods that allow the human operator to watch their print remotely to make sure everything is progressing smoothly.

But unless you plan on keeping your eyes on your phone the entire time you’re out of the house, there’s still a chance some PETG pasta might sneak its way out. Enter the Spaghetti Detective, an open source project that lets machine learning take over when you can’t sit watching the printer all day. Their system plugs into OctoPrint to monitor your print in real time and pause it if things start looking particularly stringy. The concept is still under development, but judging by the gallery of results submitted by users, the system seems to have a knack for identifying non-edible noodles.

Once the software comes out of beta, it looks like the team is going to try to monetize it by providing hosting and monitoring services for a monthly fee, but as it’s an open source project, you’re also able to run the software on your own machine. The documentation notes, though, that the lowly Raspberry Pi doesn’t have quite what it takes to handle the image recognition routines, so you’ll need a proper computer if you want to self-host the service. Could be a good use for that old laptop you’ve got kicking around the lab.

As demonstrated in the video after the break, the system’s “spaghetti confidence” is shown with a simple-to-understand gauge: green is a good-looking print, and red means the detective is getting a sniff of the stringy stuff. If your print dips into the red too much, OctoPrint is commanded to pause the print. The user can then look at the last image from the printer and decide to either cancel the print entirely, or resume if the Spaghetti Detective got a little overzealous.
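The pause mechanism itself is plain OctoPrint REST API territory, which makes a DIY version easy to sketch. In the Python below, the detector is a stub standing in for the Spaghetti Detective’s actual model, and the snapshot URL is the usual mjpg-streamer default on an OctoPi image; host, API key, and threshold are all assumptions.

```python
# Watchdog loop: snapshot -> spaghetti score -> pause the job if too stringy.
import time
import requests

OCTOPRINT = "http://octopi.local"
SNAPSHOT = OCTOPRINT + "/webcam/?action=snapshot"  # typical mjpg-streamer URL
HEADERS = {"X-Api-Key": "YOUR_API_KEY"}            # your OctoPrint API key

def spaghetti_score(jpeg_bytes):
    """Stand-in for the real detector: return 0..1 spaghetti confidence."""
    return 0.0

while True:
    frame = requests.get(SNAPSHOT, timeout=5).content
    if spaghetti_score(frame) > 0.8:               # deep in the red zone
        requests.post(OCTOPRINT + "/api/job", headers=HEADERS,
                      json={"command": "pause", "action": "pause"})
        break
    time.sleep(10)
```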

Frankly, it’s a brilliant idea and we’re very interested to see where it goes from here. Assuming you’ve got OctoPrint controlling your 3D printer, there are already some very clever monitoring systems out there, but since spaghetti isn’t the only thing a rogue 3D printer can cook up, having an extra line of defense sounds like a good idea to us.
