Data Mining Home Water Usage: Your Water Meter Knows You A Bit Too Well

The average person has become depressingly comfortable with the surveillance dystopia we live in. For better or for worse, they’ve come to accept the fact that data about their lives is constantly being collected and analyzed. We’re at the point where a sizable chunk of people believe their smartphone is listening in on their personal conversations and tailoring advertisements to overheard keywords, yet few of them are troubled enough by the idea to actually turn the phone off.

But even the most privacy-conscious among us probably wouldn’t consider our water usage to be any great secret. After all, what could anyone possibly learn from studying how much water you use? Well, as [Jason Bowling] has proven with his fascinating water-meter data research, it turns out you can learn a whole hell of a lot by watching water use patterns. By polling a whole-house water flow meter every second and running the resulting data through various machine learning algorithms, [Jason] found there is a lot of personal information hidden in this seemingly innocuous data stream.

The key is that every water-consuming device in your home has a discernible “fingerprint” that, with enough time, can be identified and tracked. Appliances that always use the same amount of water, like an ice maker or dishwasher, are obvious spikes among the noise. But [Jason] was able to pick up even more subtle differences, such as which individual toilet in the home had been flushed and when.
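To make that concrete, here’s a minimal sketch of the first step – it’s our own illustration, not [Jason]’s code – turning a stream of once-per-second flow readings into discrete, summarized events:

```python
# Chop per-second flow readings (liters/min) into discrete usage events,
# each summarized as a (start, duration, volume, peak) fingerprint.

FLOW_THRESHOLD = 0.1   # assumed noise floor, liters/min
MIN_EVENT_SECONDS = 3  # ignore sub-3-second blips

def summarize(flows, start):
    return (start,              # start time, seconds into the log
            len(flows),         # duration in seconds
            sum(flows) / 60.0,  # liters used (1 s samples of L/min)
            max(flows))         # peak flow rate

def extract_events(samples):
    events, run = [], []
    for i, flow in enumerate(samples):
        if flow > FLOW_THRESHOLD:
            run.append(flow)
        elif run:
            if len(run) >= MIN_EVENT_SECONDS:
                events.append(summarize(run, i - len(run)))
            run = []
    if len(run) >= MIN_EVENT_SECONDS:  # log ended mid-event
        events.append(summarize(run, len(samples) - len(run)))
    return events
```

A toilet flush and a dishwasher fill land in very different corners of that feature space, which is exactly what makes the fingerprinting work.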

Further, if you watch the data long enough, you can even start to identify information about individuals within the home. Want to know how many kids are in the family? Monitoring for frequent baths that don’t fill the tub all the way would be a good start. Want to know how restful somebody’s sleep was? A count of how many times the toilet was flushed overnight could give you an idea.

In terms of the privacy implications of what [Jason] has discovered, we’re mildly horrified. Especially since we’ve already seen how utility meters can be sniffed with nothing more exotic than an RTL-SDR. But on the other hand, his write-up is a fantastic look at how you can put machine learning to work in even the most unlikely of applications. The information he’s collected on using Python to classify time series data and create visualizations will undoubtedly be of interest to anyone who’s got a big data problem they’re looking to solve.
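As a taste of how little code that classification step needs, here’s a toy scikit-learn version over hand-built event features; the numbers and labels are invented purely for illustration:

```python
# Classify water events from (duration_s, liters, peak_lpm) features.
from sklearn.ensemble import RandomForestClassifier

X_train = [
    [65, 6.0, 9.0],     # toilet: short, fixed volume
    [70, 6.2, 8.8],
    [5400, 60.0, 4.0],  # dishwasher: long, steady, low flow
    [5600, 58.0, 4.2],
]
y_train = ["toilet", "toilet", "dishwasher", "dishwasher"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict([[68, 6.1, 9.1]]))  # -> ['toilet']
```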

DIY Personal Assistant Robot Hears And Sees All

Who wouldn’t want a robot that can fetch them a glass of water? [Saral Tayal] didn’t just think it; he jumped right in and built his own personal assistant robot. This isn’t just some remote-controlled rover though. The robot actually listens to his voice and recognizes his face.

The body of the robot is the common “Rover 5” platform, to which [Saral] added a number of 3D printed parts. A forklift-like sled gives the robot the ability to pick things up. Some of the parts are more about form than function – [Saral] loves NASA’s Spirit and Opportunity Mars rovers, so he added some simulated solar cells and other greebles.

The Logitech webcam up front is very functional — images are fed to machine learning models, while audio is processed to listen for commands. This robot can find and pick up 90 unique objects.

The robot’s brains are a Raspberry Pi. It uses TensorFlow for object recognition. Some of the models [Saral] is using are pretty large – so big that the Pi could only manage a couple of frames per second at 100% CPU utilization. A Google Coral coprocessor sped things up quite a bit, while only using about 30% of the Pi’s processor.
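For anyone wanting to try the same trick, offloading a TensorFlow Lite model to the Coral from Python mostly comes down to loading the Edge TPU delegate. A minimal sketch, with an assumed model filename (the model must be compiled for the Edge TPU):

```python
# Run a detection model on the Coral accelerator from a Raspberry Pi.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="detect_edgetpu.tflite",  # assumed filename
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a webcam frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()  # the heavy lifting happens on the Coral, not the Pi's CPU

boxes = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```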

It takes several motors to control the robot’s tracks and sled. This is handled by two Roboclaw motor controllers, which are themselves commanded by the Pi.
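If you’re curious what commanding them looks like, BasicMicro publishes a Python library for packet-serial control. This sketch uses it with guessed port and addresses; it’s an illustration, not [Saral]’s code:

```python
# Drive both Roboclaws over one packet-serial bus from the Pi,
# assuming BasicMicro's roboclaw_3 Python library.
from roboclaw_3 import Roboclaw

rc = Roboclaw("/dev/ttyS0", 38400)  # assumed serial port and baud rate
rc.Open()

TRACKS = 0x80  # assumed address of the track controller
SLED   = 0x81  # assumed address of the sled controller

rc.ForwardM1(TRACKS, 64)  # left track, ~half speed (range 0-127)
rc.ForwardM2(TRACKS, 64)  # right track
rc.ForwardM1(TRACKS, 0)   # speed 0 stops the motor
rc.ForwardM1(SLED, 32)    # raise the forklift sled slowly
```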

We’ve seen quite a few mobile robot rovers over the years, but [Saral’s] ‘bot is one of the most functional designs out there. Even better is the fact that it is completely open source. You can find the code and 3D models on his GitHub repo.

Check out a video of the personal assistant rover in action after the break.


Keep Pesky Cats At Bay With A Machine-Learning Turret Gun

It doesn’t take long after getting a cat in your life to learn who’s really in charge. Cats do pretty much what they want to do, when they want to do it, and for exactly as long as it suits them. Any correlation with your wants and needs is strictly coincidental, and subject to change without notice, because cats.

[Alvaro Ferrán Cifuentes] almost learned this the hard way, when his cat developed a habit of exploring the countertops in his kitchen and nearly turned on the cooktop while he was away. To modulate this behavior, [Alvaro] built this AI Nerf turret gun. The business end of the system is just a Nerf gun mounted on a pan-tilt base made from 3D-printed parts and a pair of hobby servos. A webcam rides atop the gun and feeds into a PC running software that implements the YOLOv3 localization algorithm. The program finds the cat, tracks its centroid, and swivels the gun to match it. If the cat stays in the no-go zone above the countertop for three seconds, he gets a dart sent in his general direction. [Alvaro] found that the noise of the gun tracking him was enough to send the cat scampering, proving that cats are capable of learning as long as it suits them.
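The control loop itself is conceptually simple. Here’s a rough sketch of the logic, where detect_cat(), in_no_go_zone(), set_pan(), and fire() are hypothetical stand-ins for the YOLOv3 detector and [Alvaro]’s servo and trigger code:

```python
# Track the cat's centroid and fire after 3 seconds in the no-go zone.
import time
import cv2

FRAME_W = 640
DWELL_SECONDS = 3.0
in_zone_since = None

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    box = detect_cat(frame)  # hypothetical: (x, y, w, h) or None
    if box is None:
        in_zone_since = None
        continue
    x, y, w, h = box
    cx = x + w / 2                      # centroid of the detection
    set_pan((cx - FRAME_W / 2) * 0.1)   # proportional nudge toward the cat
    if in_no_go_zone(box):              # hypothetical countertop test
        in_zone_since = in_zone_since or time.time()
        if time.time() - in_zone_since >= DWELL_SECONDS:
            fire()                      # one dart, then reset the timer
            in_zone_since = None
    else:
        in_zone_since = None
```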

We like this build and appreciate any attempt to bring order to the chaos a cat can bring to a household. It also puts us in mind of [Matthias Wandel]’s recent attempt to keep warm in his shop, although his detection algorithm was much simpler.


AI Recognizes And Locks Out Murder Cats

Anyone with a cat knows that the little purring ball of fluff in your lap is one tiny step away from turning into a bloodthirsty serial killer. Give kitty half a chance and something small and defenseless is going to meet a slow, painful end. And your little killer is as likely as not to show off its handiwork by bringing home its victim – “Look what I did for you, human! Are you not proud?”

As useful as a murder-cat can be, dragging the bodies home for you to deal with can be – inconvenient. To thwart his adorable serial killer [Metric], Amazon engineer [Ben Hamm] turned to an AI system to lock his prey-laden cat out of the house. [Metric] comes and goes as he pleases through a cat flap, which thanks to a solenoid and an Arduino is now lockable. The decision to block entrance to [Metric] is based on an Amazon AWS DeepLens AI camera, which watches the approach to the cat flap. [Ben] trained three models: one to determine if [Metric] is in the scene, one to determine whether he’s coming or going, and one to see if he’s alone or accompanied by a lifeless friend. If prey is detected, [Metric] is locked out for 15 minutes and an automatic donation is made to the Audubon Society – that last bit is pure genius. The video below is a brief but hilarious summary of the project for an audience in Seattle that seems thoroughly amused by the whole thing.
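Boiled down, the decision chain is three yes/no questions in a row. This hypothetical sketch captures the logic, with each DeepLens model reduced to a stub and lock_flap() standing in for the Arduino-driven solenoid:

```python
# Three-model decision chain; all helpers are hypothetical stand-ins.
import time

LOCKOUT_SECONDS = 15 * 60

def judge(frame):
    if not is_metric(frame):         # model 1: is the cat even here?
        return
    if not approaching_door(frame):  # model 2: coming or going?
        return
    if has_prey(frame):              # model 3: alone, or plus one victim?
        lock_flap()
        donate_to_audubon()          # penance, automated
        time.sleep(LOCKOUT_SECONDS)
        unlock_flap()
```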

So your cat isn’t quite the murder fiend that [Metric] is? An RFID-based cat door might suit your needs better.


Blisteringly Fast Machine Learning On An Arduino Uno

Even though machine learning – also hyped as “deep learning” or “artificial intelligence” – has been around for several decades now, it’s only recently that computing power has become fast enough to do anything useful with the science.

However, to fully understand how a neural network (NN) works, [Dimitris Tassopoulos] has stripped the concept down to pretty much the simplest example possible – a 3-input, 1-output network – and run inference on a number of MCUs, including the humble Arduino Uno. The Uno turned in an impressively quick prediction time of 114.4 μsec!
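At this scale the entire network fits in a few lines. Here it is spelled out in Python with made-up weights; in the real project the weights come out of training and get baked into the MCU firmware as constants:

```python
# The whole network: three weights, one bias, one sigmoid.
import math

W = [0.8, -0.4, 0.3]  # one weight per input (illustrative values)
B = 0.1               # bias

def predict(x):
    z = sum(w * xi for w, xi in zip(W, x)) + B  # weighted sum
    return 1.0 / (1.0 + math.exp(-z))           # sigmoid activation

print(predict([1.0, 0.0, 1.0]))  # one inference -- what the Uno does in ~114 usec
```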

While we didn’t test the code on an MCU, we just happened to have Jupyter Notebook installed, so we ran the same code on a Raspberry Pi directly from [Dimitris’s] Bitbucket repo.

He explains in the project pages that, now that the hype around AI has died down a bit, it’s the right time for engineers to get into the nitty-gritty of the theory and start using some of the ‘tools’, such as Keras, which have now matured into something fairly useful.

In part 2 of the project, we get to see the guts of a more complicated NN with 3 inputs, a 32-node hidden layer, and 1 output, which runs on an Uno at a much slower 5600 μsec.
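Reconstructed from that description – so the activation and loss choices are our assumptions – the same topology is only a few lines of Keras:

```python
# A 3-input, 32-node hidden layer, 1-output network in Keras.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(3,)),
    keras.layers.Dense(32, activation="sigmoid"),  # assumed activation
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")  # assumed loss
model.summary()  # 3*32+32 + 32*1+1 = 161 parameters to crunch per inference
```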

This exploration of ML in the embedded world is NOT ‘high level’ research stuff that tends to be inaccessible and hard to understand. We have covered Machine Learning On Tiny Platforms Like Raspberry Pi And Arduino before, but not with such an easy and thoroughly practical example.

Little Lamp To Learn Longer Leaps

Reinforcement learning is a subset of machine learning in which the machine is scored on its performance by an evaluation function. Over the course of a training session, behavior that improves the final score is positively reinforced, gradually building toward an optimal solution. [Dheera Venkatraman] thought it would be fun to use reinforcement learning to make a little robot lamp move. But before that could happen, he had to build the hardware and prove its basic functionality with a manual test script.
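As a toy illustration of that loop – this is not [Dheera]’s code – here’s about the simplest possible policy search, random-search hill climbing, with a stand-in scoring function so the sketch actually runs; a real session would score each episode by measured hop distance:

```python
# Score each episode, keep the parameter tweaks that improve the score.
import random

def run_episode(params):
    # Stand-in score so this sketch runs; the real evaluation function
    # would command one hop and return the distance traveled.
    return -sum((p - 0.3) ** 2 for p in params)

best = [0.5, 0.5, 0.5]  # e.g. joint angles / timings for one hop
best_score = run_episode(best)
for _ in range(1000):
    trial = [p + random.gauss(0, 0.05) for p in best]
    score = run_episode(trial)
    if score > best_score:               # positive reinforcement:
        best, best_score = trial, score  # keep what worked
```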

The lamp is inspired by the hopping desk lamp of Pixar Animation Studios’ logo, and this particular form of locomotion has a few counterparts in the natural world. But hoppers of the natural world don’t take the shape of a Luxo lamp, which makes this project an interesting challenge. [Dheera] published all of his OpenSCAD files for the 3D-printed lamp so others could join in the fun. Inside the lamp head is an LED ring to illuminate where we expect a light bulb, while leaving room in the center for a camera. The articulation servos are driven by a PCA9685 I2C PWM driver board, and he has written and released code to interface such boards with the Robot Operating System (ROS) that orchestrates the lamp’s features. Together, these form the hardware components and software foundations for this robot lamp.
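If you’d rather wiggle a joint before wading into ROS, driving a servo through the PCA9685 takes only a few lines with Adafruit’s ServoKit library; the channel assignment here is an assumption:

```python
# Exercise one joint through the PCA9685, no ROS required.
import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)  # one PCA9685 drives up to 16 PWM channels

BASE_JOINT = 0               # hypothetical channel assignment
for angle in (60, 120, 90):  # crouch, spring, settle
    kit.servo[BASE_JOINT].angle = angle
    time.sleep(0.3)
```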

Once all the parts have been printed, the electronics wired, and everything assembled, [Dheera] hacked together a simple “Hello World” script to verify his mechanical design is good enough to get started. The video embedded after the break was taken at OSH Park’s Bring-A-Hack afterparty for Maker Faire Bay Area 2019. The motion sequence was frantically hand-coded in 15 minutes, but these tentative baby hops will serve as a great baseline. Future hopping performance of control algorithms trained by reinforcement learning will show how far this lamp has grown from this humble “Hello World” hop.

[Dheera] had previously created the shadow clock and is no stranger to ROS, having created the ROS topic text visualization tool for debugging. We will be watching to see how robot Luxo evolves; hopefully it doesn’t find a way to cheat! Want to play with reinforcement learning, but prefer wheeled robots? Here are a few options.


Nvidia Teaching Robots To Master IKEA Kitchens

The current wave of excitement around machine learning kicked off when graphics processors were repurposed to make training deep neural networks practical. Nvidia found themselves the engine of a new revolution and seized their opportunity to help push frontiers of research. Their research lab in Seattle will focus on one such field: making robots smart enough to work alongside humans in an IKEA kitchen.

Today’s robots are mostly industrial machines that require workspaces designed for robots. They run day and night, performing repetitive tasks, usually inside cages to keep squishy humans out of harm’s way. Robots will need to be a lot smarter about their surroundings before we can safely dismantle those cages. While some industrial robots are making a start in this arena, they have a hard time justifying their price premium (witness the financial difficulties of Rethink Robotics, maker of the Baxter and Sawyer robots).

So there’s a lot of room for improvement in this field, and this evolution will need a training environment offering tasks of varying difficulty, anywhere from the rigidly structured settings where robots work well today to the dynamic, unstructured settings where robots are hopelessly lost. Lab lead Dr. Dieter Fox explained why a kitchen is ideal. A meticulously cleaned and organized kitchen is very similar to an industrial setting. From there, the kitchen can gradually be made more challenging for a robot. For example: today’s robots can easily pick up a can, with its rigid, regular shape, but what about a half-full bag of flour? And from there, they can learn to pick up a piece of fresh fruit without bruising it. These tasks share challenges with many tasks outside the kitchen.

This isn’t about building a must-have home cooking robot; it’s about working through the range of challenges shared with common kitchen tasks. The lab has a lot of neat hardware, but its success will be measured by the software, and like all research, published results should be reproducible by other labs. You don’t have a high-end robotics lab in your house, but you do have a kitchen. That’s why it’s not just any kitchen but an IKEA kitchen: they’re standardized, affordable, and available around the world for other robotics researchers to benchmark against.

Most of us can experiment in a kitchen, IKEA or not. We have access to all the other tools we need: affordable AI hardware from Google, from BeagleBone, and from Nvidia. And we certainly have no shortage of robot arms and manipulators on these pages, ranging from a small laser-cut MeArm to our 2018 Hackaday Prize winner Dexter.