Machine Vision Keeps Track Of Grubby Hands

Can you remember everything you’ve touched in a given day? If you’re being honest, the answer is, “Probably not.” We humans are a tactile species, with an outsized proportion of both our motor and sensory nerves devoted to our hands. We interact with the world through our hands, and unfortunately that may mean inadvertently spreading disease.

[Nick Bild] has a potential solution: a machine-vision system called Deep Clean, which monitors a scene and records anything in it that has been touched. [Nick]’s system uses a Jetson Xavier and a stereo camera to detect depth in the scene; he built his camera from a pair of Raspberry Pi cams and a Pi 3B+, but other depth cameras like a Kinect could probably do the job. The idea is to watch the scene for human hands — OpenPose is the tool he chose for that job — and correlate their depth in the scene with the depth of objects. Touch a doorknob or a light switch, and a marker is left on the scene. A cleaning crew could then look at the marked-up scene to see which areas need extra attention. We can think of plenty of applications that extend beyond the current crisis, as the ability to map touched areas seems generally useful.
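
We haven’t picked through [Nick]’s code, but the core trick is compact: flag a touch whenever a hand keypoint’s depth matches the background’s depth at the same pixel, then latch a marker there. Here’s a minimal, hypothetical sketch of that idea in Python with OpenCV; the 20 mm threshold, the saved depth map, and the hand_keypoints input (which would come from OpenPose) are all our assumptions, not [Nick]’s actual implementation:

```python
import cv2
import numpy as np

TOUCH_MM = 20  # hypothetical: a hand within 20 mm of a surface counts as a touch

# Per-pixel depth of the empty scene, in millimeters, captured once up front
scene_depth = np.load("scene_depth.npy")
touched = np.zeros(scene_depth.shape, dtype=np.uint8)  # accumulated touch markers

def mark_touches(hand_keypoints, live_depth):
    """hand_keypoints: (x, y) pixel coordinates from a pose estimator."""
    for x, y in hand_keypoints:
        x, y = int(x), int(y)
        # A touch is when the hand's depth coincides with the surface's depth
        if abs(int(live_depth[y, x]) - int(scene_depth[y, x])) < TOUCH_MM:
            cv2.circle(touched, (x, y), 15, 255, -1)  # leave a persistent marker

def overlay(frame):
    """Paint everything that has been touched red, for the cleaning crew."""
    frame[touched > 0] = (0, 0, 255)
    return frame
```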

[Nick] has been getting some mileage out of that Xavier lately — he’s used it to build an AI umpire and shades that help you find lost stuff. Who knows what else he’ll find to do with them during this time of confinement?


Silicone And AI Power This Prayerful Robotic Intercessor

Even in a world as far off the rails as this one currently is, we’re going to go out on a limb and say that this machine-learning, servo-powered prayer bot is going to be the strangest thing you see today. We’re happy to be wrong about that, though, and if we are, please send links.

“The Prayer,” as [Diemut Strebe]’s work is called, may look strange, but it’s another in a string of pieces by various artists exploring just what it means to be human at a time when machines are blurring the line between them and us. The hardware is straightforward: a silicone rubber representation of a human nasopharyngeal cavity, servos for moving the lips, and a speaker for the vocals. Those are generated by a machine-learning algorithm trained on the sacred texts of many of the world’s major religions, including the Christian Bible, the Koran, the Bhagavad Gita, Taoist texts, and the Book of Mormon. The algorithm analyzes the structure of sacred verses and generates random prayers and hymns that sound a lot like the real thing, which Amazon Polly then turns into speech. That the lips move in synchrony with the ersatz devotions only adds to the otherworldliness of the piece. Watch it in action below.
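
The generative model is [Strebe]’s own and we haven’t seen it, but the vocal half is stock Amazon Polly. A minimal sketch of that half in Python with boto3, where generate_verse() stands in for the trained model and the voice choice is ours, not necessarily the artist’s:

```python
import boto3

def generate_verse() -> str:
    # Stand-in for the trained language model, which we haven't seen;
    # any text generator could slot in here.
    return "Blessed are the quiet waters, for they carry the morning's song."

# Assumes AWS credentials are already configured for boto3
polly = boto3.client("polly")

response = polly.synthesize_speech(
    Text=generate_verse(),
    OutputFormat="mp3",
    VoiceId="Matthew",  # voice choice is arbitrary
)
with open("prayer.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```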

We’ve featured several AI-based projects that poke at some interesting questions. This kinetic sculpture that uses machine learning to achieve balance comes to mind, while AI has even been employed in the search for spirits from the other side.


Now You Can Be Big Brother Too, With A Raspberry Pi License Plate Reader

If you are wowed by some of the abilities of a Tesla but can’t quite afford one, perhaps you can enhance your current ride with a few upgrades. That’s what [Robert Lucian Chiriac] did with his Land Rover: to gain some insight into automotive machine vision, he fitted it with a Raspberry Pi and camera running an automatic number plate recognition system.


His exceptionally comprehensive write-up takes us through the entire process, from creating a rather useful set of 3D-printed brackets for the Pi and camera, through choosing the right combination of artificial intelligence software components, to the eventual decision to offload part of the processing to a cloud service over a 4G mobile phone link. For that he used Cortex, a system designed for easy deployment of machine learning models, which he is very impressed with.
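
Cortex’s job is to wrap the model in an HTTP endpoint, which means the in-car side can be as simple as compressing frames and POSTing them over the 4G link. A hypothetical sketch of that client loop in Python; the endpoint URL and the JSON response shape are assumptions, not [Robert]’s actual interface:

```python
import cv2
import requests

# Hypothetical URL; Cortex exposes each deployed model as a web API
API_URL = "https://example-cortex-endpoint.com/license-plate-reader"

cap = cv2.VideoCapture(0)  # the Pi camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # JPEG-compress before shipping, to keep 4G bandwidth (and the bill) down
    _, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 80])
    reply = requests.post(API_URL, data=jpg.tobytes(),
                          headers={"Content-Type": "image/jpeg"}, timeout=5)
    # Response shape is assumed: a list of recognized plate strings
    for plate in reply.json().get("plates", []):
        print(plate)
```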

The result is a camera in his car that identifies and reads the plates on the vehicles around it. In one way that has something of Big Brother about it, but in another it points to a less sinister future in which ever more accessible AI applications can run self-contained, without a cloud service. It’s an inevitable progression whose privacy questions may go beyond the scope of a Hackaday piece, but it’s also a fascinating area of our remit that should be accessible at the hacker level.

You can see the system in action in the video below the break, as well as find the code in his GitHub repository.


New Contest: Train All The Things

The old way was to write clever code that could handle every possible outcome. But what if you don’t know exactly what your inputs will look like, or just need a faster route to the final results? The answer is Machine Learning, and we want you to give it a try during the Train All The Things contest!

It’s hard to find a more buzz-worthy term than Artificial Intelligence. Right now, where the rubber meets the road in AI is Machine Learning, and it’s never been easier to get your feet wet in this realm.

From 8-bit microcontrollers to common single-board computers, you can do cool things like object recognition or color classification quite easily. Grab a beefier processor, a dedicated ASIC, or lean heavily on the power of the cloud and you can do much more, like facial identification and gesture recognition. But the sky’s the limit. A big part of this contest is that we want everyone to get inspired by what you manage to pull off.

Yes, We Do Want to See Your ML “Hello World” Too!

Wait, wait, come back here. Have we already scared you off? Don’t read AI or ML and assume it’s not for you. We’ve included a category for “Artificial Intelligence Blinky” — your first attempt at doing something cool.

Need something simple to get you excited? How about Machine Learning on an ATtiny85 to sort Skittles candy by color? That uses just one color sensor for a quick and easy way to harvest data that forms a training set. But you could also climb the ladder a bit and make yourself a camera-based LEGO sorter, or use an IMU in a magic wand to detect which spell you’re casting. Need more scientific inspiration? We’re hoping someday someone will build a training set that classifies microscope shots of micrometeorites. But we’d be equally excited by projects that tackle robot locomotion, natural language, and all the other wild ideas you can come up with.
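
To show just how little “machine learning” something like the Skittles sorter needs, here’s a hypothetical nearest-centroid classifier over RGB color-sensor readings in Python. The numbers are invented for illustration; the idea is that training happens offline on a PC, and the classify step is cheap enough to rewrite in a few lines of C on the ATtiny85:

```python
import numpy as np

# Toy training set: (R, G, B) readings harvested by holding known candies
# under the color sensor. Values here are made up for illustration.
readings = {
    "red":    [(220, 40, 35), (210, 52, 48), (225, 45, 40)],
    "green":  [(60, 190, 70), (55, 200, 80), (64, 185, 75)],
    "yellow": [(230, 210, 60), (225, 205, 55), (235, 215, 65)],
}

# "Training" is just averaging each class into a centroid...
centroids = {c: np.mean(v, axis=0) for c, v in readings.items()}

def classify(rgb):
    # ...and classifying is picking the nearest centroid by Euclidean distance
    return min(centroids, key=lambda c: np.linalg.norm(centroids[c] - rgb))

print(classify((215, 48, 42)))  # -> 'red'
```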

Our guess is you don’t really need prizes to get excited about this one… most people have been itching for a reason to try out machine learning for quite some time. But we do have $100 Tindie gift certificates for the most interesting entry in each of the four contest categories: ML on the edge, ML on the gateway, AI blinky, and ML in the cloud.

Get started on your entry. The Train All The Things contest is sponsored by Digi-Key and runs until April 7th.

Hackaday Links: December 29, 2019

The retrocomputing crowd will go to great lengths to recreate the computers of yesteryear, and no matter which species of computer is being restored, getting it just right is a badge of honor in the community. The case and keyboard obviously play a big part in that look, so when a crowdfunding campaign to create new keycaps for the C64 was announced, Commodore fans jumped to fund it. Sadly, more than four years later, the promised keycaps haven’t been delivered. One disappointed backer, Jim Drew, decided he was sick of waiting, so he delved into the world of keycap injection molding and started his own competing campaign. Jim details his adventures in his crowdfunding campaign, which makes for good reading even if you’re not into Commodore refurbishment. Here’s hoping Jim has better luck than the competition did.

Looking for anonymity in our increasingly surveilled world? You’re not alone, and in fact, we predict facial recognition spoofing products and methods will be a growth industry in the new decade. Aside from the obvious – and often illegal – approach of wearing a mask that blocks most of the features machine learning algorithms use to quantify your face, one now has another option, in the form of a colorful pattern that makes you invisible to the YOLOv2 algorithm. The pattern, which looks like a soft-focus crowd scene rendered in Mardi Gras colors, won’t make the algorithm think you’re someone else, but it will prevent you from being classified as a person. It won’t work with any other AI algorithm, but it’s still an interesting phenomenon.

We saw a great hack come in this week about using an RTL-SDR to track down a water leak. Clayton’s water bill suddenly skyrocketed, and he wanted to track down the source. Luckily, his water meter uses the encoder receive-transmit (ERT) protocol on the 900 MHz ISM band to report his usage, so he threw an SDR dongle and rtlamr at the problem. After logging his data, massaging it a bit with some Python code, and graphing water consumption over time, he found that water was being used even when nobody was home. That helped him find the culprit: leaky flapper valves in the toilets were causing a slow drip that ran up the bill. There were probably other ways to attack the problem, but we like this approach just fine.
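
If you want to reproduce the approach, the analysis half is only a few lines. A sketch of it in Python; the rtlamr invocation and the JSON field names below match rtlamr’s meter-reading output as we understand it, so check them against your own log before trusting the graph:

```python
import json
import matplotlib.pyplot as plt

# Parse a log captured with something like:
#   rtlamr -filterid=YOUR_METER_ID -format=json > meter.log
readings = []
with open("meter.log") as f:
    for line in f:
        msg = json.loads(line)
        readings.append(msg["Message"]["Consumption"])

# The meter reports a running total, so per-interval usage is the difference
# between successive readings. A nonzero floor at 3 AM is the signature of a leak.
usage = [b - a for a, b in zip(readings, readings[1:])]
plt.plot(usage)
plt.xlabel("sample")
plt.ylabel("usage per interval (meter units)")
plt.show()
```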

Are your flex PCBs making you cry? Friend of Hackaday Drew Fustini sent us a tip on teardrop pads to reduce the mechanical stress on traces when the board flexes. The trouble is that KiCad can’t natively create teardrop pads. Thankfully an action plugin makes teardrops a snap. Drew goes into a bit of detail on how the plugin works and shows the results of some test PCBs he made with them. It’s a nice trick to keep in mind for your flexible design work.

Lego Machine Uses Machine Learning To Sort Itself Out

In our opinion, the primary evidence of a properly lived childhood is an enormous box of every conceivable Lego piece, from simple bricks to girders and gears, all with a small town’s worth of minifigs swimming through it. It takes years of birthdays and Christmases to accumulate a Lego collection best measured by the pound, but like anything worth doing, it’s worth overdoing.

But what to do with such a collection? Digging through it to find Just the Right Piece™ can be frustrating, and bringing order to the chaos with manual sorting is just so impractical. How about putting some of those bricks to work with a machine-vision Lego sorter built from Lego?

[Daniel West]’s approach is hardly new – we’ve even featured brick-built Lego sorters before – but we’re impressed by its architecture. First, the mechanical system is amazing. It uses a series of conveyors to transport bricks from a hopper, winnowing the stream down as it goes. The final step is a vibratory feeder that places one piece at a time onto a conveyor. Those parts pass under a camera attached to a Raspberry Pi, where OpenCV does background subtraction on the video stream, applies bounding boxes to the parts, and runs the images through a convolutional neural network (CNN) that’s been trained on a database of every Lego part. Servo-controlled gates then direct the parts into one of 18 bins. See it in action in the video below.
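
The background-subtraction front end is the kind of thing OpenCV makes almost trivial. A minimal sketch of that stage in Python; the blob-size threshold is a guess, and the CNN hookup, where [Daniel]’s real work lives, is only stubbed:

```python
import cv2

# MOG2 learns the empty belt as "background"; any part riding over it
# shows up as foreground to be boxed and cropped.
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # knock out speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:  # blob-size threshold is a guess
            continue
        x, y, w, h = cv2.boundingRect(c)
        crop = frame[y:y + h, x:x + w]
        # 'crop' is what would be handed to the CNN; the classifier and the
        # servo gate logic happen downstream of this crop.
```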

We must admit that we’re not sure what the sorting criteria are, as some bins seem nearly as chaotic as the input mix. Still, we appreciate the fine engineering, and award extra style points for all the Lego goodness.


Simplified AI On Microcontrollers

Artificial intelligence is taking the world by storm. Rather than a Terminator-style apocalypse, though, it seems to be more of a useful tool for getting computers to solve problems on their own. This isn’t just for supercomputers, either; you can load AI onto some of the smallest microcontrollers as well. TensorFlow Lite is a popular tool for this, but getting it to work on your particular microcontroller can be a pain, unless you’re using an Espruino.

This project adds support for TensorFlow to this class of microcontrollers without any fussing around with obtuse build tools. Adding a single line of code creates an instance, all without having to compile anything or even reboot. TensorFlow is a powerful software tool for microcontrollers, and having it this accessible is a great leap forward.

So, what can you do with this tool? The team behind this build is using TensorFlow on an open smart watch that can detect hand gestures, among other things. They have also opened up these tools for use in a browser, which lets you run the AI software on an emulated Espruino without needing a physical device. There’s a lot going on with this one, and it’s a bonus that it’s all open source and ready to be turned into anything you might need, like turning yourself into a Street Fighter.