Machine Learning App Remembers Names So You Don’t Have To

Depending on your point of view, real-life conversations with strangers can either be refreshing or terrifying. Some of us are glib and at ease in new social situations, while others are sure that the slightest flub will haunt them forever. And perhaps chief among these conversational faux pas is forgetting the name of the person who just introduced themselves a few seconds before.

Rather than commit himself to a jail of shame on such occasions, [Caleb] fought back with this only slightly creepy name-recalling smartphone app. The non-zero creep factor comes from the fact that, as [Caleb] points out, the app crosses lines that most of us would find unacceptable if Google or Amazon did it — like listening to your every conversation. It does this not to direct ads to you based on your conversations, but to fish out the name of your interlocutor from the natural flow of the conversation.

It turns out to be a tricky problem, even with the help of named-entity recognition (NER), which basically looks for the names of things in natural text. Apache OpenNLP, the NER library used here, works well at pulling out names, but figuring out whether they’re part of an introduction or just a bit of gossip about a third party is where [Caleb] put the bulk of his coding effort. That, and trying to make the whole thing at least a little privacy-respecting. See the video below for a demo.
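
For a feel of what that two-stage approach might look like, here’s a minimal Python sketch that uses spaCy’s NER as a stand-in for Apache OpenNLP (which is a Java library); the introduction patterns, the 20-character window, and the function name are all invented for illustration, not lifted from [Caleb]’s code.

```python
# A minimal sketch of the two-stage idea, using spaCy's NER in Python as a
# stand-in for Apache OpenNLP (a Java library); the introduction patterns
# and lookbehind window are invented for illustration.
import re
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model with a PERSON entity type

# Hypothetical introduction phrases; a real app would need far more coverage.
INTRO_PATTERNS = [
    re.compile(r"\bmy name is\b", re.I),
    re.compile(r"\bi'?m\b", re.I),
    re.compile(r"\bthis is\b", re.I),
]

def names_from_introductions(utterance: str) -> list[str]:
    """Return PERSON entities that look like self-introductions,
    rather than gossip about some absent third party."""
    names = []
    for ent in nlp(utterance).ents:
        if ent.label_ != "PERSON":
            continue
        # Keep the name only if an introduction phrase immediately precedes it.
        preceding = utterance[: ent.start_char][-20:]
        if any(p.search(preceding) for p in INTRO_PATTERNS):
            names.append(ent.text)
    return names

print(names_from_introductions("Hi, I'm Alice, nice to meet you."))      # likely ['Alice']
print(names_from_introductions("Did you hear what Bob did yesterday?"))  # likely []
```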

To be sure, this doesn’t do much more than a simple ‘remind me of your name again?’ would, but without the embarrassment. It’s still pretty cool though, and we’re especially jazzed to learn about NER and the tons of applications for it. Those are projects for a future day, though. We’re just glad to see that [Caleb] has moved on from monitoring the bodily functions of his dog and his kid. At least for now.


Truthsayer Uses Facial Recognition To See If You’re Telling The Truth

It’s hard to watch [Mark Zuckerberg]’s 2018 Congressional testimony and not come to the conclusion that he is, at a minimum, quite a bit different than the average person. Of course, having built a multibillion-dollar company that drastically changed everything about the way people communicate is pretty solid evidence of that, but the footage at least made a fun test case for this AI truth-detecting algorithm.

Now, we’re not saying that anyone in these videos was lying, and neither is [Fletcher Heisler]. His algorithm, which analyzes video of a person and uses machine vision to pick up cues that might be associated with the stress of untruthfulness, is far from perfect. But as the first video below shows, it is a lot of fun to see it at work. The idea is to capture data like pulse rate, gaze direction, blink rate, mouth posture, and even hand position and use them as a proxy for lying. The second video, from [Fletcher]’s recent DEFCON talk, has much more detail.

The key to all this is finding human faces in a video — a task that seemed to fail suspiciously frequently when [Zuck] was on camera — using OpenCV and MediaPipe’s Face Mesh. The subject’s pulse is detected by watching for subtle changes in the color of a subject’s cheeks as blood flows through them, which we’ve heard about plenty of times but never before seen presented so clearly and executed so simply. Gaze direction, blinking, and lip compression are fairly easy to detect too. [Fletcher] also threw in the FER library for facial expression recognition, to get an idea of the subject’s mood. Together, these cues form a rough estimate of the subject’s truthiness, which [Fletcher] is quick to point out is just for entertainment purposes and totally shouldn’t be used on your colleagues on the next Zoom call.
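
To make the pipeline a little more concrete, here’s a rough Python sketch of the pulse-from-cheek-color trick using MediaPipe’s Face Mesh and OpenCV; the cheek landmark index, patch size, frame rate, and pulse band are illustrative guesses, not [Fletcher]’s actual values.

```python
# A rough sketch of pulse estimation from cheek color, assuming MediaPipe's
# Face Mesh and OpenCV; landmark index, patch size, and frame rate are guesses.
import cv2
import mediapipe as mp
import numpy as np

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
cap = cv2.VideoCapture("testimony.mp4")  # hypothetical input clip
green_trace = []  # mean cheek green-channel values; blood flow modulates these

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        continue
    lm = results.multi_face_landmarks[0].landmark
    h, w = frame.shape[:2]
    # Sample a small patch around one cheek landmark (index 50 is a guess).
    cx, cy = int(lm[50].x * w), int(lm[50].y * h)
    patch = frame[cy - 5 : cy + 5, cx - 5 : cx + 5]
    green_trace.append(patch[:, :, 1].mean())  # OpenCV frames are BGR

cap.release()

# The dominant frequency of the detrended green signal approximates the pulse.
fps = 30.0  # assumed frame rate of the clip
sig = np.array(green_trace) - np.mean(green_trace)
spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
band = (freqs > 0.7) & (freqs < 4.0)  # plausible human pulse range, 42-240 BPM
print(f"Estimated pulse: {freqs[band][np.argmax(spectrum[band])] * 60:.0f} BPM")
```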

Does [Fletcher]’s facial mesh look familiar? It should, since we once watched him twitch his way through a coding interview.


Machine Learning Gives Cats One More Way To Control Their Humans

For those who choose to let their cats live a more or less free-range life, there are usually two choices. One, you can adopt the role of servant and run for the door whenever the cat wants to get back inside from their latest bird-murdering jaunt. Or two, install a cat door and let them come and go as they please, sometimes with a “present” for you in their mouth. Heads they win, tails you lose.

There’s another way, though: just let the cat ask to be let back in. That’s the approach that [Tennis Smith] took with this machine-learning kitty doorbell. It’s based on a Raspberry Pi 4, which lives inside the house, and a USB microphone that’s outside the front door. The Pi uses TensorFlow Lite to classify the sounds it picks up outside, and when one of those sounds fits the model of a cat’s meow, a message is dispatched to AWS Lambda. From there a text message is sent to alert [Tennis] that the cat is ready to come back in.
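
A skeletal version of that listen-classify-notify loop might look something like the Python below; the model file, meow label index, score threshold, and Lambda function name are all hypothetical stand-ins, not [Tennis]’s actual setup.

```python
# A skeletal listen-classify-notify loop, assuming a YAMNet-style TensorFlow
# Lite audio classifier; model file, label index, and Lambda name are made up.
import json
import boto3
import numpy as np
import sounddevice as sd
from tflite_runtime.interpreter import Interpreter

SAMPLE_RATE = 16000
MEOW_LABEL = 78  # hypothetical index of "cat meow" in the model's label map

interpreter = Interpreter(model_path="yamnet.tflite")  # assumed model file
inp = interpreter.get_input_details()[0]
interpreter.resize_tensor_input(inp["index"], [SAMPLE_RATE])  # 1 s of audio
interpreter.allocate_tensors()
out = interpreter.get_output_details()[0]
lambda_client = boto3.client("lambda")

while True:
    # Grab one second of audio from the USB mic outside the door.
    audio = sd.rec(SAMPLE_RATE, samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()
    interpreter.set_tensor(inp["index"], audio.flatten())
    interpreter.invoke()
    scores = np.squeeze(interpreter.get_tensor(out["index"]))
    if scores.ndim > 1:  # some models emit one score vector per time frame
        scores = scores.mean(axis=0)
    if scores[MEOW_LABEL] > 0.5:  # a dog version would watch a "bark" index
        # Hand off to AWS Lambda, which takes care of the actual text message.
        lambda_client.invoke(
            FunctionName="catDoorbellNotify",  # hypothetical function name
            Payload=json.dumps({"event": "meow"}),
        )
```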

There’s a ton of useful information included in the repo for this project, including step-by-step instructions for getting Amazon Web Services working on the Pi. If you’re a dog person, fear not: changing from meows to barks is as simple as tweaking a single line of code. And if you’d rather not be at the beck and call of a cat but still want to avoid the evidence of a prey event on your carpet, machine learning can help with that too.

[via Tom’s Hardware]

Machine Learning Baby Monitor Prevents The Hunger Games

Newborn babies can be tricky to figure out, especially for first-time parents. Despite the abundance of unsolicited advice proffered by anyone who ever had a baby before — and many who haven’t — most new parents quickly get in sync with the baby’s often ambiguous signals. But [Caleb] took his observations of his newborn a step further and built a machine-learning hungry baby early warning system that’s pretty slick.

Normally, babies are pretty unsubtle about being hungry, and loudly announce their needs to the world. But it turns out that crying is a lagging indicator of hunger, and that there are a host of face, head, and hand cues that precede the wailing. [Caleb] based his system on Google’s MediaPipe library, using his baby monitor’s camera to track such behaviors as lip smacking, pacifier rejection, fist mouthing, and rooting, all signs that someone’s tummy needs filling. By putting together a system to recognize these cues and assign weights to them, [Caleb] now gets a text before the baby gets to the screaming phase, to the benefit of not only the little nipper but also his sleep-deprived servants. The video below has some priceless bits in it; don’t miss [Baby Caleb] at 5:11 or the hilarious automatic feeder gag at the end.
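
In spirit, the weighting scheme boils down to something like this toy Python snippet; the cue names come from the write-up, but the weights and alert threshold are invented for illustration.

```python
# A toy version of the weighted-cue idea; the weights and threshold are
# invented for illustration, not taken from [Caleb]'s model.
HUNGER_WEIGHTS = {
    "lip_smacking": 0.30,
    "pacifier_rejection": 0.25,
    "fist_mouthing": 0.25,
    "rooting": 0.20,
}
ALERT_THRESHOLD = 0.5  # score at which the "feed me soon" text goes out

def hunger_score(cues: dict[str, bool]) -> float:
    """Combine cues (detected upstream, e.g. via MediaPipe landmark tests)."""
    return sum(w for cue, w in HUNGER_WEIGHTS.items() if cues.get(cue))

observed = {"lip_smacking": True, "fist_mouthing": True}
if hunger_score(observed) >= ALERT_THRESHOLD:
    print("Text the parents: feeding time is imminent")  # beats the wailing
```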

We’ve seen some interesting videos from [Caleb] recently, mostly having to do with his dog’s bathroom habits and getting help cleaning up afterward. We can only guess how those projects will be leveraged when this kid gets a little older and starts potty training.


Computer Vision Extracts Lightning From Footage

Lightning is one of the more mysterious and fascinating phenomena on the planet. It’s extremely powerful, yet each strike on average only carries enough energy to power an incandescent bulb for an hour. The exact mechanism that starts a lightning strike is still not well understood, yet it happens 45 times per second somewhere on the planet. While we may not gain a deeper scientific understanding of lightning anytime soon, we can at least capture it on camera, thanks to this project, which leverages machine learning and computer vision to pull out the best frames of lightning.

The project’s creator, [Liam], built this as a tool for storm chasers and photographers, so they can record hours of footage without having to comb back through it manually to pull out the frames with lightning strikes. The project borrows from a similar one, but adds Python 3 support and runs on a tiny netbook for easier field deployment. It uses OpenCV for object recognition, takes video files as its source data, and features different modes to recognize different types of lightning.
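
The core trick can be surprisingly simple; here’s a bare-bones Python take on the idea, flagging frames whose overall brightness jumps sharply, with a threshold that’s purely a guess (the real tool’s modes are considerably more refined).

```python
# A bare-bones frame sifter with OpenCV: save frames whose mean brightness
# jumps sharply relative to the previous frame. The threshold is a guess.
import cv2

cap = cv2.VideoCapture("storm_footage.mp4")  # hypothetical source video
FLASH_THRESHOLD = 20.0  # mean-brightness jump that counts as a flash
prev_mean, frame_idx = None, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mean_brightness = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean()
    if prev_mean is not None and mean_brightness - prev_mean > FLASH_THRESHOLD:
        cv2.imwrite(f"strike_{frame_idx:06d}.png", frame)  # keep the hit
    prev_mean = mean_brightness
    frame_idx += 1

cap.release()
```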

The software is free and open source, and releases are supported for both Windows and Linux. So far, [Liam] has been able to capture all kinds of atmospheric electrical phenomena with it, including lightning, red sprites, and elves. We don’t see too many projects involving lightning around here, partly because humans can only generate a fraction of the voltage potential needed for the average lightning strike.

Food Irradiation Detector Doesn’t Use Banana For Scale

How do the potatoes in that sack keep from sprouting on their long trip from the field to the produce section? Why don’t the apples spoil? To an extent, the answer lies in varying amounts of irradiation. Though it sounds awful, irradiation reduces microbial contamination, which improves shelf life. Most people can choose to take it or leave it, but some countries aren’t overly concerned about the irradiation dosages found in, say, animal feed. So where does that leave non-vegetarians?

If that line of thinking makes you want to Hulk out, you’re not alone. [kutluhan_aktar] decided to build an IoT food irradiation detector in an effort to help small businesses make educated choices about the feed they give to their animals. The device predicts the irradiation dosage level from a combination of the food’s weight, its color, and the ionizing radiation it emits after being exposed to sunlight for an appreciable amount of time. Using this information, [kutluhan_aktar] trained a neural network running on a Beetle ESP32-C3 to detect the dosage and display relevant info on a transparent OLED screen. Primarily, the device predicts whether the dosage falls into the Regulated, Unsafe, or just plain Hazardous category.
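
For a rough idea of how such a classifier might be put together before being shrunk down for the microcontroller, here’s an illustrative Python sketch; the feature layout and class names mirror the write-up, but the architecture and placeholder data are ours, not [kutluhan_aktar]’s.

```python
# An illustrative training sketch for a tiny three-class dosage classifier;
# placeholder data and architecture, not [kutluhan_aktar]'s actual model.
import numpy as np
import tensorflow as tf

# Features: [weight_g, R, G, B, radiation]; classes: 0=Regulated, 1=Unsafe, 2=Hazardous
X = np.random.rand(300, 5).astype(np.float32)  # placeholder training data
y = np.random.randint(0, 3, size=300)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(5,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=20, verbose=0)

# Convert to TensorFlow Lite, a common route for running inference on an ESP32.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("irradiation_model.tflite", "wb") as f:
    f.write(tflite_model)
```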

[kutluhan_aktar] lets this baby loose on some uncooked pasta in the short demo video after the break. The macaroni is spread across a load cell to detect the weight, while [kutluhan_aktar] uses a handheld sensor to determine the color.

This isn’t the first time we’ve seen AI on the Hackaday menu. Remember when we tried those AI-created recipes?

Machine Learning Does Its Civic Duty By Spotting Roadside Litter

If there’s one thing that never seems to suffer from supply chain problems, it’s litter. It’s everywhere, easy to spot and — you’d think — pick up. Sadly, most of us seem to treat litter as somebody else’s problem, but with something like this machine vision litter mapper, you can at least be part of the solution.

For the civic-minded [Nathaniel Felleke], the litter problem in his native San Diego was getting to be too much. He reasoned that a map of where the trash is located could help municipal crews with cleanup, so he set about building a system to search for trash automatically. Using Edge Impulse and a collection of roadside images captured from a variety of sources, he built a model for recognizing trash. To find the garbage, a webcam mounted in a car window captures images as he drives, while a Raspberry Pi 4 runs the model and looks for garbage. When roadside litter is found, the Pi uses a Blues Wireless Notecard to send the GPS location of the rubbish to a cloud database via the Notecard’s cellular modem.
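
The “found trash, log the spot” step might look roughly like this in Python with the note-python library; the serial port and Notehub file name here are assumptions, not details from [Nathaniel]’s code.

```python
# A minimal sketch of logging a detection through the Blues Wireless Notecard,
# using the note-python library; the serial port and file name are assumed.
import serial
import notecard

port = serial.Serial("/dev/ttyACM0", 9600)  # Notecard on USB serial (assumed)
card = notecard.OpenSerial(port)

def report_litter(confidence: float) -> None:
    """Grab the Notecard's GPS fix and queue a note for the cloud database."""
    loc = card.Transaction({"req": "card.location"})
    card.Transaction({
        "req": "note.add",
        "file": "litter.qo",  # hypothetical Notehub outbound file
        "body": {
            "lat": loc.get("lat"),
            "lon": loc.get("lon"),
            "confidence": confidence,
        },
    })

# Called whenever the Edge Impulse model fires above its detection threshold.
report_litter(0.87)
```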

Cruising around the streets of San Diego, [Nathaniel]’s system builds up a database of garbage hotspots. From there, it’s pretty straightforward to pull the data and overlay it on Google Maps to create a heatmap of where the garbage lies. The video below shows his system in action.
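
If you wanted to reproduce the mapping step yourself, something like the gmplot library makes it a few-liner; the coordinates below are placeholders, and [Nathaniel] may well have taken a different route.

```python
# A quick heatmap sketch with gmplot, assuming (lat, lon) pairs pulled from
# the cloud database; the coordinates here are placeholders.
import gmplot

litter_spots = [(32.7157, -117.1611), (32.7200, -117.1700)]  # placeholder data
lats, lons = zip(*litter_spots)

gmap = gmplot.GoogleMapPlotter(32.7157, -117.1611, 12)  # centered on San Diego
gmap.heatmap(lats, lons)
gmap.draw("litter_heatmap.html")  # open in a browser to see the hotspots
```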

Yes, driving a personal vehicle around specifically to spot litter just adds more waste to the mix, but you could imagine putting something like this on municipal vehicles that are already driving around cities anyway. Either way, we picked up some neat tips, especially those wireless IoT cards. We’ve seen them used before, but [Nathaniel]’s project gives us a path forward on some ideas we’ve had kicking around for a while.
