Truthsayer Uses Facial Recognition To See If You’re Telling The Truth

It’s hard to watch [Mark Zuckerberg]’s 2018 Congressional testimony and not come to the conclusion that he is, at a minimum, quite a bit different than the average person. Of course, having built a multibillion-dollar company that drastically changed everything about the way people communicate is pretty solid evidence of that, but the footage at least made a fun test case for this AI truth-detecting algorithm.

Now, we’re not saying that anyone in these videos was lying, and neither is [Fletcher Heisler]. His algorithm, which analyzes video of a person and uses machine vision to pick up cues that might be associated with the stress of untruthfulness, is far from perfect. But as the first video below shows, it is a lot of fun to see it at work. The idea is to capture data like pulse rate, gaze direction, blink rate, mouth posture, and even hand position and use them as a proxy for lying. The second video, from [Fletcher]’s recent DEFCON talk, has much more detail.

The key to all this is finding human faces in a video — a task that seemed to fail suspiciously frequently when [Zuck] was on camera — using OpenCV and MediaPipe’s Face Mesh. The subject’s pulse is detected by watching for subtle changes in the color of their cheeks as blood flows through them, a trick we’ve heard about plenty of times but never before seen presented so clearly and executed so simply. Gaze direction, blinking, and lip compression are fairly easy to detect too. [Fletcher] also threw in the FER library for facial expression recognition, to get an idea of the subject’s mood. Together, these cues form a rough estimate of the subject’s truthiness, which [Fletcher] is quick to point out is just for entertainment purposes and totally shouldn’t be used on your colleagues on the next Zoom call.
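For the curious, here's a minimal sketch of what the face-landmark step can look like with OpenCV and MediaPipe's Face Mesh. This isn't [Fletcher]'s code: the video file name, the landmark index, and the patch size are placeholders, and a real pulse estimate needs filtering over many frames rather than a single color sample.

```python
# Minimal sketch: find face landmarks in a video with MediaPipe Face Mesh,
# then sample a cheek patch whose color changes could hint at pulse.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    max_num_faces=1, refine_landmarks=True,
    min_detection_confidence=0.5, min_tracking_confidence=0.5)

cap = cv2.VideoCapture("testimony.mp4")  # placeholder file name
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        continue  # no face found in this frame
    landmarks = results.multi_face_landmarks[0].landmark
    h, w = frame.shape[:2]
    # Sample a small patch around a cheek-area landmark (index chosen for
    # illustration, not necessarily the one Truthsayer uses).
    cheek = landmarks[205]
    x, y = int(cheek.x * w), int(cheek.y * h)
    patch = frame[y - 5:y + 5, x - 5:x + 5]
    print("mean cheek color (BGR):", patch.reshape(-1, 3).mean(axis=0))
cap.release()
```

Blink rate and gaze fall out of the same landmark set: compare eyelid landmark spacing or iris position over time instead of sampling cheek color.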

Does [Fletcher]’s facial mesh look familiar? It should, since we once watched him twitch his way through a coding interview.

Continue reading “Truthsayer Uses Facial Recognition To See If You’re Telling The Truth”

Purpose-Built Plotter Pitches In To Solve Wordblitz On Your Phone

It seems like most hackers have never played a game without at least wondering how to cheat at it. It’s not that we’re a dishonest lot, at least not as a rule. It’s more that most games hold less challenge for us than does figuring out how to reverse engineer the game’s mechanics. We don’t intend to cheat; it just sort of happens.

Or at least that’s the charitable way to look at such smartphone game cheats as this automated word-search puzzle solver. The game is Wordblitz, which is basically an implementation of classic Boggle along with extra features to release more dopamine and keep you playing. Not one to fall for that trick, [ghettobastler] whipped up a quick laser-cut MDF X-Y gantry, added a stylus made from a cotton swab tipped with aluminum foil, and built a vision system around a simple webcam. The bed of the gantry has a capacitive plate so the stylus can operate the phone, along with a frame of ArUco fiducial markers to aid in locating it.

A Raspberry Pi handles the machine vision part of the process, using OpenCV to estimate the phone’s location and extract the current game tiles. A solver that [ghettobastler] had previously written finds the words in the game field; a script then streams G-code to the plotter to peck out the answers at blazing speed, or at least faster than even [Peggy Hill] could manage. See the video below for a sample game being solved.
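To give a flavor of the vision half, here's a rough sketch of locating points on the bed from the ArUco frame and turning a pixel coordinate into a G-code move. It's not [ghettobastler]'s code: the marker dictionary, the assumption that IDs 0-3 sit at the bed corners, the 200 mm bed size, and the OpenCV 4.7+ detector API are all guesses.

```python
# Rough sketch: map webcam pixels to plotter coordinates using the ArUco
# markers framing the phone, then emit a G-code move for a target point.
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("bed.jpg")  # placeholder webcam capture
corners, ids, _ = detector.detectMarkers(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

if ids is not None and len(ids) >= 4:
    # Assume markers 0-3 are placed at known bed corners; match them by ID.
    order = ids.flatten().argsort()
    centers = np.array([corners[i][0].mean(axis=0) for i in order[:4]],
                       dtype=np.float32)
    bed_mm = np.array([[0, 0], [200, 0], [200, 200], [0, 200]],
                      dtype=np.float32)  # placeholder bed dimensions
    H, _ = cv2.findHomography(centers, bed_mm)

    # Any detected tile center (in pixels) can now become a plotter target.
    tile_px = np.array([[320, 240]], dtype=np.float32).reshape(-1, 1, 2)
    tile_mm = cv2.perspectiveTransform(tile_px, H)[0][0]
    print(f"G0 X{tile_mm[0]:.1f} Y{tile_mm[1]:.1f}")
```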

One word of warning if you choose to build this: [ghettobastler]’s puzzle-solving algorithm is based on a French dictionary, so you’ll have to re-teach it for other languages. But whatever language it’s in, this reminds us a bit of some of the Wordle solvers we’ve seen recently.

Continue reading “Purpose-Built Plotter Pitches In To Solve Wordblitz On Your Phone”

This Machine-Vision Ekranoplan Might Just Follow You Home

What is it that’s not quite either a plane or a boat, but has characteristics of both? There are probably a lot of things that fit that description, but the one that [Nick Rehm] is working on is known as an ekranoplan. Specifically, he’s looking to make the surface-skimming ground-effect vehicle operate autonomously.

If you think you’ve heard about ekranoplans around here before, you’d be right — we’ve covered a cool LIDAR-controlled model ekranoplan that [rctestflight] worked on about a year ago, and more recently, [ThinkFlight]’s attempts to make an autonomous ekranoplan that can follow behind a boat. The latter is where [Nick] enters the collaboration, and the featherweight foam ground-effect vehicle shown in the video below is his test platform.

After sorting out the basic airframe design and getting the LIDAR integrated, he turned his attention to the autonomous bit, which relies on a Raspberry Pi 4 running ROS and a camera with a wide-angle lens. The Pi uses machine vision algorithms to find an “AprilTag” fiducial marker in the scene, which gives the flight controller information about the relative orientation of the ekranoplan to the tag. [Nick] tested tag tracking using an electric longboard, and the model ekranoplan did an admirable job of not only managing the ground effect, but also staying on target right behind him. And hats off to [Nick] for keeping all the balls in the air and not breaking his neck in the process.
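For a taste of the tag-tracking step, here's an illustrative sketch using the pupil-apriltags library rather than [Nick]'s actual ROS nodes; the camera intrinsics, tag size, and follow distance are placeholder values, and a real flight controller would consume these errors through PID loops rather than print statements.

```python
# Illustrative sketch (not [Nick]'s code): find an AprilTag, estimate its pose
# relative to the camera, and derive simple steering/throttle errors.
import cv2
from pupil_apriltags import Detector  # pip install pupil-apriltags

detector = Detector(families="tag36h11")
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0  # placeholder camera intrinsics
TAG_SIZE = 0.15  # tag edge length in meters -- a guess, not the project's value

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    tags = detector.detect(gray, estimate_tag_pose=True,
                           camera_params=(fx, fy, cx, cy), tag_size=TAG_SIZE)
    if tags:
        t = tags[0].pose_t.flatten()  # tag position in camera frame [x, y, z], meters
        lateral_error = t[0]          # left/right offset -> rudder/roll command
        distance_error = t[2] - 3.0   # hold ~3 m behind the tag (arbitrary setpoint)
        print(f"lateral {lateral_error:+.2f} m, distance {distance_error:+.2f} m")
```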

We’re looking forward to seeing what [Nick] built here end up in [ThinkFlight]’s big ekranoplan build. Ground-effect vehicles like these are undeniably cool, and it seems like they’ve got the potential to solve some interesting transportation problems.

Continue reading “This Machine-Vision Ekranoplan Might Just Follow You Home”


Machine Vision Helps You Terminate Failing 3D Print Jobs

If you’re a 3D printer user you’re probably familiar with that dreaded feeling of returning to your printer a few hours after submitting a big job, only to find that it threw an error and stopped printing, or worse, turned half a spool of filament into a useless heap of twisted plastic. While some printers come with remote monitoring facilities, [Kutluhan Aktar]’s doesn’t, so he built a device that keeps a watchful eye on his 3D printer and notifies him if anything’s amiss.

The device does this by tracking the movement of the print head using a camera and looking for any significant changes in motion. If, for example, the Y-axis suddenly stops moving and doesn’t resume within a reasonable amount of time, it will generate a warning message and send it to its owner through Telegram. If all three axes stop moving, then either the print is finished or some serious error occurred, both of which require user intervention.
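As a rough idea of how such a watchdog can be wired up, here's a stripped-down sketch, not [Kutluhan]'s actual code: it assumes you already get per-axis positions from some tracker, uses placeholder Telegram credentials and thresholds, and a real version would rate-limit its warnings.

```python
# Simplified sketch: flag stalled axes and complain over Telegram.
import time
import requests  # pip install requests

BOT_TOKEN = "123456:ABC..."   # placeholder Telegram bot token
CHAT_ID = "000000"            # placeholder chat ID
STALL_SECONDS = 120           # how long an axis may sit still before we worry

def notify(text):
    requests.get(f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
                 params={"chat_id": CHAT_ID, "text": text})

last_pos = {"x": None, "y": None, "z": None}
last_move = {"x": time.time(), "y": time.time(), "z": time.time()}

def check_axes(pos):
    """pos: dict of current axis positions from whatever tracker you're using."""
    now = time.time()
    for axis, value in pos.items():
        # Treat the first reading, or any move beyond a small deadband, as motion.
        if last_pos[axis] is None or abs(value - last_pos[axis]) > 1.0:
            last_move[axis] = now
        last_pos[axis] = value
    stalled = [a for a in pos if now - last_move[a] > STALL_SECONDS]
    if len(stalled) == len(pos):
        notify("All axes idle -- print finished or something went badly wrong.")
    elif stalled:
        notify(f"Axis {', '.join(stalled)} hasn't moved in {STALL_SECONDS} s -- check the printer!")
```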

[Kutluhan] used a HuskyLens AI camera, which can detect objects and output a set of 3D coordinates describing their motion. A set of QR-like AprilTags attached to the moving parts of the 3D printer helps the camera identify the relevant components. The software runs on a Raspberry Pi housed in a 3D-printed enclosure with a T-800 Terminator head on top to give it a bit of extra presence.

[Kutluhan]’s description of the project covers lots of detail on how to set up the camera and hook it up to a Telegram bot so it can send automated messages, making it an interesting read even if you’re not planning to 3D print something to check on your 3D printer. After all, software like OctoPrint has many similar features, but having an independent observer can still be a good safeguard against some types of catastrophic failure.

Continue reading “Machine Vision Helps You Terminate Failing 3D Print Jobs”


Review: Vizy Linux-Powered AI Camera

Vizy is a Linux-based “AI camera” built around the Raspberry Pi 4 that uses machine learning and machine vision to pull off some neat tricks, with a design centered on hackability. I found it ridiculously simple to get up and running, and it was just as easy to make changes of my own and start getting ideas.

Person and cat with machine-generated tags identifying them
Out of the box, Vizy is only a couple lines of Python away from being a functional Cat Detector project.

I was running pre-installed examples written in Python within minutes, and editing that very same code in about 30 seconds more. Even better, I did it all without installing a development environment, or even leaving my web browser, for that matter. I have to say, it made for a very hacker-friendly experience.

Vizy comes from the folks at Charmed Labs; this isn’t their first stab at smart cameras, and it shows. They also created the Pixy and Pixy 2 cameras, of which I happen to own several. I have always devoured anything that makes machine vision more accessible and easier to integrate into projects, so when Charmed Labs kindly offered to send me one of their newest devices, I was eager to see what was new.

I found Vizy to be a highly polished platform with a number of truly useful hardware and software features, and a focus on accessibility and ease of use that I really hope to see more of in future embedded products. Let’s take a closer look.

Continue reading “Review: Vizy Linux-Powered AI Camera”


Solving Wordle By Adding Machine Vision To A 3D Printer

Truth be told, we haven’t jumped on the Wordle bandwagon yet, mainly because we don’t need to be provided with yet another diversion — we’re more than capable of finding our own rabbit holes to fall down, thank you very much. But the word puzzle does look intriguing, and since the rules and the interface are pretty simple, it’s no wonder we’ve seen a few efforts like this automated Wordle solver crop up lately.

The goal of Wordle is to find a specific five-letter, more-or-less-common English word in as few guesses as possible. Clues are given at each turn by color-coding the letters to indicate whether they appear in the word and whether they’re in the right position. [iamflimflam1]’s approach was to mount a Raspberry Pi camera over the bed of a 3D printer and fit a phone stylus in place of the print head. A phone running Wordle is placed on the printer bed, and OpenCV is used to find both the screen of the phone and the position of the phone on the printer bed. From there, the robot uses the stylus to enter an opening word, analyzes the colors of the boxes, and homes in on a solution.
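The solver side doesn't need anything exotic. Here's a toy sketch of the candidate-narrowing step, independent of [iamflimflam1]'s implementation, with a tiny stand-in word list and a naive rule that ignores the subtleties of repeated letters.

```python
# Toy sketch of the guess-narrowing step: filter a word list using Wordle-style
# feedback ("g" = green/right spot, "y" = yellow/in word, "x" = gray/absent).
# Real solvers also weigh letter frequencies and handle duplicate letters properly.
def matches(word, guess, feedback):
    for i, (g, f) in enumerate(zip(guess, feedback)):
        if f == "g" and word[i] != g:
            return False
        if f == "y" and (g not in word or word[i] == g):
            return False
        if f == "x" and g in word:
            return False
    return True

words = ["crane", "slate", "grade", "trace", "brace"]  # stand-in dictionary
# Suppose we guessed "crane" and the answer is "grade": feedback is "xggxg".
candidates = [w for w in words if matches(w, "crane", "xggxg")]
print(candidates)  # ['grade']
```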

The video below shows the bot in use, and source code is available if you want to try it yourself. If you need a deeper dive into Wordle solving algorithms, and indeed other variant puzzles in the *dle space, check out this recent article on reverse engineering the popular game.

Continue reading “Solving Wordle By Adding Machine Vision To A 3D Printer”

Invisible 3D Printed Codes Make Objects Interactive

An interesting research project out of MIT shows that it’s possible to embed machine-readable labels into 3D printed objects using nothing more than an FDM printer and filament that is transparent to IR. The method is called InfraredTags: embed something like a QR code or an ArUco marker into an object’s structure, and that label can be picked up by a camera, opening up all sorts of interactive possibilities.

One simple proof of concept is a wireless router with its SSID embedded into the side of the device, and the password embedded into a different code on the bottom to ensure that physical access is required to obtain the password. Mundane objects can have metadata embedded into them, or provide markers for augmented reality functionality, like tracking the object in 3D.

How are the codes actually embedded? The process is straightforward with the right tools. The team used a specialty filament from vendor 3dk.berlin that looks nearly opaque in the visible spectrum but transmits roughly 45% of IR light. The machine-readable label gets embedded within the walls of a printed object, either by using a combination of IR PLA and air gaps to represent the geometry of the code, or by making a multi-material print using IR PLA and regular (non-IR-transmitting) PLA. Both provide enough contrast for an IR-sensitive camera to detect the label, although the multi-material version works a little better overall. Sadly, the average mobile phone camera by itself isn’t sufficiently IR-sensitive to passively read these embedded tags, so the researchers used easily available cameras with no IR-blocking filters, like the Raspberry Pi NoIR.
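As a rough idea of the reading side, here's a hedged sketch that assumes the embedded label is a QR code and that a simple contrast stretch on a NoIR camera frame is enough; the paper's actual pipeline is more involved than this.

```python
# Hedged sketch: try to decode an embedded QR code from a no-IR-filter camera frame.
import cv2

frame = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder capture
# Stretch contrast, since the code only shows up as a faint IR transmission difference.
norm = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX)
norm = cv2.equalizeHist(norm)

data, points, _ = cv2.QRCodeDetector().detectAndDecode(norm)
if data:
    print("Decoded:", data)   # e.g. the router's SSID
else:
    print("No code found -- try adjusting illumination or exposure")
```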

The PDF has deeper details of the implementation for those of you who want to know more, and you can see a demonstration of a few different applications in the video, embedded below. Determining the provenance of 3D printed objects is a topic of some debate in the industry, and it’s not hard to see how technology like this could be used to covertly identify objects without compromising their appearance.

Continue reading “Invisible 3D Printed Codes Make Objects Interactive”