Real Or Fake? Robot Uses AI To Find Waldo

The last few weeks have seen a number of tech sites reporting on a robot which can find and point out Waldo in those “Where’s Waldo” books. Designed and built by Redpepper, an ad agency, the robot pairs a UARM Metal arm with a Raspberry Pi controlling the show.

A Logitech C525 webcam captures images, which are processed on the Pi with OpenCV and then sent to Google’s cloud-based AutoML Vision service. AutoML is trained on numerous images of Waldo, which it uses to attempt a pattern match. If a match is found, the coordinates are fed to PYUARM, and the UARM will literally point Waldo out.
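In outline, that loop is: grab a frame, ask the cloud model where Waldo is, and move the arm to the answer. Here is a minimal sketch of the idea, where detect_waldo() is a hypothetical stand-in for the AutoML Vision request and the pixel-to-arm mapping is our own invention; the pyuarm calls are shown as we understand that library, not as Redpepper’s actual code:

```python
# Hypothetical sketch of the find-and-point loop. detect_waldo() stands in
# for the AutoML Vision call, and the pixel-to-arm mapping is invented.
import cv2
import pyuarm


def detect_waldo(frame):
    """Send the frame to a cloud vision model and return (x, y) pixel
    coordinates of the best match, or None. Placeholder for AutoML Vision."""
    raise NotImplementedError


def pixel_to_arm(px, py, frame_w, frame_h):
    # Toy linear mapping from image pixels to arm workspace (mm); a real
    # calibration would depend on camera mounting and arm geometry.
    x = 150 + (px / frame_w) * 200
    y = -100 + (py / frame_h) * 200
    return x, y


cam = cv2.VideoCapture(0)          # Logitech C525 on /dev/video0
arm = pyuarm.UArm()                # assumes the pyuarm library and a UARM Metal

ok, frame = cam.read()
if ok:
    hit = detect_waldo(frame)
    if hit is not None:
        px, py = hit
        x, y = pixel_to_arm(px, py, frame.shape[1], frame.shape[0])
        arm.set_position(x, y, 20, speed=150)   # hover the hand over Waldo
```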

While this is a totally plausible project, we have to admit a few things caught our jaundiced eye. The Logitech C525 has a field of view (FOV) of 69°. While we don’t have dimensions for the UARM Metal, it looks like the camera is less than a foot in the air. Amazon states that “Where’s Waldo Deluxe Edition” measures 10″ x 0.2″ x 12.5″. That means the open book will be roughly 10″ x 25″. The robot is going to have a hard time imaging a surface that large in a single shot. What’s more, the C525 is a 720p camera, so there isn’t a whole lot of pixel density for pattern matching. Finally, there’s the rubber hand the robot uses to point out Waldo. Wouldn’t that hand block at least some of the camera’s view to the left?
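A quick back-of-the-envelope check makes the point: treating the quoted 69° as the horizontal FOV and assuming the camera sits about a foot above the page, the visible strip is roughly 2·h·tan(FOV/2) wide, which comes up well short of a 25″ spread:

```python
# Rough field-of-view check: how wide a strip can the camera see from ~12 inches up?
import math

fov_deg = 69.0          # FOV figure quoted above (our assumption: horizontal)
height_in = 12.0        # assumed camera height above the book, in inches

coverage = 2 * height_in * math.tan(math.radians(fov_deg / 2))
print(f"Visible width at {height_in} in: {coverage:.1f} in")   # ~16.5 in

# A 1280-pixel-wide 720p frame spread over that width leaves roughly:
print(f"Pixels per inch: {1280 / coverage:.0f}")               # ~78 px/in
```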

We’re not going to jump out and call this one fake just yet — it is entirely possible that the robot took a mosaic of images and used that to pattern match. Redpepper may have used a bit of movie magic to make the process more interesting. What do you think? Let us know down in the comments!

Bringing Augmented Reality To The Workbench

[Ted Yapo] has big ideas for using Augmented Reality as a tool to enhance an electronics workbench. His concept uses a camera and projector system working together to detect objects on a workbench, and project information onto and around them. [Ted] envisions virtual displays from DMMs, oscilloscopes, logic analyzers, and other instruments projected onto a convenient place on the actual work area, removing the need to glance back and forth between tools and the instrument display. That’s only the beginning, however. A good camera and projector system could read barcodes on component bags to track inventory, guide manual PCB assembly by projecting which components go where, display reference data, and more.
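The core plumbing for a system like this is a calibration that maps what the camera sees to where the projector draws. As a minimal sketch of the general technique (our illustration, not [Ted]’s code), assuming a flat workbench and OpenCV: estimate a homography from a few known camera-to-projector point pairs, then warp detected object locations into projector space before drawing overlays.

```python
# Sketch of camera-to-projector mapping for a planar workbench (illustrative only).
import numpy as np
import cv2

# Corresponding points: where calibration markers appear in the camera image,
# and where the projector actually drew them (projector pixel coordinates).
cam_pts = np.array([[102, 87], [513, 95], [520, 390], [110, 398]], dtype=np.float32)
proj_pts = np.array([[0, 0], [1280, 0], [1280, 720], [0, 720]], dtype=np.float32)

H, _ = cv2.findHomography(cam_pts, proj_pts)


def camera_to_projector(pt):
    """Map one (x, y) point detected in the camera frame into projector pixels."""
    src = np.array([[pt]], dtype=np.float32)    # shape (1, 1, 2) as OpenCV expects
    dst = cv2.perspectiveTransform(src, H)
    return tuple(dst[0, 0])


# Example: draw a label next to a component the camera found at pixel (300, 220).
print("Draw overlay at projector pixel:", camera_to_projector((300, 220)))
```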

An open-source, accessible machine vision system working in tandem with a projector would open a lot of doors. Fortunately [Ted] has prior experience in this area, having previously written the computer vision code for room-scale dynamic projection environments. That’s solid experience he can apply to designing a workbench-scale system as his entry for The Hackaday Prize.

Rubik’s Robot So Fast It Looks Like A Glitch In The Matrix

From Ferraris to F-16s, some things just look fast. This Rubik’s Cube solving robot not only looks fast, it is fast: it solved a standard cube in 380 milliseconds. Blink during the video below and you’ll miss it — even on the high-speed footage we had trouble keeping track of the number of moves this solution took. It looked like about 20.

Beating the previous robot record of 637 milliseconds is just the icing on the cake of a very cool build undertaken by [Ben Katz]. He and his collaborator [Jared] put together a robot with a decidedly industrial look — aluminum extrusion chassis, six pancake servo motors with high-precision optical encoders, and polycarbonate panels for explosion containment, which proved handy during development. The motors had to be modified to allow the encoders to be attached to the rear, and custom motor controllers were fabricated. [Jared] came up with a unique board to synchronize the six motors and prevent collisions between faces. Machine vision is provided by just two PlayStation Eye cameras; mounted at opposite corners of the enclosure, each camera can see three faces at a time. They had a little trouble distinguishing the red from the orange, which was solved with a Sharpie.
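Red and orange sit uncomfortably close together in hue, which is presumably why a Sharpie was the quick fix. For anyone reproducing the vision side, a common approach (our sketch, not necessarily what [Ben] and [Jared] did) is to sample each sticker’s centre, convert to HSV, and split on hue thresholds tuned for the actual cameras and lighting:

```python
# Illustrative sticker colour classification; the thresholds are ours and would
# need tuning under the build's real lighting — they are not from the original.
import cv2


def classify_sticker(bgr_patch):
    """Classify a small BGR patch of one sticker into a cube colour name."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    h, s, v = [int(c) for c in hsv.reshape(-1, 3).mean(axis=0)]

    if s < 60 and v > 150:
        return "white"
    if h < 8 or h > 170:      # hue wraps around zero: red
        return "red"
    if h < 22:                # the tricky band right next to red
        return "orange"
    if h < 38:
        return "yellow"
    if h < 85:
        return "green"
    return "blue"
```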

[Ben] and [Jared] think they can shave a few milliseconds here and there with tweaks, but even as it is, this is a great lesson in optimization and integration. We’ve covered Rubik’s robots before, like this two-motor slow and steady design and this six-motor build that solves a cube in less than a second.

Continue reading “Rubik’s Robot So Fast It Looks Like A Glitch In The Matrix”

JeVois Machine Vision Camera Nails Demo Mode

JeVois is a small, open-source, smart machine vision camera that was funded on Kickstarter in early 2017. I backed it because cameras that embed machine vision elements are steadily growing more capable, and JeVois boasts an impressive range of features. It runs embedded Linux and can process video at high frame rates using OpenCV algorithms. It can run standalone, or as a USB camera streaming raw or pre-processed video to a host computer for further action. In either case it can communicate with (and be controlled by) other devices via a serial port.
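That serial interface is what makes it easy to hang a JeVois off a microcontroller or a host PC. As a rough illustration, here is how you might poke the camera from a host with pyserial; the port name and the command strings are assumptions based on our reading of the JeVois documentation, so check them against the current docs:

```python
# Minimal host-side poke at a JeVois camera over its serial-over-USB port.
# Port name and command set are assumptions; adjust for your platform
# (e.g. COM3 on Windows).
import serial

with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as jevois:
    jevois.write(b"info\n")            # ask the camera to describe itself
    for line in jevois.readlines():
        print(line.decode(errors="replace").rstrip())

    # Route module output messages to the USB serial port so the host sees them.
    jevois.write(b"setpar serout USB\n")
```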

But none of that is what really struck me about the camera when I received my unit. What really stood out was the demo mode. The team behind JeVois nailed an effective demo mode for a complex device. That didn’t happen by accident, and the results are worth sharing.

Continue reading “JeVois Machine Vision Camera Nails Demo Mode”

Sorting Two Tonnes Of Lego

Have you ever taken an interest in something, and then found it’s got a little out of hand as your acquisitions spiral into a tidal wave of bags and boxes? [Jacques Mattheij] found himself in just that position with Lego. His online purchases had run away with him, and he had a garage packed with “two metric tonnes” of the little coloured bricks.

Disposing of Lego is fairly straightforward; there is a lively second-hand market for it. But to maximise the return it is important to be in control of what you have, to avoid packaging up fake, discoloured, damaged, or dirty parts. This can become a huge job if you do it by hand, so he built a Lego sorting machine to do it for him.

The machine starts with a hopper for the loose Lego, with a slow belt that tips individual parts down a chute onto a faster belt derived from a running trainer. On that belt they run past a camera whose images are analysed by a neural net, and based on its identification the parts are directed into the appropriate bins with carefully timed jets of compressed air.
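The control loop boils down to: grab a frame as a part flies past, classify it, and fire the right air jet at the right moment. The sketch below is our simplified take on that loop, assuming a Keras image classifier and a hypothetical fire_valve() helper driving the solenoids; [Jacques]’s actual implementation differs in the details.

```python
# Simplified sort loop: camera -> neural-net classifier -> timed air jet.
# The model file, bin list, and fire_valve() are placeholders, not the original code.
import time
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("lego_classifier.h5")       # hypothetical trained classifier
BINS = ["2x4_brick", "plate", "slope", "technic", "other"]


def fire_valve(bin_index, delay_s):
    """Wait for the part to reach the jets, then pulse the matching solenoid."""
    time.sleep(delay_s)                        # travel time depends on belt speed
    # GPIO pulse for valve `bin_index` would go here.


cam = cv2.VideoCapture(0)
while True:
    ok, frame = cam.read()
    if not ok:
        break
    patch = cv2.resize(frame, (128, 128)).astype(np.float32) / 255.0
    probs = model.predict(patch[np.newaxis], verbose=0)[0]
    fire_valve(int(np.argmax(probs)), delay_s=0.35)
```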

The result is a surprisingly fast way to sort large amounts of bricks without human intervention. He’s posted some videos, one of which we’ve placed below the break, so you can see for yourselves.

Continue reading “Sorting Two Tonnes Of Lego”

Smartphone Will Destroy You At Air Hockey

Most of us carry a spectacularly powerful computer in our pocket, which we rarely use for much more than web browsing, social media, and maybe the occasional phone call. Our mobile phones are technological miracles, but their potential sometimes seems wasted.

It’s always a pleasure to see something that makes use of a mobile phone to drive some nuts-and-bolts hardware. [Jose Julio]’s project does just that, using the phone as the brains behind a robotic air hockey table.

Readers with long memories will remember previous air hockey tables from [Jose], built from 3D printer components controlled by an Arduino Mega, with a webcam suspended above the field of play. This version moves the camera, machine vision, and game strategy into an Android app, leaving the Arduino to drive the hardware under wireless network commands from the phone above.
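The strategy half of an app like this largely comes down to predicting where the puck will cross the robot’s defence line from its last few observed positions, then sending the paddle there. A bare-bones sketch of that idea (our own illustration, not [Jose]’s Android code, with made-up table coordinates) looks like this:

```python
# Toy puck-intercept prediction from two successive camera observations.
# Coordinate system, frame timing, and table size are assumptions for illustration.
def predict_intercept(p0, p1, defence_y, table_width):
    """Given the puck at p0 then p1 (x, y) one frame apart, estimate the x
    position where it will cross the robot's defence line at y = defence_y."""
    vx, vy = p1[0] - p0[0], p1[1] - p0[1]
    if vy <= 0:
        return None                      # puck moving away from the robot
    t = (defence_y - p1[1]) / vy         # frames until it reaches the line
    x = p1[0] + vx * t
    # Fold reflections off the side walls back into the table width.
    period = 2 * table_width
    x = x % period
    if x > table_width:
        x = period - x
    return x


# Example: puck seen at (200, 300) then (210, 330) on a 400-unit-wide table.
print(predict_intercept((200, 300), (210, 330), defence_y=700, table_width=400))
```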

The result, which you can see in the video below the break, is an extremely fast-paced game, with the robot looking unbeatable. If you want to build your own, there are full instructions and code on GitHub; alternatively, he sells the project as a kit via a link on the page linked above.

Continue reading “Smartphone Will Destroy You At Air Hockey”

EmpathyBot recognizing emotion

Raspberry Pi Robot That Reads Your Emotions

It’s getting easier and easier to add machine intelligence to your hacks, even to the point where you sometimes don’t have to install any special software. In this case [Dexter Industries] has added the ability to read human emotions to their EmpathyBot robot by making use of Google Cloud Vision.

Press a button on the robot and it moves forward until it’s a certain distance from an object. It then takes a picture and sends it off to Google Cloud Vision along with a request to do face detection. The response that Google returns is in JSON format and, if it finds a face, includes the likelihood of the face being joyful, sorrowful, angry, or surprised. The robot parses that response and delivers an appropriate canned line using the eSpeak text-to-speech software, e.g. “You seem happy! Tell me why you are so happy!”.
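For reference, the heart of that exchange is a single POST to the Cloud Vision API asking for FACE_DETECTION, which returns per-face fields such as joyLikelihood and sorrowLikelihood. Here is a trimmed-down sketch of the call using an API key and the requests library; the key and image path are placeholders, and Dexter’s robot wraps the same request in its own capture-and-speak logic:

```python
# Minimal Google Cloud Vision face-detection request and response parsing.
# API_KEY and the image filename are placeholders.
import base64
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"

with open("face.jpg", "rb") as f:
    content = base64.b64encode(f.read()).decode()

body = {"requests": [{
    "image": {"content": content},
    "features": [{"type": "FACE_DETECTION", "maxResults": 1}],
}]}

faces = requests.post(ENDPOINT, json=body).json()["responses"][0].get("faceAnnotations", [])
if faces:
    face = faces[0]
    for emotion in ("joyLikelihood", "sorrowLikelihood", "angerLikelihood", "surpriseLikelihood"):
        print(emotion, face.get(emotion))
```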

[Dexter] has made the source code available on GitHub. It’s written in Python and is easy to read, even for anyone with just a little programming experience. The video after the break gives a number of demonstrations, including some with non-human subjects.

Continue reading “Raspberry Pi Robot That Reads Your Emotions”