Camera Sees Electromagnetic Interference Using An SDR And Machine Vision

It’s one thing to know that your device is leaking electromagnetic interference (EMI), but if you really want to solve the problem, it might be helpful to know where the emissions are coming from. This heat-mapping EMI probe will answer that question, with style. It uses a webcam to record an EMI probe and then overlays a heat map of the interference on the image itself.

Regular readers will note that the hardware end of [Charles Grassin]’s EMI mapper bears a strong resemblance to the EMC probe made from semi-rigid coax we featured recently. Built as a cheap DIY substitute for an expensive off-the-shelf probe set for electromagnetic testing, the probe was super simple: just a semi-rigid coax jumper with one SMA plug lopped off and the raw end looped back and soldered. Connected to an SDR dongle, the probe proved useful for tracking down noisy circuits.

[Charles]’ project takes that a step further by adding a camera that looks down upon the device under test. OpenCV is used to track the probe, which is moved over the DUT manually with the aid of an augmented reality display that shows coverage, while a Python script records the probe’s position and the RF power measurements. The video below shows the capture process and what the data looks like when reassembled as an overlay on top of the device.
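For a feel of how the pieces fit together, here is a minimal sketch of such a capture loop, assuming an RTL-SDR dongle driven by pyrtlsdr and a brightly colored probe tip found by simple HSV thresholding; [Charles]’ actual script on GitHub is the definitive version and will differ in the details:

```python
# Minimal probe-tracking capture loop (a sketch, not [Charles]' code).
# Assumes an RTL-SDR dongle via pyrtlsdr and a green marker on the probe tip.
import cv2
import numpy as np
from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2.4e6
sdr.center_freq = 100e6          # tune to the emission of interest

cap = cv2.VideoCapture(0)
samples = []                     # (x, y, power_dB) tuples for the heat map

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Isolate the probe tip by color; the bounds depend on your marker.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] > 0:
        x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]

        # Measure RF power at the probe's current position.
        iq = sdr.read_samples(16 * 1024)
        power_db = 10 * np.log10(np.mean(np.abs(iq) ** 2))
        samples.append((x, y, power_db))

    cv2.imshow("probe", frame)
    if cv2.waitKey(1) == 27:     # Esc to stop
        break

cap.release()
sdr.close()
```

Interpolating the scattered (x, y, power) samples onto a grid and alpha-blending the result over a still of the board would then produce the heat-map overlay seen in the video.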

Even if EMC testing isn’t your thing, this one seems like a lot of fun for the curious. [Charles] has kindly made the sources available on GitHub, so this is a great project to just knock out quickly and start mapping.

Continue reading “Camera Sees Electromagnetic Interference Using An SDR And Machine Vision”

Leigh Johnson’s Guide To Machine Vision On Raspberry Pi

We salute hackers who make technology useful for people in emerging markets. Leigh Johnson joined that select group when she accepted the challenge to build portable machine vision units that work offline and can be deployed for under $100 each. For hardware, a Raspberry Pi with camera plus screen can fit under that cost ceiling, and the software to give it sight is the focus of her 2018 Hackaday Superconference presentation. (Video also embedded below.)

The talk is a very concise 13 minutes, so Leigh flies through definitions of basic terms before quickly naming TensorFlow and Keras as the tools she used. The time saved there was spent explaining what convolutional neural networks are and how they work, just enough to prepare the audience. But all of that is really just background; the meat of the talk is the self-contained examples that Leigh has put together and made available online. I love to see that, since it means you can go beyond just watching and try it out for yourself. Continue reading “Leigh Johnson’s Guide To Machine Vision On Raspberry Pi”
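For a taste of what the Keras side of such a project looks like, here is a toy convolutional classifier sized for a Pi. It is purely illustrative; Leigh’s own examples are the real starting point:

```python
# A toy convolutional classifier in Keras, just to show the shape of
# the code. Input size and class count are arbitrary example values.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           input_shape=(96, 96, 3)),  # small input suits a Pi
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # ten example classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```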

Robot Solves Rubik’s Cube With One Hand Tied Behind Its Back

For all those who have complained about Rubik’s Cube solving robots in the past by dismissing purpose-built rigs that hold the cube in a non-anthropomorphic manner: checkmate.

The video below shows not only that a robot can solve the classic puzzle with mechanical hands, but that it can do so with just one of them – and with only three fingers at that. The [Yamakawa] lab at the University of Tokyo built the high-speed manipulator to explore the kinds of fine motions that humans perform without even thinking about them. The hand, guided by a 500-fps machine vision system, uses two opposing fingers to grip the lower part of the cube while the third flicks the top face counterclockwise. The entire cube can also be rotated on the vertical axis, or flipped 90° at a time. Piecing these moves together lets the hand solve the cube with impressive speed; extra points for the little “How’s that, human?” flick at the end.
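The interesting software problem hiding in there is compiling a standard solver’s move list down to those three primitives. Here is a rough sketch of what that translation might look like; the primitive names and the reorientation table are our own illustration, not the [Yamakawa] lab’s code, and a real planner would also track how each flip and rotation changes which physical face each letter refers to:

```python
# Hypothetical compilation of cube notation into three primitives:
# "flick" (top face counterclockwise), "rotate" (whole cube about the
# vertical axis), and "flip" (whole cube 90 degrees). Illustrative only;
# a real planner must update face labels after every reorientation.
def expand(move):
    """Map one face turn (e.g. 'R', "U'", 'F2') to primitive steps."""
    bring_to_top = {                  # example reorientation sequences
        "U": [],
        "F": ["flip"],
        "D": ["flip", "flip"],
        "B": ["flip", "flip", "flip"],
        "R": ["rotate", "flip"],
        "L": ["rotate", "rotate", "rotate", "flip"],
    }
    face, suffix = move[0], move[1:]
    flicks = {"": 3, "'": 1, "2": 2}[suffix]   # flick turns counterclockwise
    return bring_to_top[face] + ["flick"] * flicks

solution = ["R", "U'", "F2"]                   # example solver output
print([step for move in solution for step in expand(move)])
```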

It might not be the fastest cube solver, or one that’s built right into the cube itself, but there’s something about the dexterity of this hand that we really appreciate.

Continue reading “Robot Solves Rubik’s Cube With One Hand Tied Behind Its Back”

Real Or Fake? Robot Uses AI To Find Waldo

The last few weeks have seen a number of tech sites reporting on a robot which can find and point out Waldo in those “Where’s Waldo” books. Designed and built by ad agency Redpepper, the robot pairs a UARM Metal arm with a Raspberry Pi controlling the show.

A Logitech c525 webcam captures images, which are processed by the Pi with OpenCV, then sent to Google’s cloud-based AutoML Vision service. AutoML is trained with numerous images of Waldo, which are used to attempt a pattern match.  If a pattern is found, the coordinates are fed to PYUARM, and the UARM will literally point Waldo out.
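In rough outline, the loop probably looks something like this. The `find_waldo()` stub standing in for the AutoML Vision call, the pixel-to-millimeter mapping, and the pyuarm-style `set_position()` call are all placeholders of ours, not Redpepper’s code:

```python
# A sketch of the reported pipeline; the cloud call and the arm
# coordinate mapping are placeholders, not Redpepper's implementation.
import cv2
from pyuarm import UArm

def find_waldo(jpeg_bytes):
    """Hypothetical wrapper around the AutoML Vision prediction call;
    would return (x, y) pixel coordinates of the best match, or None."""
    return None   # the real version POSTs the image to Google's API

arm = UArm()
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    _, jpeg = cv2.imencode(".jpg", frame)
    hit = find_waldo(jpeg.tobytes())
    if hit is not None:
        px, py = hit
        # Placeholder pixel-to-millimeter mapping; a real build would
        # calibrate this against known reference points on the page.
        x_mm, y_mm = px * 0.5, py * 0.5
        arm.set_position(x_mm, y_mm, 10)   # hover the hand over Waldo
cap.release()
```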

While this is a totally plausible project, we have to admit a few things caught our jaundiced eye. The Logitech c525 has a field of view (FOV) of 69°. While we don’t have dimensions of the UARM Metal, it looks like the camera is less than a foot in the air. Amazon states that “Where’s Waldo Deluxe Edition” measures 10″ × 0.2″ × 12.5″. That means the open book will be 10″ × 25″. The robot is going to have a hard time imaging a surface that large in a single shot. What’s more, the c525 is a 720p camera, so there isn’t a whole lot of pixel density for pattern matching. Finally, there’s the rubber hand the robot uses to point out Waldo. Wouldn’t that hand block at least some of the camera’s view to the left?
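The geometry is easy to check: the width of the camera’s footprint at height h is 2·h·tan(FOV/2), and even granting the full 69° (a diagonal spec) as horizontal coverage, the numbers come up short:

```python
# Quick sanity check on the field-of-view claim.
import math

fov_deg = 69       # Logitech c525 diagonal field of view
height_in = 12     # generous guess at camera height above the page

footprint = 2 * height_in * math.tan(math.radians(fov_deg / 2))
print(f"{footprint:.1f} inches across")   # about 16.5, short of 25
```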

We’re not going to jump out and call this one fake just yet — it is entirely possible that the robot took a mosaic of images and used that to pattern match. Redpepper may have used a bit of movie magic to make the process more interesting. What do you think? Let us know down in the comments!

Bringing Augmented Reality To The Workbench

[Ted Yapo] has big ideas for using Augmented Reality as a tool to enhance an electronics workbench. His concept uses a camera and projector system working together to detect objects on a workbench, and project information onto and around them. [Ted] envisions virtual displays from DMMs, oscilloscopes, logic analyzers, and other instruments projected onto a convenient place on the actual work area, removing the need to glance back and forth between tools and the instrument display. That’s only the beginning, however. A good camera and projector system could read barcodes on component bags to track inventory, guide manual PCB assembly by projecting which components go where, display reference data, and more.
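The glue in any camera-projector rig like this is the mapping between the two coordinate frames. For a flat bench, a single planar homography does the job; here is a minimal OpenCV sketch, assuming the calibration correspondences have already been gathered (say, by projecting markers and finding them in the camera image):

```python
# Camera-to-projector mapping via a planar homography. Assumes a flat
# bench and four or more matched calibration points (example values).
import cv2
import numpy as np

# Where the markers appear in the camera image vs. the projector
# pixels that drew them.
cam_pts = np.float32([[102, 88], [512, 95], [505, 390], [98, 384]])
proj_pts = np.float32([[0, 0], [1280, 0], [1280, 720], [0, 720]])

H, _ = cv2.findHomography(cam_pts, proj_pts)

# Any feature the camera detects, say a multimeter at (320, 240), can
# now be mapped to projector coordinates so a virtual readout can be
# drawn right next to the real instrument.
pt = cv2.perspectiveTransform(np.float32([[[320, 240]]]), H)
print(pt.ravel())
```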

An open-source, accessible machine vision system working in tandem with a projector would open a lot of doors. Fortunately [Ted] has prior experience in this area, having previously written the computer vision code for room-scale dynamic projection environments. That’s solid experience that he can apply to designing a workbench-scale system as his entry for The Hackaday Prize.

Rubik’s Robot So Fast It Looks Like A Glitch In The Matrix

From Ferraris to F-16s, some things just look fast. This Rubik’s Cube solving robot not only looks fast, it is fast: it solved a standard cube in 380 milliseconds. Blink during the video below and you’ll miss it — even on the high-speed footage we had trouble keeping track of the number of moves this solution took. It looked like about 20.

Beating the previous robot record of 637 milliseconds is just the icing on the cake of a very cool build undertaken by [Ben Katz]. He and his collaborator [Jared] put together a robot with a decidedly industrial look — aluminum extrusion chassis, six pancake servo motors with high-precision optical encoders, and polycarbonate panels for explosion containment which proved handy during development. The motors had to be modified to allow the encoders to be attached to the rear, and custom motor controllers were fabricated. [Jared] came up with a unique board to synchronize the six motors and prevent collisions between faces. Machine vision is provided by just two PlayStation Eye cameras; mounted at opposite corners of the enclosure, each camera can see three faces at a time. They had a little trouble distinguishing the red from the orange, which was solved with a Sharpie.
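That red/orange trouble is a classic cube-vision headache: the two hues sit only a few counts apart on OpenCV’s 0-179 hue scale and drift into each other under changing light. Here is a sketch of the kind of classification involved, with thresholds that are illustrative rather than lifted from this build:

```python
# Illustrative sticker classification in HSV space. The hue bands are
# example values; red and orange are close enough that lighting shifts
# them into each other, hence the Sharpie fix.
import cv2

HUE_BANDS = {            # OpenCV hue runs 0-179
    "red":    (0, 6),
    "orange": (7, 20),   # only a few counts away from red
    "yellow": (21, 35),
    "green":  (40, 85),
    "blue":   (90, 130),
}

def classify_sticker(bgr_patch):
    """Return a best-guess color name for one sticker patch."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.mean(hsv)[:3]
    if s < 60:                         # low saturation: white sticker
        return "white"
    for name, (lo, hi) in HUE_BANDS.items():
        if lo <= h <= hi:
            return name
    return "red" if h > 170 else "unknown"   # red wraps past hue 179
```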

[Ben] and [Jared] think they can shave a few milliseconds here and there with tweaks, but even as it is, this is a great lesson in optimization and integration. We’ve covered Rubik’s robots before, like this two-motor slow and steady design and this six-motor build that solves a cube in less than a second.

Continue reading “Rubik’s Robot So Fast It Looks Like A Glitch In The Matrix”

JeVois Machine Vision Camera Nails Demo Mode

JeVois is a small, open-source, smart machine vision camera that was funded on Kickstarter in early 2017. I backed it because cameras that embed machine vision elements are steadily growing more capable, and JeVois boasts an impressive range of features. It runs embedded Linux and can process video at high frame rates using OpenCV algorithms. It can run standalone, or as a USB camera streaming raw or pre-processed video to a host computer for further action. In either case it can communicate to (and be controlled by) other devices via serial port.
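Listening to it from the host side takes only a few lines. Here is a pyserial sketch, assuming a module configured to emit JeVois’ standardized “N2 id x y w h” style detection messages (the port name will vary by host):

```python
# Read detection messages from a JeVois camera over its serial-over-USB
# port. The "N2 id x y w h" format is an assumption about the module's
# configured output; adjust to match what your module actually emits.
import serial

with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        parts = line.split()
        if len(parts) != 6 or parts[0] != "N2":
            continue
        _, obj_id, x, y, w, h = parts
        print(f"saw {obj_id} at ({x}, {y}), size {w}x{h}")
```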

But none of that is what really struck me about the camera when I received my unit. What really stood out was the demo mode. The team behind JeVois nailed an effective demo mode for a complex device. That didn’t happen by accident, and the results are worth sharing.

Continue reading “JeVois Machine Vision Camera Nails Demo Mode”