An Over-engineered LED Sign Board

Never underestimate the ability of makers to overthink and over-engineer the simplest of problems while demonstrating human ingenuity. The RGB LED sign made by [Hans and team] over at the [Hackheim hackerspace] in Trondheim is a testament to this fact.

As you would expect, WS2812 RGB LEDs illuminate the sign, and in this particular construction an individual strip is responsible for each character. Powered by an ESP32 running FreeRTOS, the sign communicates over MQTT, and each letter gets a copy of the 6 x 20 framebuffer representing the color pattern to be displayed. A task on the ESP32 then calculates the color value each LED should show.
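To make the transport concrete, here is a minimal sketch of what a remote client could look like; the topic names, letter addressing, and raw 6 x 20 RGB payload layout are assumptions for illustration, not details from the actual firmware.

```python
# Minimal sketch (assumed topic names and payload layout, not the project's protocol).
# Publishes a 6 x 20 RGB framebuffer to one MQTT topic per letter; each ESP32 task
# subscribed to its letter's topic would then map the buffer onto its own strip.
import paho.mqtt.client as mqtt  # paho-mqtt 1.x style constructor below

ROWS, COLS = 6, 20

def make_framebuffer(color):
    """Fill the whole 6 x 20 buffer with a single (r, g, b) color."""
    return [[color for _ in range(COLS)] for _ in range(ROWS)]

def publish_framebuffer(client, letter_index, framebuffer):
    # Flatten to raw bytes: row-major, 3 bytes per pixel.
    payload = bytes(c for row in framebuffer for pixel in row for c in pixel)
    client.publish(f"sign/letter/{letter_index}/framebuffer", payload)

client = mqtt.Client()
client.connect("localhost", 1883)
for i in range(8):                       # e.g. an 8-letter sign
    publish_framebuffer(client, i, make_framebuffer((255, 32, 0)))
client.disconnect()
```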

The real question is how to calibrate the distributed strings of LEDs so that LEDs on adjacent letters of the sign display the correct extrapolated value. The answer is to use OpenCV to build a map of the LEDs, turning their two-dimensional layout into a lookup table. A Python script sends a command to illuminate a single LED, and an image captured with OpenCV records that LED's position. This is repeated for every LED to generate a map that is then used in the ESP32 firmware. How cool is that?
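A minimal sketch of that calibration loop might look something like the following; the MQTT control topic, LED count, and camera index are assumptions for illustration rather than the project's actual interface.

```python
# Sketch of the OpenCV calibration pass (assumed MQTT control topic and payload).
# Light one LED at a time, grab a frame, and record the brightest pixel as that
# LED's position; the resulting table maps LED index -> (x, y) in sign coordinates.
import time
import cv2
import paho.mqtt.client as mqtt

NUM_LEDS = 6 * 20 * 8            # assumed: 6 x 20 pixels per letter, 8 letters
client = mqtt.Client()           # paho-mqtt 1.x style constructor
client.connect("localhost", 1883)
camera = cv2.VideoCapture(0)

led_map = {}
for led in range(NUM_LEDS):
    client.publish("sign/calibrate/light_one", str(led))   # assumed control topic
    time.sleep(0.2)                                         # let the LED come on
    ok, frame = camera.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (11, 11), 0)              # suppress sensor noise
    _, _, _, max_loc = cv2.minMaxLoc(gray)                  # brightest point wins
    led_map[led] = max_loc

camera.release()
print(led_map)
```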

And if you are wondering about the code, it is up on [GitHub], and we would love to see someone take this up a level. The calibration code, as well as the remote client and ESP32 firmware, is all there for your hacking pleasure.

It's been a while since we have seen OpenCV in action, as with the Motion Tracking Turret and Face Recognition projects. The possibilities seem endless. Continue reading “An Over-engineered LED Sign Board”

Robot Sorts Beads By Color

If you know anyone who does crafts, they probably have a drawer with a few million beads loose and mixed together. You’ll sort them out one day, right? Probably not. Unless, of course, you build a robot to do the dirty work for you. That’s what [Kalfalfa] did, using some Phidgets boards, a camera and OpenCV. You can see a video of the cardboard machine doing its thing below.

Maybe it is because we are more electronics-minded, but we were impressed with the mechanism to grab just one bead at a time from the hopper. If you watch the video, you’ll see what we mean. However, sometimes a bead jams and a magnetic sensor figures that out so the controller can reverse a bit and try again.
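The write-up doesn't go deep into the vision side, but color sorting with OpenCV usually boils down to something like the sketch below; the HSV thresholds and camera index here are assumptions for illustration, not values from [Kalfalfa]'s build.

```python
# Rough sketch of OpenCV color classification for a single bead in view
# (assumed HSV ranges; a real build would tune these per bead color).
import cv2
import numpy as np

# Hue ranges (OpenCV hue runs 0-179) for a few example bead colors; assumptions.
COLOR_RANGES = {
    "red":   ((0,   120, 70), (10,  255, 255)),
    "green": ((40,  80,  70), (80,  255, 255)),
    "blue":  ((100, 120, 70), (130, 255, 255)),
}

def classify_bead(frame_bgr):
    """Return the color whose HSV mask covers the most pixels in the frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    best, best_count = "unknown", 0
    for name, (lo, hi) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        count = cv2.countNonZero(mask)
        if count > best_count:
            best, best_count = name, count
    return best

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(classify_bead(frame))
cap.release()
```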

Continue reading “Robot Sorts Beads By Color”

Real Or Fake? Robot Uses AI To Find Waldo

The last few weeks have seen a number of tech sites reporting on a robot which can find and point out Waldo in those “Where’s Waldo” books. Designed and built by Redpepper, an ad agency, the robot arm is a UARM Metal, with a Raspberry Pi controlling the show.

A Logitech c525 webcam captures images, which are processed on the Pi with OpenCV and then sent to Google’s cloud-based AutoML Vision service. AutoML is trained with numerous images of Waldo, which it uses to attempt a pattern match. If a match is found, the coordinates are fed to PYUARM, and the UARM will literally point Waldo out.
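In rough terms, the loop probably looks something like the sketch below; the `query_automl` helper is a hypothetical stand-in for the AutoML Vision prediction call, and the coordinate handling is an assumption rather than Redpepper's code.

```python
# Sketch of the capture -> detect -> point pipeline. query_automl() is a
# hypothetical wrapper around the AutoML Vision prediction call, and the
# pixel-to-arm coordinate mapping is a placeholder, not the real project code.
import cv2

def query_automl(jpeg_bytes):
    """Hypothetical helper: would send the image to AutoML Vision and return a
    bounding box (x, y, w, h) for Waldo, or None if no confident match."""
    return None   # placeholder so the sketch runs end to end

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    _, jpg = cv2.imencode(".jpg", frame)
    box = query_automl(jpg.tobytes())
    if box is not None:
        x, y, w, h = box
        cx, cy = x + w // 2, y + h // 2       # center of the detection, in pixels
        # A real build would convert (cx, cy) into arm coordinates and hand
        # them to pyuarm; here we just report where the arm should point.
        print(f"Waldo found around pixel ({cx}, {cy})")
    else:
        print("No Waldo in this frame")
```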

While this is a totally plausible project, we have to admit a few things caught our jaundiced eye. The Logitech c525 has a field of view (FOV) of 69°. While we don’t have dimensions of the UARM Metal, it looks like the camera is less than a foot in the air. Amazon states that “Where’s Waldo Delux Edition” is 10″ x 0.2″ x 12.5″. That means the open book will be roughly 10″ x 25″. The robot is going to have a hard time imaging a surface that large in a single shot. What’s more, the c525 is a 720p camera, so there isn’t a whole lot of pixel density for pattern matching. Finally, there’s the rubber hand the robot uses to point out Waldo. Wouldn’t that hand block at least some of the camera’s view to the left?
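To put a rough number on that, here is the back-of-the-envelope geometry; treating the 69° figure as the horizontal field of view and assuming a 12-inch camera height are both our assumptions, not measurements.

```python
# Back-of-the-envelope check: how wide a strip can the camera see from ~12 inches up?
# Assumes the 69-degree spec is the horizontal field of view (it may be diagonal,
# which would make the usable coverage even narrower).
import math

fov_deg = 69.0
height_in = 12.0
coverage = 2 * height_in * math.tan(math.radians(fov_deg / 2))
print(f"Horizontal coverage: about {coverage:.1f} inches")   # roughly 16.5 inches
```

That is well short of a 25-inch open spread, which is why a single frame seems unlikely to cover the whole book.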

We’re not going to jump out and call this one fake just yet — it is entirely possible that the robot took a mosaic of images and used that to pattern match. Redpepper may have used a bit of movie magic to make the process more interesting. What do you think? Let us know down in the comments!

Replace Your Calipers With A Microscope And Image Analysis

Getting a good measurement is a matter of using the right tool for the job. A tape measure and a caliper are both useful tools, but they’re hardly interchangeable for every task. Some jobs call for a hands-off, indirect way to measure small distances, which is where this image analysis measuring technique can come in handy.

Although it appears [Saulius Lukse] purpose-built this rig, which consists of a microscope lens on a digital camera mounted to the Z-axis of a small CNC machine, we suspect that anything capable of accurately and smoothly moving a camera vertically could be used. The idea is simple: the height of the camera over the object to be measured is increased in fine increments, with an image acquired in OpenCV at each stop. A Laplacian filter is applied to assess the sharpness of each image, which, when plotted against the frame number, shows peaks where the image is most in focus. If you know the distance the lens traveled between peaks, you can estimate the height of the object. [Saulius] measured a coin using this technique and it was spot on compared to a caliper. We could see this method being useful for getting an accurate vertical profile of a more complex object.
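A minimal sketch of the per-frame sharpness measurement might look like this; the variance-of-Laplacian focus metric, file layout, and Z step size are assumptions about the approach, not [Saulius]'s actual script.

```python
# Sketch: score a stack of images (one per Z step) by sharpness and report
# which frames are most in focus. Uses the common variance-of-Laplacian focus
# metric, an assumption about the method rather than code from the project.
import glob
import cv2

def sharpness(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

frames = sorted(glob.glob("stack/frame_*.png"))     # assumed file layout
scores = [sharpness(f) for f in frames]

# Crude peak picking: a frame sharper than both neighbours is a focus peak.
peaks = [i for i in range(1, len(scores) - 1)
         if scores[i] > scores[i - 1] and scores[i] > scores[i + 1]]

step_mm = 0.05                                       # assumed Z step per frame
if len(peaks) >= 2:
    height = (peaks[-1] - peaks[0]) * step_mm
    print(f"Focus peaks at frames {peaks}; estimated height {height:.2f} mm")
```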

From home-brew lidar to detecting lightning in video, [Saulius] has an interesting skill set at the intersection of optics and electronics. We’re looking forward to what he comes up with next.

Alexa, Attack Intruders

If our doom at the hands of our robot overlords is coming, I for one welcome the chance to get a preview of how they might go about it. That’s the idea behind Project Icarus, an Alexa-enabled face-tracking Nerf turret. Designed by [Nick Engmann], this impressive (or terrifying) project is built around a Nerf Vulcan, a foam-dart-firing machine gun mounted on a panning turret that is hidden behind a drop-down cabinet door. This is connected to a Pi Zero equipped with a Pi camera. The Zero is running OpenCV and Google Firebase, which connects it with Amazon’s Alexa service.

It works like this: you say “Alexa, open Project Icarus”. Through the Alexa skill that [Nick] created, this connects to the Pi and starts the system. If you then say “Alexa, activate alpha”, it triggers a relay to open the cabinet and the Nerf gun starts panning around, while the camera mounted on the top of it searches for faces. The command “Alexa, activate beta” triggers the Nerf to open fire.
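The face-tracking half of that loop is the kind of thing OpenCV's stock Haar cascade handles well. The sketch below shows the general idea, with the servo and trigger side reduced to a print statement, since the real pin mapping and fire logic are [Nick]'s own.

```python
# Sketch of the face-tracking loop on the Pi: find a face, compute how far it is
# from the center of the frame, and report the pan correction. Servo and trigger
# control are left out; those details are specific to the build. Stop with Ctrl-C.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces):
        x, y, w, h = faces[0]
        error = (x + w / 2) - frame.shape[1] / 2   # pixels off-center
        print(f"Pan correction: {error:+.0f} px")  # would drive the turret's pan servo
```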

Continue reading “Alexa, Attack Intruders”

Training The Squirrel Terminator

Depending on which hemisphere of the Earth you’re currently reading this from, summer is finally starting to fight its way to the surface. For the more “green” of our readers, that can mean it’s time to start making plans for summer gardening. But as anyone who’s ever planted something edible can tell you, garden pests such as squirrels are fantastically effective at turning all your hard work into a wasteland. Finding ways to keep them away from your crops can be a full-time job, but luckily it’s a job nobody will mind if automation steals from humans.

Kitty gets a pass

[Peter Quinn] writes in to tell us about the elaborate lengths he’s going to in order to keep bushy-tailed marauders away from his tomatoes this year. Long term, he plans to set up a non-lethal sentry gun to scare them away, but before he can get to that point he needs to perfect the science of automatically targeting his prey. At the same time, he wants to train the system well enough that it won’t fire on humans or other animals, such as cats and birds, which might visit his garden.

A Raspberry Pi 3 with a cheap webcam is used to surveil the garden and detect motion. When frames containing motion are detected, they are forwarded to a laptop which has enough horsepower to handle the squirrel detection through Darknet YOLO. [Peter] recognizes this isn’t an ideal architecture for real-time targeting of a sentry turret, but it’s good enough for training the system.
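A minimal sketch of the Pi side of that split, using simple frame differencing to decide which frames are worth shipping to the laptop, could look like this; the motion threshold, destination URL, and HTTP transport are assumptions for illustration, not [Peter]'s implementation.

```python
# Sketch of the Pi-side motion gate: difference successive frames and POST any
# frame with enough changed pixels to the laptop running Darknet YOLO.
# The threshold and the http://laptop.local endpoint are made-up placeholders.
import cv2
import requests

cap = cv2.VideoCapture(0)
_, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    prev_gray = gray
    changed = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
    if changed > 5000:                      # "enough motion"; tune for the scene
        _, jpg = cv2.imencode(".jpg", frame)
        requests.post("http://laptop.local:8000/frame", data=jpg.tobytes(),
                      headers={"Content-Type": "image/jpeg"})
```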

That training process, incidentally, is what [Peter] spends the most time explaining on the project’s Hackaday.io page. From the saga of getting the software environment up and running to figuring out how many pictures of squirrels in his yard he should feed the software for training, it’s an excellent case study in rolling your own image recognition system. After approximately 18 hours of training, he now has a system which is able to pick squirrels out from the foliage. The next step is hooking up the turret.

We’ve covered other automated turrets here on Hackaday, and we’ve seen automated devices for terrifying squirrels before, but this is the first time we’ve seen the concepts mixed.

Bring Deep Learning Algorithms To Your Security Cameras

AI is quickly revolutionizing the security camera industry. Several manufacturers sell cameras which use deep learning to detect cars, people, and other events. These smart cameras are generally expensive, though, compared to their “dumb” counterparts. [Martin] was able to bring these detection features to a standard camera with a Raspberry Pi and a bit of ingenuity.

[Martin’s] goal was to capture events of interest, such as a person on screen or a car in the driveway. The data for these events would then be published to an MQTT topic, along with some metadata such as confidence level. OpenCV is generally how these pipelines start, but [Martin’s] camera wouldn’t send RTSP images over TCP the way OpenCV requires, only RTSP over UDP. To solve this, [Martin] captures the video stream with FFmpeg. The deep learning AI magic is handled by the darkflow library, which is itself based upon Google’s TensorFlow.
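The glue between FFmpeg and the detector is worth sketching out. The snippet below shows one way to pull raw frames from an RTSP-over-UDP stream and publish detections to MQTT; the stream URL, frame size, topic name, and the `detect()` placeholder are assumptions for illustration rather than [Martin]'s code.

```python
# Sketch: read raw frames from FFmpeg (RTSP over UDP) and publish any detections
# to an MQTT topic. detect() is a placeholder for the darkflow/YOLO call; the
# URL, resolution, and topic are assumptions.
import json
import subprocess
import numpy as np
import paho.mqtt.client as mqtt

WIDTH, HEIGHT = 1280, 720
cmd = ["ffmpeg", "-rtsp_transport", "udp", "-i", "rtsp://camera.local/stream",
       "-f", "rawvideo", "-pix_fmt", "bgr24", "-"]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)

client = mqtt.Client()           # paho-mqtt 1.x style constructor
client.connect("localhost", 1883)

def detect(frame):
    """Placeholder for the darkflow detector; would return a list of dicts
    like {"label": "person", "confidence": 0.87}."""
    return []

while True:
    raw = proc.stdout.read(WIDTH * HEIGHT * 3)
    if len(raw) < WIDTH * HEIGHT * 3:
        break
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((HEIGHT, WIDTH, 3))
    for event in detect(frame):
        client.publish("camera/driveway/events", json.dumps(event))
```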

[Martin] tested out his recognition system with some cheap Chinese PTZ cameras, with the processing running on a remote Raspberry Pi. So far the results have been good: the system is able to recognize people, animals, and cars pulling into the driveway. His code is available on GitHub if you want to give it a spin yourself!