Point Out Pup’s Packages With This Poop-Shooting Laser

When you’re lucky enough to have a dog in your life, you tend to overlook some of the more one-sided aspects of the relationship. While you are severely restrained with regard to where you eliminate your waste, your furry friend is free to roam the yard and dispense his or her nuggets pretty much at will, fully expecting you to follow along on cleanup duty. See what we did there?

And so dog people sometimes rebel against this lopsided power structure by leaving the cleanup till later — often much, much later, when locating the offending piles can be a bit difficult. So naturally, we now have this poop-shooting laser turret to helpfully guide you through your backyard cleanup sessions. It comes to us from [Caleb Olson], who leveraged his recent poop-posture monitor as the source of data for where exactly in the yard each deposit is located. To point them out, he attached a laser pointer to a cheap robot arm, and used OpenCV to help line up the bright green spot on each poop.
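For the curious, isolating a bright green laser dot in a camera frame is a classic OpenCV exercise. Below is a minimal Python sketch of the idea, assuming a fixed overhead camera; the HSV bounds are illustrative starting points, not [Caleb]'s actual code.

```python
import cv2
import numpy as np

def find_laser_dot(frame_bgr):
    """Return the (x, y) centroid of the brightest green blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Bright, saturated green; tune these bounds for your camera and pointer
    mask = cv2.inRange(hsv, (45, 120, 180), (85, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```

A simple control loop can then nudge the arm until the detected dot lands on the stored coordinates of each deposit.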

But wait, there’s more. [Caleb]’s code also optimizes his poop patrol route, minimizing the amount of pesky walking he has to do to visit each pile. And, the same pose estimation algorithm that watches the adorable [Twinkie] make her deposits keeps track of which ones [Caleb] stoops by, removing each from the worklist in turn. So now instead of having a dog control his life, he’s got a dog and a computer running the show. Perfect.
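Minimizing the walking between piles is the travelling salesman problem in miniature. We don't know exactly how [Caleb] solved it, but for backyard-sized inputs even a greedy nearest-neighbor pass, sketched here in Python, produces a perfectly serviceable route.

```python
import math

def plan_route(start, piles):
    """Order pile coordinates by repeatedly walking to the nearest one.
    Not a true TSP solution, but close enough for a backyard."""
    route, here, todo = [], start, list(piles)
    while todo:
        nearest = min(todo, key=lambda p: math.dist(here, p))
        todo.remove(nearest)
        route.append(nearest)
        here = nearest
    return route

print(plan_route((0, 0), [(5, 9), (1, 2), (8, 3)]))
```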

We joke, because poop, but really, this is a pretty neat exercise in machine learning. It does seem like the robot arm was a bit overkill, though — we’d have thought a simple two-servo turret would have been pretty easy to whip up.

Continue reading “Point Out Pup’s Packages With This Poop-Shooting Laser”

OpenCV Running On A Tiny Microcontroller

At first blush, it might seem like projects that make extensive use of computer vision or machine learning would need to be based on powerful computing platforms with plenty of clock cycles and memory to handle this type of application. While there is some truth to this, as the field progresses it becomes possible to experiment with these tools on low-power devices as well. Take, for example, this OpenCV project built entirely on an ESP32.

With that being said, there are some modifications that need to be made to the ESP32 in order to use OpenCV in any meaningful way. The most important of these is the use of the ESP32-D0WDQ6 module, which increases the available memory of the ESP32 to allow it to make better use of camera functions. Even then, the ESP32 can’t run the full OpenCV library, so a shrunken version of OpenCV is required before the device can run it natively. Once those two obstacles are out of the way, though, doing things like edge detection, as this project demonstrates, is well within the realm of possibility.
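The ESP32 port itself is C++, but the edge-detection pipeline it runs boils down to the same few calls you would make on a desktop. As a point of reference, here is the Python equivalent; the blur kernel and Canny thresholds are typical starting values, not the project's.

```python
import cv2

frame = cv2.imread("snapshot.jpg", cv2.IMREAD_GRAYSCALE)
assert frame is not None, "couldn't read snapshot.jpg"
frame = cv2.GaussianBlur(frame, (5, 5), 0)  # knock down sensor noise first
edges = cv2.Canny(frame, 50, 150)           # hysteresis thresholds to taste
cv2.imwrite("edges.png", edges)
```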

If running OpenCV on something as small as an ESP32 is possible, it is even easier to run on something orders of magnitude more powerful and yet still inexpensive, such as the Raspberry Pi. While the project’s code is available on its GitHub page for those interested, there are plenty of other OpenCV projects that we have featured on more powerful platforms as well, like this clock which falls off of the wall whenever someone looks at it.

Continue reading “OpenCV Running On A Tiny Microcontroller”

Engineering Vs Pigeons

We’ve all been there. Pigeons are generally pretty innocuous, but they do leave a mess. If you have a convertible or a bicycle or even just a clean car, you probably don’t want them hanging around. [Max] was tired of a messy balcony, so like you might approach any engineering problem, he worked his way through several possible solutions, starting with plastic crows and naturally ending with an automated water gun.

The resulting robotic water gun that targets pigeons with OpenCV is a dandy project, and while we don’t usually advocate shooting at neighborhood animals, we don’t think a little water will be any worse than the rain for the pigeons. The build started with a cheap electric water pistol. A Wemos D1 Mini ESP8266 development board provides the brainpower. The water pistol wouldn’t easily take rechargeable batteries, plus it is a good idea to separate the logic supply from the pump motors, so the D1 gets power from a USB power bank separate from the gun’s batteries.

That leaves the camera. An old iPhone 6S with a 3D-printed bracket feeds video to a Python script that uses OpenCV. It watches for changes between frames to detect that something is moving and fires the gun. It doesn’t appear that it actually tracks the pigeons, so maybe that’s a thought for version 2.
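[Max]'s exact detection code isn't spelled out, but classic frame differencing in OpenCV looks something like the Python sketch below; the pixel-count threshold and the firing hook are placeholders you would tune for your own balcony.

```python
import cv2

cap = cv2.VideoCapture(0)  # stand-in for the iPhone video feed
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev is None:
        prev = gray
        continue
    delta = cv2.absdiff(prev, gray)
    mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(mask) > 5000:  # enough pixels changed: probably a pigeon
        print("FIRE")                  # e.g. an HTTP request to the D1 Mini
    prev = gray
```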

Was it successful? Maybe, but it does seem like the pigeons learned to avoid it. We still think azimuth and elevation control on the gun would help.

Most of the time when we see pigeon hacking, it is to use them for nefarious purposes. [Max] should be glad he doesn’t have to deal with lions.

Learn Sign Language Using Machine Vision

Learning a new language is a great way to exercise the mind and learn about different cultures, and it’s great to have a native speaker around to improve the learning experience. Without one, it’s still possible to learn via videos, books, and software. The task gets much more complicated, though, when the language you’re learning isn’t spoken at all, like American Sign Language. This project allows users to learn the ASL alphabet with the help of computer vision and some machine learning algorithms.

The build uses a computer vision model based on MobileNetV2, trained to recognize each sign in the ASL alphabet. A sign is shown to the user on a screen, and the user needs to demonstrate it back to the computer in order to progress. To do this, OpenCV running on a Raspberry Pi with a PiCamera analyzes the user’s video frames in real time, and the user is rewarded when the correct sign is made.
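As a rough illustration of the inference side, a per-frame classification loop might look like the sketch below. The model filename, confidence threshold, and 26-letter label set are our assumptions for illustration, not the Glasgow team's code.

```python
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("asl_mobilenetv2.h5")  # hypothetical filename
labels = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
target = "A"  # the sign currently shown on screen

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # MobileNetV2 expects 224x224 RGB scaled to [-1, 1]
    x = cv2.resize(frame, (224, 224))[:, :, ::-1] / 127.5 - 1.0
    probs = model.predict(x[np.newaxis], verbose=0)[0]
    guess = labels[int(np.argmax(probs))]
    if guess == target and probs.max() > 0.9:
        print(f"Correct! That was {target}")
        break
```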

While this only works for alphabet signs in ASL currently, the team at the University of Glasgow that built this project is planning on expanding it to include other signs as well. We have seen other machines built to teach ASL in the past, like this one which relies on a specialized glove rather than computer vision.

Continue reading “Learn Sign Language Using Machine Vision”

Twitch And Blink Your Way Through Typing With This Facial Keyboard

For those that haven’t experienced it, the early days of parenthood are challenging, to say the least. Trying to get anything accomplished with a raging case of sleep deprivation is hard enough, but the little bundle of joy who always seems to need to be in physical contact with you makes doing things with your hands nigh impossible. What’s the new parent to do when it comes time to be gainfully employed?

Finding himself in such a boat, [Fletcher]’s solution was to build a face-activated keyboard to work around his offspring’s needs. Before you ask: no, voice recognition software wouldn’t work, at least according to the sleepy little boss who protests noisy awakenings. The solution instead was to first try OpenCV and the dlib facial recognition library to watch [Fletcher] blinking out Morse code. While that sorta-kinda worked, one’s blinkers can’t long endure such a workout, so he moved on to an easier set of gestures. Mouthing Morse code covers most of the keyboard, while a combination of eye, eyebrow, and other facial twitches and tics covers the rest, with MediaPipe’s Face Mesh doing the heavy lifting in terms of landmark detection.
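To get a feel for the landmark side of this, here is a minimal blink detector built on MediaPipe's Face Mesh. The landmark indices and openness threshold are assumptions to calibrate against your own camera, and a real Morse decoder would time the blinks rather than just print them.

```python
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
# Upper/lower lid landmarks for one eye; commonly cited indices from the
# canonical Face Mesh topology -- verify against your MediaPipe version.
TOP, BOTTOM = 159, 145

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        continue
    lm = results.multi_face_landmarks[0].landmark
    openness = abs(lm[TOP].y - lm[BOTTOM].y)  # normalized image coordinates
    if openness < 0.01:   # threshold is a guess; calibrate per user
        print("blink")    # feed dots and dashes into a Morse decoder here
```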

The resulting facial keyboard, aptly dubbed “CheekyKeys,” performed well enough for [Fletcher] to use for a skills test during an interview with a Big Tech Company. Imagining the interviewer on the other end watching him convulse his way through the interview was worth the price of admission, and we don’t even care if it was a put-on. Video after the break.

CheekyKeys is pretty cool, doing something with a webcam and Python that we thought would have needed a dedicated AI depth camera to accomplish. But perhaps the real hack here was how [Fletcher] taught himself Morse in fifteen minutes.

Continue reading “Twitch And Blink Your Way Through Typing With This Facial Keyboard”

CNC Toolpath Visualisation With OpenCV

[Tony Liechty] has been having a few issues getting into CNC machining — starting with a simple router, he’s tripped over the usual beginners’ problems, you know, things like alignment of the design to the workpiece shape, axis clipping, and workpiece/clamp collisions. He did the decent hacker thing and turned to some other technologies to help out, coming up with a rather neat way of using machine vision with OpenCV to preview the toolpath against an image of the workpiece in situ (video embedded below).

ChArUco boards (a combined chessboard and ArUco marker pattern) taped to the machine rails were used to give OpenCV a reference for where points in space lie with respect to the pattern field, enabling identification of pixel locations within the image of the rails. A homography transformation then links the two side references to an image of the workpiece. This transformation allows the system to determine the physical location of any pixel in the workpiece image, which can then be overlaid with an image of the desired toolpath. Feedback from the user then enables adjustments to the path, such as shifts or rotations, to counter any issue that can be seen. Fewer ‘silly’ clamping and positioning mistakes means less time wasted and less material in the scrap bin, and that can only be a good thing.
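The homography step is compact in OpenCV: given four or more pixel locations with known machine coordinates (courtesy of the ChArUco boards, in [Tony]'s case), mapping any pixel to the bed looks roughly like this. The point values here are made up for illustration.

```python
import cv2
import numpy as np

# Image points (pixels) of features whose machine coordinates (mm) are known
img_pts = np.float32([[412, 300], [1508, 288], [1522, 940], [398, 952]])
machine_pts = np.float32([[0, 0], [300, 0], [300, 200], [0, 200]])

H, _ = cv2.findHomography(img_pts, machine_pts)

def pixel_to_machine(u, v):
    """Map one image pixel to machine-bed coordinates in mm."""
    p = cv2.perspectiveTransform(np.float32([[[u, v]]]), H)
    return tuple(p[0, 0])

print(pixel_to_machine(960, 620))
```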

[Tony] says this code and setup is just a demo of the concept, but such ‘rough’ code could well be the start of something great; we shall see. Check out the realWorldGcodeSender GitHub if you want to play along at home!

We’ve seen a few uses of OpenCV for assisting with CNC applications, like this cool “you draw it, I’ll cut it” hack, and this method for using machine vision to zero a CNC mill in on the centre of a large hole.

Continue reading “CNC Toolpath Visualisation With OpenCV”

You Draw It, CNC Cuts It

[Jamie] aka [vector76] hit us with a line-tracing plugin for OctoPrint that cuts out whatever 2D shape you draw on a piece of wood. The plugin lets you skip the modeling step entirely, going straight from a CNC-mounted webcam that reads your scribbles and gives you a G-code toolpath in return. The code is on GitHub and there’s a demo video embedded below.

Under the hood, OpenCV is doing a lot of the image processing, including line detection, and the iterative “find the line” and “move the toolhead” steps really show off what computer vision can do. It starts off with a fiducial arrow for scale and orientation, then it moves the webcam around the scene. The user can enter the usual milling parameters: speeds, feeds, depth of cut, tool offset, milling direction, etc. And then it gets to work.
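Reduced to its essence, trace-to-toolpath is a threshold, a contour walk, and some G-code text. This Python sketch shows the skeleton; the scale factor, depth, and feed rates are placeholders, and [Jamie]'s plugin does far more, including the iterative camera moves.

```python
import cv2

img = cv2.imread("sketch.jpg", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

MM_PER_PX = 0.2  # assumed scale from the fiducial; depends on camera height
for c in contours:
    if cv2.contourArea(c) < 10:  # skip specks of noise
        continue
    pts = [(x * MM_PER_PX, y * MM_PER_PX) for [[x, y]] in c]
    print(f"G0 X{pts[0][0]:.2f} Y{pts[0][1]:.2f}")
    print("G1 Z-1.0 F100")  # plunge; depth and feed are placeholders
    for x, y in pts[1:]:
        print(f"G1 X{x:.2f} Y{y:.2f} F600")
    print("G0 Z5.0")        # retract before the next path
```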

Right now, it’s limited to paths with non-crossing lines, and probably with good contrast and a nice dark line — all the usual CV restrictions. But mounting a webcam to a CNC toolhead and using it for various pathing problems really opens up tons of possibilities: visual homing, workpiece edge finding, copying parts, custom fitting odd shapes, and more. This project is clearly an invitation to keep on hacking, an appetizer. Once you see the girl pirate robot that [Jamie]’s daughter made, you’ll get the idea.

We’ve seen a similar OpenCV approach used for finding the centers of bore holes, but while we’ve seen a few webcams used with laser cutters, the CNC mill applications seem largely untapped. Let us know in the comments if you’ve got some other good examples.

Continue reading “You Draw It, CNC Cuts It”