Hackaday Prize Entry: Automated Wildlife Recognition

Trail and wildlife cameras are commonly available nowadays, but the Wild Eye project aims to go beyond simply taking digital snapshots of critters. [Brenda Armour] uses a Raspberry Pi not only to take photos of wildlife that wanders into the camera’s field of view, but also to automatically identify and categorize the animals, using a visual recognition API from IBM glued together with Node-RED. The result is a system that captures an image when motion is detected, sends the image to the visual recognition API, and attempts to identify any wildlife based on the returned data.
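The original project wires this up as a Node-RED flow, but the capture-then-classify loop is easy to sketch in Python. In this illustrative version the motion threshold, the debounce delay, and the Watson endpoint details are all assumptions; check IBM’s current documentation for the real URL and authentication scheme.

```python
import time
import cv2
import requests

# Hypothetical endpoint/key -- the real URL and auth are in IBM's docs.
WATSON_URL = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify"
API_KEY = "your-api-key"

def classify(jpeg_path):
    """Send one snapshot to the visual recognition API, return its JSON verdict."""
    with open(jpeg_path, "rb") as f:
        resp = requests.post(WATSON_URL,
                             params={"api_key": API_KEY, "version": "2016-05-20"},
                             files={"images_file": f})
    return resp.json()

cap = cv2.VideoCapture(0)                      # Pi camera via V4L2
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    if cv2.countNonZero(mask) > 5000:          # enough changed pixels = a visitor
        cv2.imwrite("capture.jpg", frame)
        print(classify("capture.jpg"))
        time.sleep(30)                         # crude debounce between captures
```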

The visual recognition isn’t flawless, but a recent proof of concept shows promising results with crows, a cat, and a dog having been successfully identified. Perhaps when the project is ready to move deeper into the woods, elements from these solar-powered networked birdhouses (which also use the Raspberry Pi) could help cut some cords.

JeVois Machine Vision Camera Nails Demo Mode

JeVois is a small, open-source, smart machine vision camera that was funded on Kickstarter in early 2017. I backed it because cameras that embed machine vision elements are steadily growing more capable, and JeVois boasts an impressive range of features. It runs embedded Linux and can process video at high frame rates using OpenCV algorithms. It can run standalone, or as a USB camera streaming raw or pre-processed video to a host computer for further action. In either case it can communicate to (and be controlled by) other devices via serial port.
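That serial interface is the easiest way to poke at the camera from a host. As a rough sketch (the device path varies by OS, and the command names here are taken from the JeVois documentation as I recall it, so verify against the manual):

```python
import serial

# JeVois enumerates as a USB CDC-ACM serial device; adjust the port for your host.
port = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

def send(cmd):
    """Send one command line and print whatever the camera answers."""
    port.write((cmd + "\n").encode())
    while True:
        line = port.readline().decode(errors="replace").strip()
        if not line:
            break
        print(line)

send("info")            # firmware and version report
send("listmappings")    # enumerate the available vision modules
```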

But none of that is what really struck me about the camera when I received my unit. What really stood out was the demo mode. The team behind JeVois nailed an effective demo mode for a complex device. That didn’t happen by accident, and the results are worth sharing.

Continue reading “JeVois Machine Vision Camera Nails Demo Mode”

Detect Cars Running Stop Signs (and Squirrels Running Across the Roof)

There’s a stop sign outside [Devin Gaffney]’s house that, apparently, no one actually stops at. To avoid the traffic and delays on a major thoroughfare, drivers cut down the road behind his house, and he noticed that many of them didn’t bother to stop at the sign. He had a Raspberry Pi and a camera, so he set them up to detect the offending cars.

His setup is pretty standard: a Raspberry Pi and camera pointed out at the intersection. He’s running OpenCV and using machine learning to detect the cars and determine whether or not they ran the stop sign. His website has some nice charts showing when the violations occurred, by hour and by day of the week. The site also hosts links you can use to help train the system: identifying cars, identifying cars that run the stop sign, judging whether a clip shows enough to call a violation, and spotting cars going the wrong way through the intersection.
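[Devin Gaffney]’s detector is machine-learned, but the underlying idea of watching a zone of the frame can be illustrated with plain frame differencing. In this sketch the zone coordinates and pixel thresholds are invented, and a car is deemed to have stopped if it sits stationary at the line for about half a second:

```python
import cv2

ZONE = slice(300, 380), slice(200, 320)   # rows, cols of the stop line (made up)
MIN_STOP_FRAMES = 15                      # ~0.5 s at 30 fps counts as a stop

cap = cv2.VideoCapture("intersection.mp4")
ok, ref = cap.read()                      # assume the first frame shows empty road
ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
prev = ref_gray
stopped_frames = 0
present = False

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    presence = cv2.absdiff(gray, ref_gray)[ZONE]   # car vs. empty-road reference
    motion = cv2.absdiff(gray, prev)[ZONE]         # frame-to-frame movement
    prev = gray

    car_here = cv2.countNonZero(cv2.threshold(presence, 40, 255,
                                              cv2.THRESH_BINARY)[1]) > 800
    moving = cv2.countNonZero(cv2.threshold(motion, 25, 255,
                                            cv2.THRESH_BINARY)[1]) > 200

    if car_here and not moving:
        stopped_frames += 1                        # vehicle sitting at the line
    if present and not car_here:                   # vehicle just left the zone
        print("stopped" if stopped_frames >= MIN_STOP_FRAMES else "ran the sign")
        stopped_frames = 0
    present = car_here
```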

This is an interesting use of the Pi and OpenCV; there’s no guarantee that this will help the people of [Devin Gaffney]’s neighborhood, but it should at least give them some ammunition (assuming they want something done about the intersection). It’s a cheap and easy setup, and it’s nice to let the community have a hand in training the system. For more OpenCV, check out this article on taking the perfect jump shot or this one which tries to quantify cloudiness. Cool stuff.

[via reddit]

Continue reading “Detect Cars Running Stop Signs (and Squirrels Running Across the Roof)”

Ping Pong Ball-Juggling Robot

There aren’t too many sports named for the sound that is produced during the game. Even though it’s properly referred to as “table tennis” by serious practitioners, ping pong is probably the most obvious. Fittingly, [Nekojiru] built a ping pong ball juggling robot that used those very acoustics to pinpoint the location of the ball in relation to the robot. Not satisfied with his efforts there, he moved on to a visual solution and built a new juggling rig that uses computer vision instead of sound to keep a ping pong ball aloft.

The main controller is a Raspberry Pi 2 with a Pi camera module attached. After some mishaps with the planned IR vision system, [Nekojiru] decided to use green light to illuminate the ball. He notes that OpenCV probably wouldn’t have worked for him because it’s not fast enough for the 90 fps required to keep the ping pong ball bouncing. An algorithm then extracts 3D position information about the ball from the incoming frames and directs the paddle to strike the ball in a particular way.
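At those frame rates, even the per-frame detection has to be cheap. One way to do it without OpenCV is a straight numpy threshold-and-centroid pass over the raw frames; this sketch assumes an RGB frame and a brightly green-lit ball, and the threshold values are guesses rather than [Nekojiru]’s actual numbers:

```python
import numpy as np

def ball_centroid(frame_rgb):
    """Return the (x, y) pixel centroid of the green-lit ball, or None.

    frame_rgb: HxWx3 uint8 array straight off the camera. Pure numpy
    keeps the per-frame cost low enough to sustain 90 fps.
    """
    r = frame_rgb[:, :, 0].astype(np.int16)
    g = frame_rgb[:, :, 1].astype(np.int16)
    b = frame_rgb[:, :, 2].astype(np.int16)
    mask = (g > 180) & (g - r > 60) & (g - b > 60)   # thresholds are guesses
    ys, xs = np.nonzero(mask)
    if xs.size < 20:                                 # too few pixels: no ball
        return None
    return xs.mean(), ys.mean()
```

From a centroid plus the ball’s apparent size (or a second viewpoint), depth can be recovered, which is where the 3D information comes from.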

If you’ve ever wanted to get into real-time object tracking, this is a great project to look over. The control system is well polished and the robot itself looks almost professionally made. Maybe it’s possible to build something similar to test [Nekojiru]’s hypothesis that OpenCV isn’t fast enough for this. If you want to get started in that realm of object tracking, there are some great projects that make use of that piece of software as well.

Counting Laps and Testing Products with OpenCV

It’s been about a year and a half since the Batteroo, formerly known as Batteriser, was announced as a crowdfunding project. The premise is a small sleeve that goes around AA and AAA batteries, boosting the voltage to extract more life out of them. [Dave Jones] at EEVblog was one of many people to question the product, which claimed to boost battery life by 800%.

Batteroo did manage to do something many crowdfunding projects can’t: deliver a product. Now that the sleeves are arriving to backers, people are starting to test them in the wild. In fact, there’s an entire thread of tests happening over on EEVblog.

One test being run is a battery powered train, running around a track until the battery dies completely. [Frank Buss] wanted to run this test, but didn’t want to manually count the laps the train made. He whipped up a script in Python and OpenCV to automate the counting.

The script measures laps by setting two zones on the track. When the train enters the first zone, the counter is armed. When it passes through the second zone, the lap is recorded. Each lap time is kept, ensuring good data for comparing the Batteroo against a normal battery.
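[Frank Buss]’s actual script is on GitHub (linked below); a minimal re-creation of the two-zone scheme, with made-up zone coordinates and thresholds, might look something like this:

```python
import cv2

ZONE_A = slice(100, 160), slice(50, 110)     # arming zone (coordinates made up)
ZONE_B = slice(100, 160), slice(400, 460)    # counting zone

cap = cv2.VideoCapture("train.mp4")
bg = cv2.createBackgroundSubtractorMOG2()
armed = False
last_lap = None
laps = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)                     # foreground mask: the moving train
    in_a = cv2.countNonZero(fg[ZONE_A]) > 300
    in_b = cv2.countNonZero(fg[ZONE_B]) > 300

    if in_a:
        armed = True                         # train passed the first zone
    if in_b and armed:
        armed = False                        # count at most once per circuit
        now = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
        if last_lap is not None:
            laps.append(now - last_lap)      # per-lap time for the comparison
            print(f"lap {len(laps)}: {laps[-1]:.2f} s")
        last_lap = now
```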

The script gives a good example for people wanting to play with computer vision. The source is available on GitHub. As for the Batteroo, we’ll await further test results before passing judgement, but we’re not holding our breath. After all, the train ran half as long when using a Batteroo.

Simon Says Smile, Human!

The bad news is that when our robot overlords come to oppress us, they’ll be able to tell how well they’re doing just by reading our facial expressions. The good news? Silly computer-vision-enhanced party games!

[Ricardo] wrote up a quickie demonstration, mostly powered by OpenCV and Microsoft’s Emotion API, that scores your ability to mimic emoticon faces. So when you get shown a devil-with-devilish-grin image, you’re supposed to make the same face convincingly enough to fool a neural network classifier. And hilarity ensues!
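Grabbing a frame and scoring it is only a few lines. Here’s a minimal sketch of the scoring side, assuming the endpoint and response shape of Microsoft’s (since-retired) Emotion API; treat the URL, headers, and JSON layout as assumptions and check the current Cognitive Services docs:

```python
import cv2
import requests

# Assumed endpoint/headers from the old Emotion API docs -- verify before use.
URL = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"
HEADERS = {"Ocp-Apim-Subscription-Key": "your-key",
           "Content-Type": "application/octet-stream"}

def score_face(target="happiness"):
    """Grab one webcam frame and return how strongly it scores for `target`."""
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return 0.0
    jpeg = cv2.imencode(".jpg", frame)[1].tobytes()
    faces = requests.post(URL, headers=HEADERS, data=jpeg).json()
    if not faces:                              # no face found (or an API error)
        return 0.0
    # The API returns one dict per face, with per-emotion scores in [0, 1].
    return faces[0]["scores"].get(target, 0.0)

print(score_face("happiness"))                 # how convincing was that grin?
```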

Continue reading “Simon Says Smile, Human!”

Zeroing CNC Mills With OpenCV

For [Jay] and [Ricardo]’s final project for [Dr. Bruce Land]’s ECE4760 course at Cornell, they tackled a problem that is the bane of all machinists. Their project finds the XY zero of a part in a CNC machine using computer vision, vastly reducing the time it takes to set up a workpiece and giving us yet another reason to water down the phrase ‘Internet of Things’ by calling this the Internet of CNC Machines.

For the hardware, [Jay] and [Ricardo] used a PIC32 to interface with an Arducam module, a WiFi module, and an inductive sensor for measuring the distance to the workpiece. All of this was brought together on a PCB specifically designed to be single-sided (smart!), and tucked away in an enclosure that can be easily attached to the spindle of a CNC mill. This contraption looks down on a workpiece and uses OpenCV to find the center of a hole in a fixture. When the center is found, the mill is zeroed on its XY axis.

The software side is simpler than it might sound: rather than running OpenCV on the microcontroller, the heavy processing happens on a laptop running a few Python scripts. The mill attachment communicates with the laptop over WiFi and sends over a few images from the downward-facing camera. From there, the laptop detects the center of the bore in the fixture plate and generates some G-code to send over to the mill.
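The bore-finding step is a textbook Hough-circle job. Here’s a minimal sketch of what the laptop side could look like, assuming the camera axis coincides with the spindle (which, as noted below, it doesn’t quite) and using a made-up pixels-per-millimeter calibration and placeholder Hough parameters:

```python
import cv2

PX_PER_MM = 12.0                  # camera calibration constant (assumed)

img = cv2.imread("fixture.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)      # knock down sensor noise before the transform
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=40, minRadius=20, maxRadius=200)
if circles is not None:
    cx, cy, r = circles[0][0]
    # Offset from the image center (the spindle axis) to the bore center, in mm.
    dx = (cx - img.shape[1] / 2) / PX_PER_MM
    dy = (img.shape[0] / 2 - cy) / PX_PER_MM   # image y grows downward
    print(f"G91 G0 X{dx:.3f} Y{dy:.3f}")       # relative move over the bore
    print("G92 X0 Y0")                         # declare this spot the XY zero
```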

While the device works remarkably well, and is able to center the mill fairly quickly and without a lot of user intervention, there were a few problems. The camera is not perfectly aligned with the axis of the spindle, making the math harder than it should be. Also, the enclosure isn’t rated for an environment where coolant is sprayed everywhere. Those are small quibbles, and both could be fixed simply by designing and printing another enclosure. The device works, though, and really cuts down on the time it takes to zero out a mill.

You can check out the video description of the build below.

Continue reading “Zeroing CNC Mills With OpenCV”