Detect Cars Running Stop Signs (and Squirrels Running Across The Roof)

There’s a stop sign outside [Devin Gaffney]’s house that, apparently, no one actually stops at. To avoid the traffic and delays on a major thoroughfare, drivers cut through on the road behind his house, and he noticed that a lot of them didn’t bother to stop at the stop sign. He had a Raspberry Pi and a camera, so he set them up to detect the offending cars.

His setup is pretty standard – a Raspberry Pi and camera pointed out the window at the intersection. He’s running OpenCV and using machine learning to detect the cars and determine whether they ran the stop sign. His website has some nice charts showing when the violations occurred by hour and by day of the week. Also on the site are links you can use to help train the system: spotting cars, flagging cars that run the stop sign, judging whether a clip shows enough of the event to call a violation, and noting whether a car went the wrong way through the intersection.
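For a rough sense of how the motion-detection side of something like this might work, here’s a minimal sketch using OpenCV’s background subtraction: watch the stop-line region and flag any car-sized blob that clears it faster than a real stop would allow. The zone coordinates, thresholds, and filename are invented for illustration, and [Devin Gaffney]’s actual system relies on trained models and community labeling rather than this crude heuristic.

```python
import cv2

# A motion-in-a-zone heuristic: if a car-sized blob crosses the stop-line
# region in fewer frames than a full stop would take, flag it. The zone,
# thresholds, and filename below are made-up placeholders.
STOP_ZONE = (200, 300, 240, 80)   # x, y, w, h of the stop-line region
MIN_PIXELS = 1500                  # moving pixels that count as "a car"
STOP_FRAMES = 45                   # ~1.5 s at 30 fps for a legal stop

cap = cv2.VideoCapture("intersection.mp4")
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
frames_in_zone = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = STOP_ZONE
    mask = bg.apply(frame[y:y + h, x:x + w])
    car_present = cv2.countNonZero(mask) > MIN_PIXELS

    if car_present:
        frames_in_zone += 1
    elif frames_in_zone:
        # The blob has left the zone; decide whether it lingered long enough
        if frames_in_zone < STOP_FRAMES:
            print(f"possible violation: only {frames_in_zone} frames in zone")
        frames_in_zone = 0

cap.release()
```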

This is an interesting use of the Pi and OpenCV; there’s no guarantee that this will help the people of [Devin Gaffney]’s neighborhood, but it should at least give them some ammunition (assuming they want something done about the intersection). It’s a cheap and easy setup, and it’s nice to let the community have a hand in training the system. For more OpenCV, check out this article on taking the perfect jump shot or this one which tries to quantify cloudiness. Cool stuff.

[via reddit]

Continue reading “Detect Cars Running Stop Signs (and Squirrels Running Across The Roof)”

Ping Pong Ball-Juggling Robot

There aren’t too many sports named for the sound produced during the game. Even though serious practitioners properly call it “table tennis,” ping pong is probably the most obvious example. Fittingly, [Nekojiru] built a ping pong ball juggling robot that used those very acoustics to pinpoint the location of the ball relative to the robot. Not satisfied with his efforts there, he moved on to a visual solution and built a new juggling rig that uses computer vision instead of sound to keep a ping pong ball aloft.

The main controller is a Raspberry Pi 2 with a Pi camera module attached. After some mishaps with the planned IR vision system, [Nekojiru] decided to use green light to illuminate the ball. He notes that OpenCV probably wouldn’t have worked for him because it isn’t fast enough for the 90 fps required to keep the ping pong ball bouncing. An algorithm takes the incoming data from this system, extracts 3D information about the ball, and directs the paddle to strike the ball in a particular way.

If you’ve ever wanted to get into real-time object tracking, this is a great project to look over. The control system is well polished and the robot itself looks almost professionally made. Maybe it’s possible to build something similar to test [Nekojiru]’s hypothesis that OpenCV isn’t fast enough for this. If you want to get started in that realm of object tracking, there are some great projects that make use of that piece of software as well.
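If you do want to test that hypothesis, a reasonable first step is to benchmark a plain color-threshold tracker and see what frame rate it manages on a Pi. Here’s a rough sketch; the HSV range and capture settings are guesses, and this is not [Nekojiru]’s pipeline.

```python
import time
import cv2
import numpy as np

# A rough frame-rate benchmark of a plain HSV-threshold ball tracker.
# The green range below is a guess; tune it to the actual illumination.
LOWER_GREEN = np.array([45, 80, 80])
UPPER_GREEN = np.array([75, 255, 255])

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)   # small frames help frame rate
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

frames, start = 0, time.time()
while frames < 300:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # ball centroid
    frames += 1

print(f"{frames / (time.time() - start):.1f} fps")
```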

Counting Laps And Testing Products With OpenCV

It’s been about a year and a half since the Batteroo, formerly known as Batteriser, was announced as a crowdfunding project. The premise is a small sleeve that goes around AA and AAA batteries, boosting the voltage to extract more life out of them. [Dave Jones] at EEVblog was one of many people to question the product, which claimed to boost battery life by 800%.

Batteroo did manage to do something many crowdfunding projects can’t: deliver a product. Now that the sleeves are arriving to backers, people are starting to test them in the wild. In fact, there’s an entire thread of tests happening over on EEVblog.

One test being run is a battery-powered train, running around a track until the battery dies completely. [Frank Buss] wanted to run this test but didn’t want to manually count the laps the train made, so he whipped up a script in Python and OpenCV to automate the counting.

The script measures laps by setting two zones on the track. When the train enters the first zone, the counter is armed. When it passes through the second zone, the lap is recorded. Each lap time is kept, ensuring good data for comparing the Batteroo against a normal battery.
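Here’s a minimal sketch of that two-zone approach for anyone who wants to try the same trick; the zone coordinates, motion threshold, and filename are placeholders rather than [Frank Buss]’s actual values.

```python
import cv2

# Two-zone lap counting: motion in zone A arms the counter, and motion in
# zone B records the lap. Coordinates and threshold are placeholders.
ZONE_A = (100, 200, 80, 80)   # x, y, w, h -- arms the counter
ZONE_B = (400, 200, 80, 80)   # crossing this records the lap

def motion_in(mask, zone, threshold=500):
    x, y, w, h = zone
    return cv2.countNonZero(mask[y:y + h, x:x + w]) > threshold

cap = cv2.VideoCapture("train.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
armed, laps, last_lap_frame, frame_no = False, 0, 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_no += 1
    mask = bg.apply(frame)

    if motion_in(mask, ZONE_A):
        armed = True
    elif armed and motion_in(mask, ZONE_B):
        laps += 1
        print(f"lap {laps}: {(frame_no - last_lap_frame) / fps:.2f} s")
        last_lap_frame, armed = frame_no, False
```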

The script is a good example for people wanting to play with computer vision, and the source is available on GitHub. As for the Batteroo, we’ll await further test results before passing judgement, but we’re not holding our breath. After all, the train ran half as long when using a Batteroo.

Simon Says Smile, Human!

The bad news is that when our robot overlords come to oppress us, they’ll be able to tell how well they’re doing just by reading our facial expressions. The good news? Silly computer-vision-enhanced party games!

[Ricardo] wrote up a quickie demonstration, mostly powered by OpenCV and Microsoft’s Emotion API, that scores your ability to mimic emoticon faces. So when you get shown a devil-with-devilish-grin image, you’re supposed to make the same face convincingly enough to fool a neural network classifier. And hilarity ensues!
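For a rough idea of the plumbing involved, here’s a bare-bones sketch of the capture-and-classify step: grab a webcam frame with OpenCV and post it to an emotion-recognition endpoint. The endpoint URL, subscription key, and response format are assumptions based on how the Emotion API was documented at the time; none of this is [Ricardo]’s actual code.

```python
import cv2
import requests

# Placeholder endpoint and key -- check the current Azure Cognitive
# Services docs, since the Emotion API has changed over the years.
ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com/emotion/v1.0/recognize"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    "Content-Type": "application/octet-stream",
}

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    ok, jpg = cv2.imencode(".jpg", frame)
    response = requests.post(ENDPOINT, headers=HEADERS, data=jpg.tobytes())
    faces = response.json()
    # Assumed response shape: a list of faces, each with a "scores" dict
    if isinstance(faces, list) and faces:
        scores = faces[0]["scores"]          # e.g. {"happiness": 0.93, ...}
        target = "happiness"                 # the emoticon you were shown
        print(f"{target} score: {scores.get(target, 0):.2f}")
```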

Continue reading “Simon Says Smile, Human!”

Zeroing CNC Mills With OpenCV

For [Jay] and [Ricardo]’s final project for [Dr. Bruce Land]’s ECE4760 course at Cornell, they tackled a problem that is the bane of all machinists. Their project finds the XY zero of a part in a CNC machine using computer vision, vastly reducing the time it takes to set up a workpiece and giving us yet another reason to water down the phrase ‘Internet of Things’ by calling this the Internet of CNC Machines.

For the hardware, [Jay] and [Ricardo] used a PIC32 to interface with an Arducam module, a WiFi module, and an inductive sensor for measuring the distance to the workpiece. All of this was brought together on a PCB specifically designed to be single-sided (smart!), and tucked away in an enclosure that can be easily attached to the spindle of a CNC mill. This contraption looks down on a workpiece and uses OpenCV to find the center of a hole in a fixture. When the center is found, the mill is zeroed on its XY axis.

The software side is simpler than it might sound; no OpenCV processing runs on the microcontroller. Detecting the center of the bore, for instance, happens on a laptop running a few Python scripts. The mill attachment communicates with the laptop over WiFi and sends over a few images from the downward-facing camera. From there, the laptop detects the center of the bore in the fixture plate and generates some G-code to send over to the mill.
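Finding the center of a bore in an image is a classic job for a Hough circle transform, so a sketch of that step might look something like the code below. The parameters, filename, and pixels-per-millimeter calibration are all made up, and the real project also has to contend with the camera/spindle misalignment mentioned below, which this ignores.

```python
import cv2

# Find the circular bore in a downward-facing image and report its offset
# from the image center as a relative G-code move. Parameters and the
# pixels-per-mm scale are hypothetical calibration values.
PX_PER_MM = 20.0

img = cv2.imread("fixture.jpg", cv2.IMREAD_GRAYSCALE)
blur = cv2.medianBlur(img, 5)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                           param1=100, param2=40, minRadius=20, maxRadius=200)

if circles is not None:
    cx, cy, r = circles[0][0]                       # strongest circle found
    dx_mm = (cx - img.shape[1] / 2) / PX_PER_MM     # offset from frame center
    dy_mm = (img.shape[0] / 2 - cy) / PX_PER_MM     # image y axis is inverted
    # Emit a relative move to put the spindle over the bore center
    print(f"G91\nG0 X{dx_mm:.3f} Y{dy_mm:.3f}")
```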

While the device works remarkably well, and is able to center the mill fairly quickly and without a lot of user intervention, there were a few problems. The camera is not perfectly aligned with the axis of the spindle, making the math harder than it should be. Also, the enclosure isn’t built to survive an environment where coolant is sprayed everywhere. Those are small quibbles, and both problems could be addressed by designing and printing another enclosure. The device works, though, and really cuts down on the time it takes to zero out a mill.

You can check out the video description of the build below.

Continue reading “Zeroing CNC Mills With OpenCV”

Use Machine Learning To Identify Superheroes And Other Miscellany

[Massimiliano Patacchiola] writes this handy guide on using a histogram intersection algorithm to identify different objects. In this case, lego superheroes. All you need to follow along are eyes, Python, a computer, and a bit of machine learning magic.

He gives a good introduction to the idea. You take a histogram of the colors in a properly cropped and filtered photo of the object you want to identify. You then feed that into a neural network and train it to identify the different superheroes by color. When you feed it a new image later, it will compare the new image’s histogram to its model and output confidences as to which set it belongs to.
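The histogram intersection itself is only a few lines of OpenCV. Here’s a minimal sketch comparing two cropped minifig photos; the filenames and bin count are placeholders, and [Massimiliano Patacchiola]’s guide builds a proper model on top of this kind of comparison.

```python
import cv2

# Compare the color histograms of two images with histogram intersection.
# Filenames and bin count are placeholders for illustration.
def color_histogram(path, bins=8):
    img = cv2.imread(path)
    hist = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

known = color_histogram("batman_reference.jpg")
query = color_histogram("mystery_minifig.jpg")

# Intersection: sum of element-wise minima; higher means more similar
similarity = cv2.compareHist(known, query, cv2.HISTCMP_INTERSECT)
print(f"intersection score: {similarity:.3f}")
```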

This is a useful thing to know. While a lot of vision algorithms try to make geometric assertions about the things they see, adding color to the mix can certainly help your friendly robot project recognize friend from foe.


Running Intel TBB On A Raspberry Pi

The usefulness of Raspberry Pis seems almost limitless, with new applications being introduced daily and with no end in sight. But, as versatile as they are, it’s no secret that Raspberry Pis are still lacking in pure processing power. So, some serious optimization is needed to squeeze as much power out of the Raspberry Pi as possible when you’re working on processor-intensive projects.

The simplest way to accomplish this optimization, of course, is to reduce what’s running down to the essentials. For example, there’s no sense in running a GUI if your project doesn’t even use a display. Another strategy, however, is to ensure that you’re actually using all of the available processing power that the Raspberry Pi offers. In [sagiz]’s case, that meant using Intel’s open source Threading Building Blocks to achieve better parallelism in his OpenCV project.
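Before going down that road, it’s worth checking whether your existing OpenCV build already has TBB enabled; building OpenCV from source with the -D WITH_TBB=ON CMake option is the usual way to get it. A quick sanity check from Python might look like this (assuming the cv2 bindings are installed):

```python
import cv2

# Report which parallel framework OpenCV was built with and how many
# worker threads it will use -- handy before and after rebuilding with TBB.
build_info = cv2.getBuildInformation()
parallel_lines = [line.strip() for line in build_info.splitlines()
                  if "Parallel framework" in line or "TBB" in line]
print("\n".join(parallel_lines))
print("OpenCV worker threads:", cv2.getNumThreads())
```

If the build information reports something other than TBB as the parallel framework, a rebuild is likely in order.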

Continue reading “Running Intel TBB On A Raspberry Pi”