Protecting Your Home Against Potato Invaders

Not sure where the potatoes were sneaking in, [24Gospel] did what any decent hacker would do: strapped a camera to a Raspberry Pi, hacked a bit on OpenCV, and built himself a potato detection system. Now those pesky Russets can’t get into the house without tripping the tuber alarm.


OK, seriously. [24Gospel] works for a potato farm as a systems/software developer. (How big does a potato farm have to be to require a dedicated software guy?) His system is still a first step, but the goal is to grade the potatoes, record data about size and defects, and even tell different potato types apart. And he’s found decent success so far, especially for the money. We don’t often build projects that need to operate in hostile environments, but we appreciate the nice plastic case and rugged adjustable steel frame that supports the Pi and camera over the sorting bed.

Even more, we applaud the hacker spirit here. [24Gospel] is obviously working in a serious production environment, but still he’s trying out new things in an attempt to make it work better. While it would be impossible to quantify the impact of this kind of on-the-job ingenuity, we bet it’s not insignificant. Why don’t we see more documented workplace hacks around here? Would the unsung heroes please stand up?

[via /r/raspberry_pi]

Heatmap of vacuum cleaning robot

A Glimpse Into The Mind Of A Robot Vacuum Cleaner

What’s going through the mind of one of those autonomous vacuum cleaning robots as it traverses a room? There are different ways to find out, such as covering the floor with dirt and seeing what remains afterwards (a less desirable approach), or mounting an LED to the top and taking a long exposure photo. [Saulius] decided to do it by videoing his robot with a fisheye lens from near the ceiling and then making a heatmap of the result. Not satisfied with just a finished photo, he also made a video showing the path taken as the room is traversed, giving us a glimpse of the algorithm itself.

Looking down on the room and robot

The robot he used was the Vorwerk VR200, which he’d borrowed for testing. In preparation, he cleared the room and strategically placed a few obstacles, some of which he knew the robot wouldn’t get between. He started the camera and let the robot do its thing. The resulting video file was then loaded into some quickly written Python code that uses the OpenCV library to do background subtraction, normalizing, grayscaling, and then heatmapping. The individual frames were then rendered into an animated gif and the video, which you can see below.
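If you want to try something similar, a minimal sketch of that kind of pipeline might look like the following. The file name, subtractor settings, and blending weights here are our own guesses for illustration, not [Saulius]’s actual code:

```python
# Minimal sketch: subtract the static background, accumulate the robot's
# foreground mask over time, then colour-map the result into a heatmap.
import cv2
import numpy as np

cap = cv2.VideoCapture("robot_run.mp4")          # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

accumulator = None
background = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if background is None:
        background = frame.copy()                 # first frame as the "empty room" backdrop
        accumulator = np.zeros(frame.shape[:2], dtype=np.float32)

    mask = subtractor.apply(frame)                # foreground = the moving robot
    accumulator += (mask > 0).astype(np.float32)  # count how often each pixel was covered

# scale the visit counts to 0..255 and map them onto a colour gradient
norm = cv2.normalize(accumulator, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
heat = cv2.applyColorMap(norm, cv2.COLORMAP_JET)

# blend the heatmap with a grayscale view of the room for the final picture
gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)
gray_bgr = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
overlay = cv2.addWeighted(gray_bgr, 0.5, heat, 0.5, 0)
cv2.imwrite("heatmap.png", overlay)
```

Rendering the accumulator frame by frame instead of only at the end is what turns the still heatmap into the animated path video.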

Continue reading “A Glimpse Into The Mind Of A Robot Vacuum Cleaner”

Hackaday Prize Entry: AutoFan Saves Tired Drivers With Face Recognition

Long distance driving can be tedious at times. The glare of the sun and the greenhouse effect of all your car’s windows make it hot and dry. You turn on the fan, or air conditioning if you have it, and that brings relief. Soon enough you’ve got another problem: the cold, dry air is uncomfortable on your eyes. Eventually, as you become more tired, you find yourself needing the air on your face more and more to stay alert. You thus spend most of the journey fiddling with your vents or adjusting the climate controls. Wouldn’t it be great if the car could do all that for you?

AutoFan is a project from [hanno] that aims to automate this process intelligently. It has a fan with steerable louvres, driven by a Raspberry Pi 2 with an attached webcam. The Pi computes the position of the driver’s face and ensures the air from the fan is directed to one side of it. If it sees the driver’s blink rate increasing, it directs the air to their face, having detected that they are becoming tired.

The build logs go into detail on the mathematics of calculating servo angles and correcting for camera lens distortion in OpenCV. They also discuss the Python code used to take advantage of the multicore architecture, and to control the servos. The prototype fan housing can be seen in the video below the break, complete with an unimpressed-looking cat. For those of you interested in the code, he has made it available in a GitHub repository.
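If you’re curious what the lens correction and face-to-servo math might look like in practice, here’s a rough Python/OpenCV sketch. The camera matrix, distortion coefficients, and angle mapping below are placeholder values for illustration, not numbers from [hanno]’s build:

```python
# Sketch: undistort the webcam image, find the driver's face, and turn its
# horizontal position into a louvre servo angle.
import cv2
import numpy as np

# In a real build these would come from a one-off cv2.calibrateCamera() run
# with a chessboard target -- the values here are placeholders.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_to_servo_angle(frame, fov_deg=60.0):
    """Return a louvre angle aimed just beside the driver's face, or None."""
    frame = cv2.undistort(frame, K, dist)             # correct lens distortion first
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    cx = x + w / 2                                     # face centre, in pixels
    # map the horizontal pixel offset onto the camera's field of view
    offset = (cx - frame.shape[1] / 2) / frame.shape[1]
    angle = 90 + offset * fov_deg                      # 90 degrees = straight ahead
    return angle + 10                                  # aim slightly to one side of the face
```

Dropping the +10 offset is, in effect, the “tired driver” mode: the air goes straight at the face instead of past it.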

Continue reading “Hackaday Prize Entry: AutoFan Saves Tired Drivers With Face Recognition”

Chair Dances Like No One Is Watching

Although it might be more accurate to say that this chair dances because no one is watching, the result is still a clever project that [Igor], a maker-in-residence at the National Museum of Decorative Arts and Design in Norway, created recently. Blurring the lines between art, hack, and the ghosts from Super Mario, this chair uses an impressive array of features to “dance”, but only if no one is looking at it.

In order to get the chair to appear to dance, [Igor] added servo motors in all four legs to allow them to bend. A small non-moving dowel was placed on the inside of the leg to keep the chair from falling over during all of the action. It’s small enough that it’s not immediately noticeable from a distance, which helps maintain the illusion of a dancing chair.

From there, a Raspberry Pi 3 serves as the control center for the chair. It’s programmed in Python, runs OpenCV for face detection, and uses pigpio to control the leg servos. There’s also a web interface for watching the camera’s output, checking the face detection, and debugging the program. [Igor]’s chair can process up to 3 frames per second at 800×600 pixels.
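As a rough idea of how such a loop could be wired together, here’s a minimal sketch using OpenCV’s stock Haar cascade and pigpio. The GPIO pin and servo pulse widths are invented for illustration and aren’t from [Igor]’s code:

```python
# Sketch: dance only while no face is visible, freeze the moment someone looks.
import time
import cv2
import pigpio

LEG_SERVO_GPIO = 18                       # hypothetical BCM pin for one leg servo
pi = pigpio.pi()
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    frame = cv2.resize(frame, (800, 600))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.2, 5)

    if len(faces) == 0:
        # nobody watching: wiggle the leg between two positions
        pi.set_servo_pulsewidth(LEG_SERVO_GPIO, 1200)
        time.sleep(0.3)
        pi.set_servo_pulsewidth(LEG_SERVO_GPIO, 1800)
        time.sleep(0.3)
    else:
        # a face is in view: hold a neutral pose, like nothing ever happened
        pi.set_servo_pulsewidth(LEG_SERVO_GPIO, 1500)
```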

Be sure to check out the video after the break to see the chair in action. It’s an interesting piece of art, and if those dowels can support the weight of a person it would be a great addition to any home as well. If it’s not enough chair for you, though, there are some other more dangerous options out there.

Continue reading “Chair Dances Like No One Is Watching”

Abusing A Cellphone Screen With Solenoids Posts High Score

This Raspberry Pi 2 with computer vision and two solenoid “fingers” was getting absurdly high scores on a mobile game as of late 2015, but only recently has [Kristian] finished fleshing the project out with detailed documentation.

Developed for a course in image analysis and computer vision, this project wasn’t really about cheating at a mobile game. It wasn’t even about a robotic interface to a smartphone screen; it was a platform for developing and demonstrating the image analysis theory he was learning, and the computer vision portion is no hack job. OpenCV was used as a foundation for accessing the camera, but none of the built-in filters are used. All of the image analysis is implemented from scratch.

The game is simple. Humans and zombies move downward in two columns. Zombies (green) should get a screen tap but not humans. The Raspberry Pi camera takes pictures of the smartphone’s screen, to which an HSV filter is applied to filter out everything except green objects (zombies). That alone would be enough to get you some basic results, but not nearly good enough to be truly reliable and repeatable. Therefore, after picking out the green objects comes a whole chain of additional filtering. The details of that are covered on [Kristian]’s blog post, but the final report for the project (PDF) is where the real detail is.
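[Kristian] implemented his filters from scratch, but just for illustration, here’s roughly what that first green-picking stage looks like if you lean on OpenCV’s built-ins instead; the HSV thresholds and minimum blob size are guesses:

```python
# Sketch of the first stage only: an HSV threshold that keeps green (zombie) pixels.
import cv2
import numpy as np

frame = cv2.imread("screen_capture.png")          # one Pi camera shot of the phone screen
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# keep only pixels whose hue falls in the "green" band
lower_green = np.array([40, 80, 80])
upper_green = np.array([80, 255, 255])
mask = cv2.inRange(hsv, lower_green, upper_green)

# clean the mask up a little and find the zombie blobs
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    if cv2.contourArea(c) > 200:                  # ignore specks
        x, y, w, h = cv2.boundingRect(c)
        column = "left" if x < frame.shape[1] // 2 else "right"
        print("zombie candidate in the", column, "column")
```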

If you’re interested mainly in seeing a machine pound out flawless victories, the video below shows everything running smoothly. The pounding sounds make it seem like the screen is taking a lot of abuse, but [Kristian] mentions that’s actually noise from the solenoids and not a product of them battling the touchscreen. This setup can be easily adapted to test out apps on different models of phones — something that has historically cost quite a bit of dough.

If you’re interested in the nitty-gritty details of the reasons and methods used for the computer vision portions, be sure to go through [Kristian]’s GitHub repository, where everything about the project lives (including the aforementioned final report).

Continue reading “Abusing A Cellphone Screen With Solenoids Posts High Score”

The Most Immersive Pinball Machine: Project Supernova

Over at [Truthlabs], a 30-year-old pinball machine was diagnosed with a major flaw in its game design: it could only entertain one person at a time. [Dan] and his colleagues set out to change this, transforming the ol’ pinball legend “Firepower” into a spectacular, immersive gaming experience worthy of the 21st century.

A major limitation they wanted to overcome was screen size. A projector mounted to the ceiling should turn the entire wall behind the machine into a massive 15-foot playfield for anyone in the room to enjoy.


With so much space to fill, the team assembled a visual concept tailored to blend seamlessly with the original storyline of the arcade classic, studying the machine’s artwork and digging deep into the sci-fi archives. They then translated their ideas into 3D graphics utilizing Cinema4D and WebGL along with the usual designer’s toolbox. Lasers and explosions were added, ready to be triggered by game interactions on the machine.


To hook the augmentation into the pinball machine’s own game progress, they came up with an elegant solution, incorporating OpenCV and OCR, to read all five of the machine’s 7-segment displays from a single webcam. An Arduino inside the machine taps into the numerous mechanical switches and indicator lamps, keeping a Node.js server updated about pressed buttons, hits, the “Lane Change”, and plunged balls.
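Their exact OCR pipeline lives in the project write-up, but the core trick of reading a seven-segment digit can be sketched like this: binarize each digit’s region of interest, test which of the seven segment areas are lit, and look the on/off pattern up in a table. The region geometry and threshold below are illustrative only:

```python
# Sketch: decode one seven-segment digit from a binarised (white-on-black) crop.
import cv2

# segment name -> fractional box (x0, y0, x1, y1) inside a normalised digit ROI
SEGMENTS = {
    "top":          (0.2, 0.00, 0.8, 0.15),
    "top_left":     (0.0, 0.10, 0.2, 0.45),
    "top_right":    (0.8, 0.10, 1.0, 0.45),
    "middle":       (0.2, 0.42, 0.8, 0.58),
    "bottom_left":  (0.0, 0.55, 0.2, 0.90),
    "bottom_right": (0.8, 0.55, 1.0, 0.90),
    "bottom":       (0.2, 0.85, 0.8, 1.00),
}

# which segments are lit for each digit 0-9, in the order listed above
DIGITS = {
    (1, 1, 1, 0, 1, 1, 1): 0, (0, 0, 1, 0, 0, 1, 0): 1, (1, 0, 1, 1, 1, 0, 1): 2,
    (1, 0, 1, 1, 0, 1, 1): 3, (0, 1, 1, 1, 0, 1, 0): 4, (1, 1, 0, 1, 0, 1, 1): 5,
    (1, 1, 0, 1, 1, 1, 1): 6, (1, 0, 1, 0, 0, 1, 0): 7, (1, 1, 1, 1, 1, 1, 1): 8,
    (1, 1, 1, 1, 0, 1, 1): 9,
}

def read_digit(roi):
    """roi: a thresholded crop containing exactly one digit. Returns 0-9 or None."""
    h, w = roi.shape
    pattern = []
    for (x0, y0, x1, y1) in SEGMENTS.values():
        seg = roi[int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)]
        # call a segment "on" if more than half of its pixels are lit
        pattern.append(1 if cv2.countNonZero(seg) > 0.5 * seg.size else 0)
    return DIGITS.get(tuple(pattern))
```

Run that over each digit of each of the five displays and you have the game state without touching the machine’s electronics at all.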

The result is the impressive demonstration of both passion and skill you can see in the video below. We really like the custom shader effects. How could we ever play pinball without them?

Continue reading “The Most Immersive Pinball Machine: Project Supernova”

Raspberry Pi As Speed Camera

Wherever you stand on the topics of road safety and vehicle speed limits it’s probably fair to say that speed cameras are not a universally popular sight on our roads. If you want a heated argument in the pub, throw that one into the mix.

But what if you live on a suburban street used as a so-called “rat run” through-route, with drivers regularly flouting the speed limit by a significant margin? Suddenly the issue becomes one of personal safety, and all those arguments from the pub mean very little.

Sample car speed measurements

[Gregtinkers]’ brother-in-law posted a message on Facebook outlining just that problem, and sadly the local police department lacked the resources to enforce the limit. This set [Gregtinkers] on a path to document the scale of the problem and lend justification to police action, which led him to use OpenCV and the Raspberry Pi camera to make his own speed camera.

The theory of operation is straightforward: the software tracks moving objects along the road in the camera’s field of view, times their traversal, and calculates the resulting speed. The area of the image containing the road is defined by a bounding box, to stop spurious readings from birds or neighbours straying into view.
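As a back-of-the-envelope illustration (not [Gregtinkers]’ actual implementation), the core idea can be sketched in a few lines of Python and OpenCV; the bounding box and calibration distance below are placeholders you’d measure on site:

```python
# Sketch: detect motion inside a bounding box, time how long a car takes to cross
# it, and convert that into miles per hour using the box's real-world width.
import time
import cv2

ROAD_WIDTH_FT = 60.0        # real-world width of the monitored patch of road
BOX = (100, 200, 900, 320)  # x0, y0, x1, y1 of that patch in the image, in pixels

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=False)
entry_time = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x0, y0, x1, y1 = BOX
    roi = frame[y0:y1, x0:x1]                     # ignore birds and neighbours outside the box
    mask = subtractor.apply(cv2.GaussianBlur(roi, (21, 21), 0))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cars = [c for c in contours if cv2.contourArea(c) > 3000]

    if cars and entry_time is None:
        entry_time = time.time()                  # a car just entered the box
    elif not cars and entry_time is not None:
        elapsed = time.time() - entry_time        # the car has left the box again
        # crude estimate: assume it crossed the full width of the box
        mph = (ROAD_WIDTH_FT / elapsed) * 3600 / 5280
        print(f"estimated speed: {mph:.1f} mph")
        entry_time = None
```

The real software tracks the object’s position frame by frame rather than just timing entry and exit, which gives a much better estimate, but the pixels-to-feet-to-mph arithmetic is the same.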

He provides installation and dependency instructions and a run-down of the software’s operation in his blog post, and the software itself is available on his GitHub account.

We’ve had a lot of OpenCV-based projects here on Hackaday, but we haven’t featured a speed camera before. We have, however, had a couple of dubious countermeasures, like that humorous attempt at an SQL injection attack, or a flash-based one.