Using OpenCV with the Raspberry Pi

When we first heard of the Raspberry Pi we were elated that projects that once required a full-blown computer could now be done on a tiny, cheap board running Linux. Unfortunately, we haven’t seen much in the way of computer vision algorithms running on the Raspi, but thanks to [Lentin] the world of OpenCV is now accessible to Raspberry Pi users everywhere.

[Lentin] didn’t feel like installing OpenCV from source, a process that takes the better part of a day. Instead, he installed it using the Synaptic package manager. After connecting a webcam, [Lentin] ssh’d into his Raspi and ran a face detection example script that comes with OpenCV.
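
For anyone wanting to follow along, the whole thing fits in a few lines. Below is a minimal sketch of the same Haar-cascade approach the bundled sample uses; the package name and cascade path are typical for a Raspbian install of that era, but may differ on your system:

```python
# Minimal face-detection loop in the spirit of OpenCV's bundled samples.
# Assumes the Python bindings came from the package manager, e.g. on
# Raspbian:  sudo apt-get install python-opencv
# The cascade path below is typical for a packaged install (an assumption).
import cv2

CASCADE = "/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml"

detector = cv2.CascadeClassifier(CASCADE)
cap = cv2.VideoCapture(0)                     # first attached webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```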

It should be noted that [Lentin]’s install of OpenCV isn’t exactly fast, but for a lot of projects, being able to update a face tracker five times a second is more than enough. Once the Raspberry Pi camera module is released, the speed of face detection on a Raspi should increase dramatically, leading to even more useful computer vision builds with the Raspberry Pi.

Quantifying Cloudiness with OpenCV

The Shard is the tallest building in Western Europe, and has a great view of London.  The condos in the building are very expensive, and a tourist ride to the top of the building costs £24.95.

Since the value of the view is so high, [Willem] wanted to quantify the quality of the view at any given time. His solution is the Shard Rain Cam, which combines a Logitech webcam with a Raspberry Pi to capture a time-lapse set of images. These images are fed to a Python script that uses OpenCV to quantify the cloudiness.
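
[Willem]’s published script is the authoritative version, but the core idea is easy to sketch: clouds read as bright, unsaturated pixels, so a crude cloudiness score is the fraction of the frame that is light grey or white. The thresholds below are illustrative guesses, not [Willem]’s values:

```python
# Rough cloudiness score: count pixels that are bright but unsaturated.
# The HSV thresholds are made-up starting points, not [Willem]'s values.
import cv2

def cloudiness(path):
    frame = cv2.imread(path)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # low saturation + high value = white/grey cloud cover
    mask = cv2.inRange(hsv, (0, 0, 160), (180, 60, 255))
    return cv2.countNonZero(mask) / float(mask.size)

print("cloud fraction: %.2f" % cloudiness("shard.jpg"))
```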

[Willem] also had to build a weatherproof enclosure with a transparent window for the camera and RPi. ‘Clingfilm’ (British for saran wrap) and mineral oil are used to improve the waterproofing of an IP54-rated enclosure.

The resulting data is displayed on www.whatcaniseefromtheshard.com, which provides an indication of whether or not the view is worth £24.95. All of the Python code is available, and is a good starting point for learning about image processing with OpenCV.

Finding 1s and 0s with a microscope and computer vision

One day, [Adam] was asked if he would like to take part in a little project. A mad scientist-cum-engineer at [Adam]’s job had just removed the plastic casing from an IC, and wanted a little help decoding the information on a mask ROM. These ROMs are basically just data etched directly into silicon, so the only way to actually read the data is with some nitric acid and a microscope. [Adam] was more than up for the challenge, but not wanting to count out thousands of 1s and 0s etched into a chip, he figured out a way to let a computer do it with some clever programming and computer vision.

[Adam] has used OpenCV before, but the macro image of the mask ROM had a lot of extraneous information: there were gaps in the columns of bits, and letting a computer do all the work would have resulted in crap data. His solution was to semi-automate the process of counting 1s and 0s by selecting a grid by hand and letting image processing software do the rest of the work.

This work resulted in rompar, a tool to decode the data on de-packaged ROMs. It works very well – [Adam] was able to successfully decode the ROM and netted the machine code for the object of his reverse engineering.
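
Rompar does far more (grid editing, polarity checks, export formats), but the grid-sampling step at its heart is simple enough to sketch. In this toy version, a human supplies the pixel coordinates of the bit rows and columns and the computer does the counting:

```python
# Toy version of the bit-reading step: sample the die photo at hand-picked
# grid coordinates and threshold each cell. The threshold and patch size
# are illustrative; real dumps also need polarity and layout checks.
import cv2

def read_bits(image_path, xs, ys, threshold=128):
    """xs, ys: hand-picked pixel coordinates of the bit columns and rows."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    bits = []
    for y in ys:
        row = []
        for x in xs:
            # average a small patch so a single noisy pixel can't flip a bit
            patch = gray[y - 2:y + 3, x - 2:x + 3]
            row.append(1 if patch.mean() > threshold else 0)
        bits.append(row)
    return bits
```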

Giving the Hexbug Spider a set of eyes

The Hexbug Spider is a neat little robot toy available at just about any Target or Walmart for about $20. With a few extra parts, though, it can become a vastly more powerful robotics platform, as [eric] shows us with his experiments with a Hexbug and OpenCV.

Previously, we’ve seen [eric] turn a Hexbug spider into a line following robot with a pair of IR LEDs and a drop-in replacement motor driver. This time, instead of a few LEDs, [eric] turned to an Android smartphone running an OpenCV-based app.

The smartphone app detects a user-selectable hue – in this case a little Android toy robot – and sends commands to the MSP430-powered motor control board over the headphone jack to move the legs. It’s a neat build, and surprisingly nimble for a $20 plastic hexapod robot.
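
Hue keying like this boils down to a range threshold in HSV space plus a blob centroid. [eric]’s app runs on Android, but the same idea sketched in desktop Python looks something like this (the hue window and steering rule are illustrative, not lifted from his code):

```python
# Track a colored target and print a crude steering command.
# The HSV range here keys on green; [eric]'s app lets the user pick the hue.
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (50, 80, 80), (70, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx = m["m10"] / m["m00"]      # centroid x of the matched blob
        width = frame.shape[1]
        # steer toward the target: left, forward, or right
        command = "L" if cx < width / 3 else ("R" if cx > 2 * width / 3 else "F")
        print(command)    # the real app encodes commands on the headphone jack
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```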

You can see the OpenCV-controlled Hexbug in action after the break, along with a video build log with [eric] showing everyone how to tear apart one of these robot toys.

Reading piano rolls without a player piano

A while back, [Jacob] played around with a player piano. After feeding a roll into the machine and trying to figure out how a fifty-year-old machine using hundred-year-old technology could replicate a skilled pianist, he decided to take a crack at decoding piano rolls for himself. He came up with a clever way of doing it over Christmas break, using a camera and a few bits of OpenCV.

The old-school mechanics of a player piano use a bellows and valve system to suck air through dozens of holes, making the action hit a string whenever a hole is present in the piano roll. To bring this mechanism into the modern age, [Jacob] pointed a video camera at the active part of the piano roll and used OpenCV to translate holes in a piece of paper to a MIDI file.
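
The trick is that a hole reads bright where paper reads dark, so a single scanline across the roll tells you which notes are sounding at any moment. A toy sketch of that step follows; the track count, geometry, and note mapping are assumptions, and [Jacob]’s actual pipeline also handles timing and MIDI output:

```python
# Read one scanline across the roll and report which holes are open.
# TRACKS and SCANLINE_Y are assumed values for illustration.
import cv2

TRACKS = 88          # hole positions across the roll (an assumption)
SCANLINE_Y = 240     # image row where the holes are read

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    line = gray[SCANLINE_Y]                    # a single row of pixels
    step = len(line) / float(TRACKS)
    sounding = []
    for t in range(TRACKS):
        # a bright pixel means light is passing through a hole
        if line[int(t * step + step / 2)] > 200:
            sounding.append(21 + t)            # MIDI note 21 is A0
    print("sounding notes:", sounding)
```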

The synthesized version sounds just as good as the original paper-roll version, as seen in the video after the break. There are a few sync issues in the video, and the resulting MIDI file isn’t in the right key, but that’s easily fixed by anyone willing to replicate this project.

Pixar-style lamp project is a huge animatronics win

Even with the added hardware, that lamp still looks relatively normal. But its behavior is more than remarkable: the lamp interacts with people in an incredibly lifelike way. This is of course inspired by the lamp from Pixar’s Luxo Jr. short film, but there’s a little bit of the most useless machine added just for fun. If you try to shut it off, the lamp uses its shade to flip the switch on the base back on.

[Shanshan Zhou], [Adam Ben-Dror], and [Joss Doggett] developed the little robot as a class project at the Victoria University of Wellington. It uses six servo motors driven by an Arduino to give the inanimate object the ability to move as if it were alive. There is no light in the lamp, as the bulb has been replaced by a webcam. The video feed is monitored with OpenCV, which provides face tracking as one of the lamp’s behaviors. All of the animations are procedural, making use of Processing to convey movement instructions to the Arduino board.
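
The real pipeline runs through Processing, but its shape (track a face, ease toward it, stream servo targets to the Arduino) is easy to sketch in Python. Here pyserial and the one-byte-per-frame protocol are assumptions made for illustration:

```python
# Face tracking with eased, procedural motion streamed to a servo board.
# pyserial, the serial port name, and the one-byte protocol are assumptions.
import cv2
import serial

CASCADE = "/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(CASCADE)
port = serial.Serial("/dev/ttyUSB0", 9600)    # hypothetical Arduino port

cap = cv2.VideoCapture(0)
angle = 90.0                                  # current pan angle in degrees
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.2, 5)
    if len(faces):
        x, y, w, h = faces[0]
        # offset of the face centre from the frame centre, -0.5 .. 0.5
        error = (x + w / 2.0) / frame.shape[1] - 0.5
        target = 90 + error * 60              # map onto a 60-120 degree sweep
        angle += (target - angle) * 0.1       # ease toward it rather than snap
        port.write(bytes([int(angle)]))       # one byte per frame: the angle
```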

Don’t miss the video embedded after the break.

Face tracking with an Android device

This Android device can recognize faces and move to keep them in frame. It’s a proof of concept that uses commonly available parts and software packages.

The original motivation for the project was [Dan O’s] inclination to give the OpenCV software a try. OpenCV is an Open Source Computer Vision package that takes on the brunt of the job when it comes to discerning meaning from images. To give the phone the power to move, he designed and printed his own mounting brackets for the phone and a couple of hobby servos. An IOIO board connects to the Android device in order to control the motors. On the software side, all [Dan] needed to do was write some code interfacing the output of OpenCV’s face tracking modules with the input of the IOIO. See the finished project demonstration after the jump.
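
That interface code is mostly arithmetic: turn the face’s offset from the frame center into small pan and tilt corrections. A sketch of the idea (the real app is Java on Android, and the gains and servo limits here are guesses):

```python
# Glue between a face detector and two servos: proportional pan/tilt steps.
# The gain and the 0-180 degree limits are illustrative assumptions.
def track_step(face, frame_w, frame_h, pan, tilt, gain=20.0):
    """face: (x, y, w, h) from the detector; pan/tilt: current angles."""
    x, y, w, h = face
    # normalized offset of the face centre from the frame centre, -0.5 .. 0.5
    dx = (x + w / 2.0) / frame_w - 0.5
    dy = (y + h / 2.0) / frame_h - 0.5
    # nudge each servo proportionally, clamped to its travel
    pan = max(0.0, min(180.0, pan - dx * gain))
    tilt = max(0.0, min(180.0, tilt + dy * gain))
    return pan, tilt
```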

This system can easily be implemented with other hardware, like this Arduino-based version we looked at earlier in the year.
