OpenCV Spreads Smart Camera Joy To See Ideas Come To Life

Do you have a great application for computer vision, but can’t spare the cost of the hardware needed to build it? Or perhaps you just need a deadline to pull you away from endless doom scrolling? Either way, the OpenCV team wants you to enter their OpenCV AI Competition 2021, and they’re willing to pitch in hardware to make it happen.

This competition is part of OpenCV’s 20th anniversary celebration, and the field of machine vision has changed a lot in those two decades. OpenCV started within Intel, harnessing the power of their high-end CPUs, but today the excitement is around specialized acceleration hardware for vision processing. Which is why OpenCV put their support behind, and lent their name to, the OpenCV AI Kit (OAK) Kickstarter we covered a few months ago. Since then, the hardware has been produced and is starting to arrive in project backers’ hands. (Barring pandemic-related shipping restrictions…)

This shiny new hardware is the competition’s focus. Phase one solicits team proposals for putting an OAK-D’s power to novel use. University teams may have up to ten members; general teams are limited to four. Each team’s geographic home will put them in one of six global regions. Proposals must be submitted by January 27th, 2021. By February 11th, judges will select the best twenty-five general and ten university team proposals from each region, and every member of those teams gets an OAK-D unit to turn their idea into reality by the phase two deadline of June 27th. That’s up to 1,200 OAK-D modules available to anyone who can convince the judges they have a great idea and are capable of bringing it to fruition. Is that you? Of course it is!

Teams will also receive additional resources, such as an allotment of cloud compute credits to train their models, and naturally all the tutorials and sample code released as part of the OAK Kickstarter. No explicit resource for project team organization is mentioned, but of course our own Hackaday.io is available to support you. Best of luck to everyone who enters, and we look forward to seeing all the projects this contest will bring to life.

Webcam Heart Rate Monitor Brings Photoplethysmography To Your PC

It seems like within the last ten years, every other gadget released has had some sort of heart rate monitoring capability. Most modern smartwatches can report your BPM, and we’ve even seen some headphones with the same ability hitting the market. Most of these devices use an optical measurement method in which skin is illuminated (usually by an LED) and a sensor records changes in skin color and light absorption. This method is called photoplethysmography (PPG), and it has even been implemented (in a simple form) in smartphone apps, where the data is generated by video of your finger covering the phone camera.

The basic theory of operation here has its roots in an experiment you probably undertook as a child. Did you ever hold a flashlight up to your hand to see the light, filtered red by your blood, shine through? That’s exactly what’s happening here. One key detail that is hard to perceive when a flashlight is illuminating your entire hand, however, is that deoxygenated blood is darker in color than oxygenated blood. By observing the frequency of the light-dark color change, we can back out the heart rate.

This is exactly how [Andy Kong] approached two methods of measuring heart rate from a webcam.

Method 1: The Cover-Up

The first detection scheme [Andy] tried is what he refers to as the “phone flashlight trick”. Essentially, you cover the webcam lens entirely with your finger. Ambient light shines through your skin and produces a video stream that looks like a dark red rectangle. Though it may be imperceptible to us, the color changes ever-so-slightly as your heart beats. An FFT of the raw data gives us a heart rate that’s surprisingly accurate. [Andy] even has a live demo up that you can try for yourself (just remember to clean the smudges off your webcam afterwards).
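If you want to play with the idea yourself, the core of the trick fits in a page of Python and OpenCV. This is a minimal sketch of the concept, not [Andy]’s actual code (his live demo runs in the browser); the camera index, frame rate, and frequency band are all assumptions to adjust:

```python
import cv2
import numpy as np

FPS = 30      # assumed camera frame rate
SECONDS = 10  # length of the capture window

cap = cv2.VideoCapture(0)
samples = []
for _ in range(FPS * SECONDS):
    ok, frame = cap.read()
    if not ok:
        break
    # With a finger over the lens, the mean red intensity of the
    # whole frame pulses slightly with every heartbeat
    samples.append(frame[:, :, 2].mean())  # OpenCV frames are BGR
cap.release()

signal = np.asarray(samples) - np.mean(samples)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / FPS)  # in Hz
spectrum = np.abs(np.fft.rfft(signal))

# Only consider plausible heart rates: 40-180 BPM is 0.67-3.0 Hz
band = (freqs > 0.67) & (freqs < 3.0)
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated heart rate: {peak_hz * 60:.0f} BPM")
```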

Method 2: Remote Sensing

Now things are getting a bit more advanced. What if you don’t want to clean your webcam after every heart rate measurement? Well, thankfully there’s a remote sensing option as well.

For this method, [Andy] is actually using OpenCV to measure the cyclical swelling and shrinking of blood vessels in your skin by measuring the color change in your face. It’s absolutely mind-blowing that this works, considering the resolution of a standard webcam. He found the most success by focusing on fleshy patches of skin right below the eyes, though he says others recommend taking a look at the forehead.
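As a hedged sketch of how such a pipeline might look, the snippet below uses OpenCV’s bundled Haar cascade for face detection; the ROI fractions are our guesses at “fleshy patches of skin right below the eyes”, not values from [Andy]’s implementation. Once the color series is captured, the same detrend-and-FFT step from the first method recovers the heart rate.

```python
import cv2

# OpenCV ships Haar cascade files alongside the Python package
face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
series = []
while len(series) < 300:  # roughly 10 seconds at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_det.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        continue
    x, y, w, h = faces[0]
    # A patch roughly below the eyes: mid-face width, upper-cheek height
    roi = frame[y + h // 2 : y + 3 * h // 4, x + w // 4 : x + 3 * w // 4]
    # The green channel tends to carry the strongest PPG signal
    series.append(roi[:, :, 1].mean())
cap.release()
# From here, detrend and FFT `series` exactly as in the first method
```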

Every now and then we see something that works even though it really seems like it shouldn’t. How is a webcam sensitive enough to measure these minute changes in facial color? Why isn’t the signal uselessly noisy? This project is in good company with other neat heart rate measurement tricks we’ve seen. It’s amazing that this works at all, and even more incredible that it works so well.

Computer Vision Maps Christmas Lights

There’s a small but dedicated group of folks out there who spend all year planning their Christmas decorations. These aren’t simple lawn ornaments or displays, either, but have evolved into complex lighting performances that require quite a bit of computer control. For some things, hooking up a relay to a microcontroller can get the job done, but [Andy] has turned to computer vision to solve some of the more time-consuming aspects of these displays.

Specifically, [Andy] has a long string of programmable RGB LED lights to wrap around a Christmas tree, but didn’t want to spend time manually mapping out each light’s location. So he used OpenCV to register the locations of the LEDs from three different camera angles, and then used a Python script to calculate their positions in 3D space. This means that he can easily take the LEDs down at the end of the holidays and string them back up next year without ever having to do the tedious manual mapping again.
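The per-LED localization step is the heart of the trick: light one LED at a time and find the brightest spot in the frame. Here’s a rough sketch of how that might look (`set_pixel` is a hypothetical helper for driving the LED string, and the blur kernel and brightness threshold are assumptions); repeating this from each camera angle yields 2D coordinates that can be triangulated into 3D positions.

```python
import cv2

NUM_LEDS = 200  # assumed length of the string

def locate_leds(cap, set_pixel):
    """Find each LED's 2D position from one camera angle."""
    positions = []
    for i in range(NUM_LEDS):
        set_pixel(i, on=True)   # light only LED i
        ok, frame = cap.read()
        set_pixel(i, on=False)
        if not ok:
            positions.append(None)
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Blur so the global maximum is the LED's glow, not sensor noise
        gray = cv2.GaussianBlur(gray, (41, 41), 0)
        _, max_val, _, max_loc = cv2.minMaxLoc(gray)
        # Reject frames where the LED was hidden behind the tree
        positions.append(max_loc if max_val > 100 else None)
    return positions
```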

[Andy] notes that he may have spent more time writing the software to map out the LEDs than manually mapping them would have taken, but year after year it should save him a lot of time and effort, not to mention the benefits of a challenge like writing this software in the first place. If you want to get started on your own display this year, all you really need is some lights and a MIDI controller.

Making Music With A Go Board Step Sequencer

Ever wonder what your favorite board game sounds like? Neither did we. Thankfully [Sara Adkins] did, and created a step sequencer called Let’s Go that uses the classic board game Go as input.

In the game Go, two players place black and white tokens on a grid, vying for control of the board. As the game progresses, the configuration of game pieces gets more complex and coincidentally begins to resemble Conway’s Game of Life (or a weird QR code). [Sara] saw music in the evolving arrangement of circles and transformed the ancient board game into a modern instrument so others could hear it too.

To an observer, [Sara’s] adaptation looks nearly indistinguishable from the version played in China 2,500 years ago — with the exception of an overhead webcam and nearby laptop, of course. The laptop uses OpenCV to digitize the board layout. It feeds that information via Open Sound Control (OSC) into the popular music creation software Max MSP (though an open-source version could probably be implemented in Pure Data), where it’s used to control a step sequencer. Each row on the board represents an instrumental voice (melodic for white pieces, percussive for black ones), and each column corresponds to a beat.
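As a rough illustration of the plumbing involved, here’s a minimal sketch of reading stone positions by sampling the grid intersections and forwarding each row over OSC with the python-osc library. The board size, intensity thresholds, and `/row` address scheme are all assumptions on our part; [Sara]’s real pipeline is in her GitHub repo.

```python
import cv2
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

SIZE = 9  # assumed board size
client = SimpleUDPClient("127.0.0.1", 8000)  # wherever Max MSP listens

def read_board(frame, corners):
    """Classify each intersection as empty (0), black (1), or white (2)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    (x0, y0), (x1, y1) = corners  # top-left / bottom-right of the grid
    board = np.zeros((SIZE, SIZE), dtype=int)
    for r in range(SIZE):
        for c in range(SIZE):
            x = int(x0 + c * (x1 - x0) / (SIZE - 1))
            y = int(y0 + r * (y1 - y0) / (SIZE - 1))
            patch = gray[y - 5 : y + 5, x - 5 : x + 5].mean()
            if patch < 70:       # dark stone
                board[r, c] = 1
            elif patch > 180:    # light stone
                board[r, c] = 2
    return board

def send_board(board):
    # One OSC message per row; each column becomes a sequencer step
    for r, row in enumerate(board):
        client.send_message(f"/row/{r}", row.tolist())
```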

Every new game is a new piece of music that starts out simple and gradually increases in complexity. The music evolves with the board, and adds a new dimension for players to interact with the game. If you want to try it out yourself, [Sara] has the project fully documented on her website, and all of the code is available on GitHub. Now we’re just left wondering what other games sound like — [tinkartank] already answered that question for chess, but what about Settlers of Catan?


OCR Reads Old Newspapers So We Don’t Have To

Plenty of people don’t bother to read the current newspaper, let alone editions that were published over 100 years ago. But there’s a wealth of important historical information buried in these dusty old publications, assuming you can find a way to reliably digitize and index it all. You might think the solution is as simple as running images of the paper through optical character recognition (OCR) software, but as [John Scancella] explains, the problem is a bit more complicated than that.

[Image: stretching the text vertically highlights the columns.]

Ultimately, the issue largely comes down to formatting. The OCR software reasonably assumes all the text is in orderly horizontal lines, because in the vast majority of cases, it would be. That’s how you’re reading these words now. But as anyone who’s seen an old-time newspaper knows, that’s not necessarily how things were written back then. Pages consisted of multiple narrow columns of stories separated by vertical lines; if the OCR tries to read the page from left to right, the resulting text is a mishmash of several unrelated topics.

The answer is to break all those articles into their own images, but doing that manually at any sort of scale simply isn’t an option. So [John] has been working on a system that uses OpenCV to identify the columns of text and isolate them. He details the multi-step process in his write-up, and even provides the Python code should you want to give it a spin. But the short version is that the image is converted to grayscale and the OpenCV dilate function is used to stretch the text in the Y dimension. This produces big blobs of white that can easily be picked out with findContours() and snipped into individual images.
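Condensed into a few lines, the approach [John] describes looks something like this; his code is the canonical reference, and the kernel size and area cutoff here are placeholders to tune per scan:

```python
import cv2

img = cv2.imread("newspaper_page.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Invert so the text is white on black for the morphology step
_, thresh = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

# A tall, narrow kernel stretches text in Y, fusing lines into column blobs
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 75))
dilated = cv2.dilate(thresh, kernel, iterations=1)

contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for i, cnt in enumerate(contours):
    x, y, w, h = cv2.boundingRect(cnt)
    if w * h > 10_000:  # skip specks; the cutoff depends on scan DPI
        cv2.imwrite(f"column_{i}.png", img[y : y + h, x : x + w])
```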

It’s not a perfect solution, and there are still a few pitfalls. For one, the name of the paper needs to be removed from the front page before the stretching operation happens. But it’s clearly a step in the right direction, and the results certainly look very promising. Anything that makes OCR more accurate or easier to implement is a win in our book, so we’re excited to see where [John] takes this concept.

OAK Vision Modules Help You See The Forest And The Trees

OpenCV is an open source library of computer vision algorithms whose power and flexibility have made many machine vision projects possible. But even with code highly optimized for maximum performance, we always wish for more. Which is why our ears perk up whenever we hear about a hardware accelerated vision module, and the latest buzz is coming out of the OpenCV AI Kit (OAK) Kickstarter campaign.

There are two vision modules launched with this campaign: the OAK-1, with a single color camera for two-dimensional vision applications, and the OAK-D, which adds stereo cameras for that third dimension. The onboard brain is a Movidius Myriad X processor which, according to team members who have dug through its datasheet, has been massively underutilized in other products. They believe OAK modules will help the chip fulfill its potential for vision applications, delivering high performance while consuming low power in a small form factor. Reading over the spec sheet, we think it’s fair to call these “Ultimate Myriad X Dev Boards”, but we must concede “OpenCV AI Kit” sounds better. It does not provide hardware acceleration for the entire OpenCV library (likely an impossible task), but it does cover the highly demanding subset suitable for Myriad X acceleration.

Since the campaign launched a few weeks ago, some additional information has been released to help assure backers that this project has real substance. It turns out OAK is an evolution of a project we covered almost exactly one year ago that became a real product, DepthAI, so at least this is not their first rodeo. It is also encouraging that their invitation to the open hardware community has already borne fruit. Check out this thread discussing OAK for robot vision, where a question was met with an honest “we don’t have expertise there” from the OAK team, but then ArduCam pitched in with their camera module experience to help.

We wish them success for their planned December 2020 delivery. They have already far surpassed their funding goals, they’ve shipped hardware before, and we see a good start to a development community. We look forward to the OAK-1 and OAK-D joining the ranks of other hacking friendly vision modules like OpenMV, JeVois, StereoPi, and AIY Vision.

Raspberry Pi Shuffler Is Computerized Card Shark

If you’re playing Texas Hold’em or other card games with a small group, you may get tired of shuffling over and over again. [3dprintedLife] was in just such a position, and realized there were no good automatic card shufflers in his budget. Instead, he elected to build one, and put in some extra functionality to corrupt the game to his whims.

The mechanicals of the machine took a lot of development, as accurately handling and dispensing cards is a challenge, particularly with the loose tolerances of 3D printed parts. After developing a reliable transport mechanism, the machine was more than capable of shuffling a deck well with some basic commands.

However, the real magic comes from installing a camera and Raspberry Pi running OpenCV. This is capable of reading the value and suit of each card, and then stacking the deck in a particular order to suit the dealer’s wishes. It’s all controlled through a web interface and is capable of creating guaranteed wins in Blackjack and Texas Hold’em. Files are on Github for those eager to delve deeper into how the machine works.
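While the GitHub repository is the authoritative source, corner-based template matching is a common way to pull off this kind of card recognition, and a sketch of it might look like the following. Everything here (the corner geometry, the template set, the match threshold) is an illustrative assumption rather than [3dprintedLife]’s exact method:

```python
import cv2

def identify_card(frame, templates):
    """Match a card against a dict of corner templates, e.g. {"AS": img}."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The rank and suit index live in the card's top-left corner
    corner = cv2.resize(gray[0:120, 0:70], (70, 120))
    best_name, best_score = None, -1.0
    for name, tmpl in templates.items():
        res = cv2.matchTemplate(corner, tmpl, cv2.TM_CCOEFF_NORMED)
        if res.max() > best_score:
            best_name, best_score = name, res.max()
    # Refuse a guess when nothing matches convincingly
    return best_name if best_score > 0.7 else None
```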

The mechanism does such a beautiful job of shuffling that your friends may not even notice the ruse. It goes to show that you should always have your wits about you when gambling with the aid of machines. Of course, if you wish only to create havoc, this Lego card machine gun may be more your speed. Video after the break.

[via Reddit]
