Reading Bingo Balls With Microcontrollers

Every once in a while a project comes along with that magical power to consume your time and attention for many months. When you finally complete it, you almost feel sorry there’s nothing left to do.

What is so special about this Bingo ball reader? At first glance it may seem like an ordinary OCR project: a camera captures an image and OCR software recognizes the number. Simple as that. And it works without problems, like every simple gadget should.

But then again, maybe it’s not that simple. Numbers are scattered all over the ball, so they have to be located first and the best candidate for reading selected. The numbers are also painted onto a sphere rather than a flat surface, sometimes deforming them to the point where their shape has to be recovered before they can be read. The reading angle isn’t fixed either, but can fall anywhere in a full 360° range. And then there’s the glare problem to boot: Bingo balls are so shiny that every light source reflects as a saturated bright spot.
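To make the glare problem concrete: one common approach is to mark every near-saturated pixel as glare, keep it out of the digit segmentation, and only then look for candidate blobs. The sketch below uses OpenCV and NumPy on a PC purely for illustration; the threshold values, and the desktop libraries themselves, are assumptions rather than anything from the article's microcontroller implementation.

import cv2
import numpy as np

# Illustrative glare handling: anything near saturation is masked out
# before the digits are segmented. Threshold values are assumptions.
img = cv2.imread("ball.png", cv2.IMREAD_GRAYSCALE)

glare = (img > 240).astype(np.uint8)
# Grow the mask slightly so the bright halo around each hot spot goes too.
glare = cv2.dilate(glare, np.ones((5, 5), np.uint8)) > 0

# Pull the dark printed digits off the white ball with an adaptive threshold.
digits = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 31, 10)
digits[glare] = 0  # ignore anything inside a glare region

# Connected components give candidate digit blobs to choose between.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(digits)
candidates = [i for i in range(1, n) if 50 < stats[i, cv2.CC_STAT_AREA] < 5000]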

So, is that all of it? Well, almost. The task has to be performed by an embedded microcontroller with limited speed and memory, yet the recognition process for one ball has to be fast: 500 ms at worst. But that’s just one part of the process. The project also includes a pipelined mechanism which accepts the ball, transports it to be scanned by the OCR and then shot by the public broadcast camera before it gets dumped. And finally, if a reading isn’t reliable enough, the ball has to be subtly rotated so that the numbers are repositioned for another attempt.

Despite these challenges I did manage to build this system. It’s fast and reliable, and I discovered some very interesting tricks along the way. Take a look at the quick demo video below to get a feel for the speed, and what the system “sees”. Then join me after the break to dive into the details of this interesting embedded build.

Continue reading “Reading Bingo Balls With Microcontrollers”

Objectifier: Director Of Domestic Technology

[Bjørn Karmann]’s Objectifier is a device that lets you control domestic objects by having them respond to unique actions or behaviour, using machine learning and computer vision. The Objectifier can turn on a table lamp when you open a book, and turn it off when you close the book. Switch on the coffee maker when you place the mug next to the pot, and switch it off when the mug is removed. Turn on the belt sander when you put on the safety glasses, and stop it when you remove the glasses. Charge the phone when you put a banana in front of it, and stop charging it when you place an apple in front of it. You get the drift: the possibilities are endless. Hopefully, sometime in the (near) future, we will be able to interact with inanimate objects in this fashion, getting them to learn from our actions rather than learning how to program them ourselves.

The device uses computer vision and a neural network to learn the behaviours associated with your trigger commands. A training mode, driven from a phone app, lets you teach it the On and Off actions. Some actions, such as telling an open book from a closed one, take more human effort to train, but eventually the neural network does a fairly good job.
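[Bjørn]’s training pipeline isn’t spelled out, but the basic train-then-trigger idea is simple enough to sketch: store a few labelled example frames for each state, then classify live frames against them. The toy nearest-neighbour classifier below stands in for his neural network; every detail of it is an assumption made for illustration.

import cv2
import numpy as np

def features(frame):
    # Downsample a grayscale frame to an 8x8 patch and flatten it.
    small = cv2.resize(frame, (8, 8), interpolation=cv2.INTER_AREA)
    return small.astype(np.float32).flatten() / 255.0

examples = []  # (feature_vector, label) pairs, label is "on" or "off"

def train(frame, label):
    examples.append((features(frame), label))

def predict(frame):
    # 1-nearest-neighbour: the closest stored example wins.
    f = features(frame)
    best = min(examples, key=lambda e: np.linalg.norm(e[0] - f))
    return best[1]

The relay would then simply follow whatever predict() returns for each new camera frame.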

The current version is the sixth prototype in the series, and [Bjørn] has put in quite a lot of work refining the project at each stage. In its latest avatar, the hardware consists of a Pi Zero, a Raspberry Pi camera module, an SMPS power brick, a relay block to switch the output, a 230 V plug for input power and a 230 V socket for the switched output. All the parts are put together rather neatly with laser-cut acrylic supports and then housed in a nice wooden enclosure.

On the software side, the machine learning is handled by “Wekinator”, a free, open-source tool for building musical instruments, gestural game controllers, and computer-vision or computer-listening systems using machine learning. The computer vision itself runs in Processing. All the code is wrapped up with openFrameworks, with ml4a providing apps for working with machine learning.

All of the above is what we could deduce from the pictures and information in his blog post. There isn’t much detail about the hardware, but the pictures tell us most of what we need to know. The software isn’t made available, but maybe this could spur some of you hackers into action to build another version of the Objectifier. Check out the video after the break, showing humans teaching the Objectifier its tricks.
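For anyone taking up that challenge, the Wekinator side is mostly OSC plumbing: ship a feature vector per frame to Wekinator and listen for its classification on the way back. Here’s a rough Python sketch of that exchange; the ports and addresses are Wekinator’s defaults as far as I know, and the rest is assumption (the Objectifier itself does this from Processing and openFrameworks, not Python).

from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Wekinator defaults (assumed): inputs on port 6448 at /wek/inputs,
# outputs sent back to port 12000 at /wek/outputs.
client = SimpleUDPClient("127.0.0.1", 6448)

def send_features(features):
    # Ship one frame's feature vector (a list of floats) to Wekinator.
    client.send_message("/wek/inputs", [float(f) for f in features])

def on_output(address, *values):
    # Called whenever Wekinator reports a classification, e.g. 1.0 for "on".
    state = "on" if values and values[0] > 0.5 else "off"
    print(address, values, "->", state)

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_output)
server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
# In a real loop the camera code would call send_features(); here we just listen.
server.serve_forever()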

Continue reading “Objectifier: Director Of Domestic Technology”

Arduino Video Isn’t Quite 4K

Video resolution is always on the rise. The days of 640×480 video have given way to 720, 1080, and even 4K resolutions. There’s no end in sight. However, you need a lot of horsepower to process that many pixels. What if you have a small robot powered by a microcontroller (perhaps an Arduino) and you want it to have vision? You can’t realistically process HD video, or even low-grade video, with a small processor. CORTEX systems has an open-source solution: a seven-pixel camera with an I2C interface.

The files for SNAIL Vision include a bill of materials and the PCB layout. There’s software for the Vishay sensors used, and provision for gluing a lens holder to the PCB. The design is fairly simple: in addition to the sensor array, there’s an I2C multiplexer which also acts as a level shifter, plus a handful of resistors and connectors.
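The repository is the authoritative reference for addresses and registers, but host-side code for a sensor-array-behind-a-mux camera like this generally boils down to: select a mux channel, read one light value, repeat seven times, and treat the result as a frame. The sketch below is guesswork in that spirit; the mux and sensor addresses, the register, and the Raspberry Pi/smbus2 setting are all assumptions rather than details from the SNAIL Vision files.

from smbus2 import SMBus

MUX_ADDR = 0x70      # hypothetical PCA9548-style I2C multiplexer address
SENSOR_ADDR = 0x10   # hypothetical Vishay light-sensor address
LIGHT_REG = 0x04     # hypothetical 16-bit ambient-light register

def read_frame(bus):
    # Return the seven "pixels" as a list of raw light readings.
    frame = []
    for channel in range(7):
        bus.write_byte(MUX_ADDR, 1 << channel)             # enable one sensor
        frame.append(bus.read_word_data(SENSOR_ADDR, LIGHT_REG))
    return frame

with SMBus(1) as bus:
    print(read_frame(bus))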

Continue reading “Arduino Video Isn’t Quite 4K”

Ping Pong Ball-Juggling Robot

There aren’t too many sports named for the sound produced during the game. Even though serious practitioners properly call it “table tennis”, ping pong is probably the most obvious. Fittingly, [Nekojiru] first built a ping pong ball juggling robot that used those very acoustics to pinpoint the ball’s location relative to the robot. Not satisfied with his efforts there, he moved on to a visual approach and built a new juggling rig that uses computer vision instead of sound to keep a ping pong ball aloft.

The main controller is a Raspberry Pi 2 with a Pi camera module attached. After some mishaps with the planned IR vision system, [Nekojiru] decided to illuminate the ball with green light. He notes that OpenCV probably wouldn’t have worked for him because it isn’t fast enough for the 90 fps needed to keep the ping pong ball bouncing. An algorithm looks at the incoming data from this system, extracts 3D information about the ball, and directs the paddle to strike it in a particular way.
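[Nekojiru] deliberately avoided OpenCV for speed, so the following is emphatically not his pipeline; it’s just a slow-but-readable sketch of the underlying idea. Threshold on the green illumination, take the centroid of the biggest blob for x/y, and use the apparent radius as a crude depth cue via the pinhole model. The HSV range and focal length are assumptions.

import cv2
import numpy as np

BALL_DIAMETER_MM = 40.0   # a standard ping pong ball
FOCAL_LENGTH_PX = 600.0   # assumed focal length of the camera, in pixels

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Assumed HSV range for the green-lit ball.
    mask = cv2.inRange(hsv, (40, 80, 80), (85, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        ball = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(ball)
        if radius > 5:
            # Pinhole model: distance scales inversely with apparent radius.
            depth_mm = FOCAL_LENGTH_PX * BALL_DIAMETER_MM / (2 * radius)
            print(f"x={x:.0f} y={y:.0f} depth~{depth_mm:.0f} mm")
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()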

If you’ve ever wanted to get into real-time object tracking, this is a great project to look over. The control system is well polished and the robot itself looks almost professionally made. Maybe it’s possible to build something similar to test [Nekojiru]’s hypothesis that OpenCV isn’t fast enough for this. If you want to get started in that realm of object tracking, there are some great projects that make use of that piece of software as well.

CES2017: Astrophotography In The Eyepiece

If you’ve never set up a telescope in your backyard, you’ve never been truly disappointed. The Hubble can take some great shots of Saturn, nebulae, and other astronomical phenomena, but even an expensive backyard scope produces only smudges. To do astrophotography properly, you’ll spend your time huddled over a camera and a computer, stacking images to produce something that almost lives up to your expectations.

At CES, Unistellar introduced a device designed to fit over the eyepiece of a telescope to do all of this for you.

According to the guys at Unistellar, this box contains a small Linux computer, camera, GPS, and an LCD. Once the telescope is set up, the module takes a few pictures of the telescope’s field of view, stacks the images, and overlays the result in the eyepiece. Think of this as ‘live’ astrophotography.
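Unistellar didn’t say how the box does its stacking, but “live stacking” generally means registering each incoming frame against a reference and accumulating an average, so faint detail builds up while noise cancels. A minimal sketch of that idea, using plain NumPy phase correlation for the registration (this is the general technique, not Unistellar’s firmware):

import numpy as np

def correction_shift(ref, img):
    # Integer (dy, dx) that, applied with np.roll, lines img back up with ref.
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:
        dy -= h   # wrap to a signed shift
    if dx > w // 2:
        dx -= w
    return dy, dx

def stack(frames):
    # frames: list of same-sized grayscale arrays (uint8 or float).
    acc = frames[0].astype(np.float64)
    ref = acc.copy()
    for frame in frames[1:]:
        f = frame.astype(np.float64)
        dy, dx = correction_shift(ref, f)
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)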

In addition to making Jupiter look less like a Great Red Smudge, the Unistellar module adds augmented reality; it knows where the telescope is pointing and will add a label if you’re looking at any astronomical objects of note.
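The labelling side comes down to coordinate conversion: take the mount’s altitude and azimuth, fold in the GPS position and the time to get equatorial sky coordinates, and look those up in a catalogue. A rough sketch with astropy, where every number is made up for illustration and the real device’s method is unknown:

import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time

# Made-up observer and pointing, for illustration only.
site = EarthLocation(lat=37.77 * u.deg, lon=-122.42 * u.deg, height=20 * u.m)
frame = AltAz(obstime=Time.now(), location=site)
pointing = SkyCoord(alt=35 * u.deg, az=120 * u.deg, frame=frame).icrs

# A two-entry stand-in catalogue; a real unit would carry far more objects.
catalogue = {
    "M42 (Orion Nebula)": SkyCoord(ra=83.82 * u.deg, dec=-5.39 * u.deg),
    "M31 (Andromeda)": SkyCoord(ra=10.68 * u.deg, dec=41.27 * u.deg),
}
name, target = min(catalogue.items(),
                   key=lambda kv: pointing.separation(kv[1]).deg)
if pointing.separation(target) < 1 * u.deg:
    print("Label:", name)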

While I wasn’t able to take a look inside this extremely cool device, the Unistellar guys said they’ll be launching a crowdfunding campaign in the near future.

The Story Of Kickstarting The OpenMV

Robots are the ‘it’ thing right now, computer vision is a hot topic, and microcontrollers have never been faster. These facts lead inexorably to the OpenMV, an embedded computer vision module that bills itself as the ‘Arduino of Machine Vision.’
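To give a flavour of that “Arduino of Machine Vision” pitch, here’s a sketch of what code on an OpenMV board looks like: a short MicroPython script that runs on the camera module itself. The colour thresholds below are placeholders, not values from the project.

import sensor
import time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)   # let the sensor settle
clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()
    # Track anything matching a LAB colour threshold (placeholder values).
    for blob in img.find_blobs([(30, 100, -64, -8, -32, 32)],
                               pixels_threshold=100, area_threshold=100):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
    print(clock.fps())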

The original OpenMV was an entry in the first Hackaday Prize, and since then the project has had a lot of success. There are tons of followers, plenty of users, and the project even had a successful Kickstarter. That last point deserves an asterisk: while the Kickstarter did meet its minimum funding level, there were a lot of problems bringing this very cool product to market. Issues with suppliers and community management were the biggest of them, but the team behind OpenMV eventually pulled it off.

At the 2016 Hackaday SuperConference, [Kwabena Agyeman], one of the project leads for the OpenMV, told the story about bringing the OpenMV to market:

Continue reading “The Story Of Kickstarting The OpenMV”

Simon Says Smile, Human!

The bad news is that when our robot overlords come to oppress us, they’ll be able to tell how well they’re doing just by reading our facial expressions. The good news? Silly computer-vision-enhanced party games!

[Ricardo] wrote up a quickie demonstration, mostly powered by OpenCV and Microsoft’s Emotion API, that scores your ability to mimic emoticon faces. So when you’re shown a devil-with-devilish-grin image, you’re supposed to make the same face convincingly enough to fool a neural network classifier. And hilarity ensues!
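The Emotion API half of that is a single REST round trip: post a camera frame, get back per-face emotion scores, and compare the score for the emotion you asked the player to pull against some passing grade. A rough sketch of the call (the endpoint region and key are placeholders, and this is not [Ricardo]’s actual code):

import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"
KEY = "your-subscription-key"   # placeholder

def score_face(jpeg_bytes, target_emotion="happiness"):
    # Post one frame; return the target emotion's score for the largest face.
    resp = requests.post(
        ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        data=jpeg_bytes,
    )
    resp.raise_for_status()
    faces = resp.json()   # one entry per detected face
    if not faces:
        return 0.0
    # Scores run 0..1 across anger, contempt, disgust, fear, happiness,
    # neutral, sadness and surprise.
    biggest = max(faces, key=lambda f: f["faceRectangle"]["width"])
    return biggest["scores"][target_emotion]

with open("frame.jpg", "rb") as fh:
    print(score_face(fh.read(), "surprise"))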

Continue reading “Simon Says Smile, Human!”