Sorting LEGO Is Like Making A Box Of Chocolates

Did you know that chocolate candy production and sorting LEGO bricks have something in common? They both use the same techniques for turning clumps of chocolates or bricks into individual ones moving down a conveyor belt. At least that’s what [Paco Garcia] found out when making his LEGO Sorter.

However, he didn’t find that out right away. He first experimented with his own techniques, learning that if he dropped a batch of bricks onto the conveyor belt in a line perpendicular to the belt’s direction of travel, then none of his subsequent attempts at separating them worked. He then turned to [akiyuky]’s LEGO sorter for inspiration and dropped them onto the belt at an angle, ensuring that some bricks would be in front of others. A further trick he found is very well demonstrated in the chocolate sorting video below and shown in the image here: using guides on the belt to create speed differentials. Bricks move slower than the conveyor belt while pressed against a guide, but when a brick leaves the guide, it accelerates to the speed of the belt, pulling away from the bricks still at the guide and thus separating from them.

A further discovery had nothing to do with chocolate production, unless maybe for quality control. Once an individual brick had been separated out, it had to be classified. To do that he used Google’s Inception v3 neural network. But first, he had to retrain it to recognize different types of LEGO bricks, something we’ve seen done before for recognizing playing cards. The retraining needed many images of different bricks, each already sorted into its type, and that’s where he came up with a clever trick: he used his own sorter to generate them. For example, to get a batch of images of 1×1 bricks in different colors and orientations, he simply ran them through the sorter, saved the images to files, and assigned them to the 1×1 brick class. He then did the retraining on his desktop machine with a GeForce GT 730 GPU, taking around 2.7 seconds per brick. For sorting, though, he runs the trained neural network on a Raspberry Pi, taking 3.8 seconds for each brick. The resulting sorter works quite well, sorting with 89% accuracy. Watch it in action in the video below.
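
For a sense of what that classification step looks like in code, here’s a minimal sketch of classifying one brick image with a retrained Inception v3 frozen graph, in the style of the TensorFlow 1.x retrain.py tutorial of that era. The file names, tensor names, and brick image path follow that tutorial and are assumptions, not [Paco Garcia]’s actual code.

```python
# A minimal sketch: classify one image with a retrained Inception v3
# frozen graph, TensorFlow 1.x style. File names are assumptions.
import tensorflow as tf

def load_graph(path):
    # Read the frozen GraphDef that the retrain.py tutorial writes out.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
    return graph

labels = [line.strip() for line in open('retrained_labels.txt')]
graph = load_graph('retrained_graph.pb')

with tf.Session(graph=graph) as sess:
    image_data = tf.gfile.GFile('brick.jpg', 'rb').read()
    # 'DecodeJpeg/contents' is Inception v3's JPEG input node and
    # 'final_result' is the softmax layer the retraining adds on top.
    probs = sess.run('final_result:0',
                     {'DecodeJpeg/contents:0': image_data})[0]
    best = probs.argmax()
    print(labels[best], probs[best])
```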

Using TensorFlow To Recognize Your Own Objects

When the time comes to add an object recognizer to your hack, all you need to do is choose from the many available models and retrain it for your particular objects of interest. To help with that, [Edje Electronics] has put together a step-by-step guide to using TensorFlow to retrain Google’s Inception object recognizer. He does it for Windows 10, since there’s already plenty of documentation out there for Linux OSes.

You’re not limited to just Inception though. Inception is one of the more accurate recognizers, but it can take a few seconds to process each image, so it’s better suited to a fast laptop or desktop machine. MobileNet is an example of one that’s less accurate but recognizes faster, making it a better fit for a Raspberry Pi or mobile phone.

You’ll need a few hundred images of your objects. These can either be scraped from an online source like Google Images, or you can take your own photos. If you use the latter approach, make sure to shoot from various angles and rotations, and under different lighting conditions. Fill your background with various other things, and even have some things partially obscuring your objects. This may sound like a long, tedious task, but it can be done efficiently. [Edje Electronics] is working on recognizing playing cards, so he first sprinkled them around his living room, added some clutter, and walked around taking pictures with his phone. Once they were uploaded, some easy-to-use software helped him label them all in around an hour. Note that he trained on 24 different objects, which is the number of distinct cards in a pinochle deck.
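
As a taste of the labeling step: tools like labelImg save one Pascal VOC XML file per photo, and workflows like this one typically gather those annotations into a single CSV before generating training records. The sketch below shows that gathering step; the directory layout and column names are assumptions, not [Edje Electronics]’s exact script.

```python
# A minimal sketch: collect labelImg's Pascal VOC XML annotations into
# one CSV table. Paths and column names are assumptions.
import csv
import glob
import xml.etree.ElementTree as ET

with open('labels.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    writer.writerow(['filename', 'class', 'xmin', 'ymin', 'xmax', 'ymax'])
    for xml_file in glob.glob('images/*.xml'):
        root = ET.parse(xml_file).getroot()
        filename = root.find('filename').text
        for obj in root.findall('object'):  # one row per labeled card
            box = obj.find('bndbox')
            writer.writerow([filename, obj.find('name').text] +
                            [box.find(tag).text
                             for tag in ('xmin', 'ymin', 'xmax', 'ymax')])
```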

You’ll need to install a lot of software and do some configuration, but he walks you through that too. Ideally, you’d use a computer with a GPU, but that’s optional, the difference being roughly three hours of training versus twenty-four. Be sure to both watch his video below and follow the steps on his GitHub page. The GitHub page is kept most up to date, but his video does a more thorough job of walking you through using the software, such as how to use the image labeling program.
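
Once training is done and the model has been exported, running it over a photo takes only a few lines. Here’s a minimal sketch using the TensorFlow 1.x Object Detection API’s standard frozen-graph tensor names; the graph and image file names are assumptions.

```python
# A minimal sketch: run a frozen Object Detection API graph over one
# photo and print the confident detections. File names are assumptions.
import numpy as np
import tensorflow as tf
from PIL import Image

detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

# The API expects a uint8 batch of shape [1, height, width, 3].
image = np.expand_dims(np.array(Image.open('cards.jpg')), axis=0)

with tf.Session(graph=detection_graph) as sess:
    boxes, scores, classes = sess.run(
        ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
        feed_dict={'image_tensor:0': image})

for box, score, cls in zip(boxes[0], scores[0], classes[0]):
    if score > 0.8:  # keep only confident detections
        print(int(cls), score, box)  # class id, confidence, [ymin xmin ymax xmax]
```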

Why is he training an object recognizer on playing cards? This is just one more step toward a blackjack-playing robot. He’d previously done an impressive job using OpenCV, though that algorithm handled only non-overlapping cards. Google’s Inception, however, recognizes partially obscured cards. This is a very interesting project, one we’ll be keeping an eye on. If you have any ideas for him, leave them in the comments below.

Google’s Inception Sees This Turtle As A Gun; Image Recognition Camouflage

The good people at MIT’s Computer Science and Artificial Intelligence Laboratory [CSAIL] have found a way of tricking Google’s InceptionV3 image classifier into seeing a rifle where there is actually a turtle. This is achieved by presenting the classifier with what are called ‘adversarial examples’.

Adversarial examples are a proven concept for 2D still images. In 2014, [Goodfellow], [Shlens], and [Szegedy] added imperceptible noise to the image of a panda so that it was classified as a gibbon. This method relies on the image reaching the classifier undisturbed, and can be defeated by zooming, blurring, or rotating the image.

The applicability to real-world shenanigans has therefore been seriously limited, but this changes everything. This weaponized turtle is a color 3D print that is reliably misclassified by the algorithm from any point of view. Achieving this requires some knowledge about the classifier in order to generate the misleading input. Image transformations such as rotation, scaling, and skewing, but also color corrections and even print errors, are applied to the input, and the result is then optimized to reliably mislead the algorithm across all of them. The whole process is documented in [CSAIL]’s paper on the method.
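
The paper’s core idea is Expectation Over Transformation: instead of optimizing the adversarial input against a single view, optimize it against the average loss over a whole distribution of transformations. The toy sketch below shows that loop in 2D using modern TensorFlow/Keras, with random 90° rotations standing in for the paper’s full set of pose, lighting, and print transformations; the target class index, learning rate, and loop sizes are illustrative assumptions, not the paper’s values.

```python
# A toy 2D sketch of Expectation Over Transformation: nudge an image so
# InceptionV3 prefers a chosen wrong class across random rotations.
import random
import tensorflow as tf

model = tf.keras.applications.InceptionV3(weights='imagenet')  # classifier under attack
image = tf.Variable(tf.random.uniform([1, 299, 299, 3]))       # stand-in "photo"
target = tf.constant([764])  # target ImageNet class id (index is an assumption)
opt = tf.keras.optimizers.Adam(learning_rate=0.01)

for step in range(100):
    with tf.GradientTape() as tape:
        loss = 0.0
        for _ in range(8):  # Monte Carlo estimate of the expectation
            # A random 90-degree rotation stands in for the paper's pose,
            # lighting, and print-error transformations.
            view = tf.image.rot90(image, k=random.randrange(4))
            probs = model(view * 2.0 - 1.0)  # Inception expects [-1, 1] input
            loss += tf.reduce_mean(
                tf.keras.losses.sparse_categorical_crossentropy(target, probs))
        loss /= 8.0
    grads = tape.gradient(loss, [image])
    opt.apply_gradients(zip(grads, [image]))         # push toward the target class
    image.assign(tf.clip_by_value(image, 0.0, 1.0))  # stay a valid image
```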

What this amounts to is camouflage from machine vision. Assuming that the method also works the other way around, the possibility of disguising guns (or anything else) as turtles has serious implications for automated security systems.

As this turtle targets the Inception algorithm, it should be able to fool the DIY image recognition talkbox that Hackaday’s own [Steven Dufresne] built.

Thanks to [Adam] for the tip.