New Contest: Train All The Things

The old way was to write clever code that could handle every possible outcome. But what if you don’t know exactly what your inputs will look like, or just need a faster route to the final results? The answer is Machine Learning, and we want you to give it a try during the Train All the Things contest!

It’s hard to find a more buzz-worthy term than Artificial Intelligence. Right now, where the rubber hits the road in AI is Machine Learning, and it’s never been so easy to get your feet wet in this realm.

From an 8-bit microcontroller to common single-board computers, you can do cool things like object recognition or color classification quite easily. Grab a beefier processor, dedicated ASIC, or lean heavily into the power of the cloud and you can do much more, like facial identification and gesture recognition. But the sky’s the limit. A big part of this contest is that we want everyone to get inspired by what you manage to pull off.

Yes, We Do Want to See Your ML “Hello World” Too!

Wait, wait, come back here. Have we already scared you off? Don’t read AI or ML and assume it’s not for you. We’ve included a category for “Artificial Intelligence Blinky” — your first attempt at doing something cool.

Need something simple to get you excited? How about Machine Learning on an ATtiny85 to sort Skittles candy by color? That uses just one color sensor for a quick and easy way to harvest data that forms a training set. But you could also climb up the ladder just a bit and make yourself a camera-based LEGO sorter, or use an IMU in a magic wand to detect which spell you’re casting. Need more scientific inspiration? We’re hoping someday someone will build a training set that classifies microscope shots of micrometeorites. But we’d be equally excited with projects that tackle robot locomotion, natural language, and all the other wild ideas you can come up with.

Our guess is you don’t really need prizes to get excited about this one… most people have been itching for a reason to try out machine learning for quite some time. But we do have $100 Tindie gift certificates for the most interesting entry in each of the four contest categories: ML on the edge, ML on the gateway, AI blinky, and ML in the cloud.

Get started on your entry. The Train All The Things contest is sponsored by Digi-Key and runs until April 7th.

Generating Beetles From Public Domain Images

Ever since [Ian Goodfellow] and his colleagues invented the generative adversarial network (GAN) in 2014, hundreds of projects, from style transfers to poetry generators, have been built on the concept of competing neural networks. Unlike traditional neural networks, GANs can generate new data that fits statistically within the same distribution as the training set.
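If you’ve never seen the adversarial game written down, here’s a minimal sketch of the idea in PyTorch: two tiny networks, one forging data and one judging it. The layer sizes and the stand-in “real data” are ours, purely for illustration, not anything from [Goodfellow]’s paper:

```python
# Minimal GAN training loop (PyTorch) -- a toy sketch of the adversarial
# game, not the original paper's code. Shapes and hyperparameters are
# illustrative placeholders.
import torch
import torch.nn as nn

LATENT, DATA = 16, 64  # latent vector size, "real data" vector size

G = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, DATA))
D = nn.Sequential(nn.Linear(DATA, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=32):
    # Stand-in for real training data (e.g. flattened beetle images).
    return torch.randn(n, DATA) * 0.5 + 1.0

for step in range(1000):
    # --- Discriminator: label real samples 1, generated samples 0 ---
    real = real_batch()
    fake = G(torch.randn(32, LATENT)).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- Generator: try to fool the discriminator into outputting 1 ---
    fake = G(torch.randn(32, LATENT))
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

At convergence the generator’s fakes are statistically hard to tell from the training data, which is exactly the property the beetle project exploits.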

[Bernat Cuni], the one-man design team behind [cunicode], came up with the idea of generating beetles using this technique. Inspired by material published on Machine Learning for Artists, he decided to run some visual experiments with zoological illustrations. The training data came from a public domain book hosted at archive.org, found through the Biodiversity Heritage Library. A combination of OpenCV and ImageMagick helped extract the individual illustrations and crop them to square images.
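We haven’t seen [Cuni]’s actual extraction script, but the OpenCV half of such a pipeline might look something like this sketch; the thresholds, minimum blob size, and filenames are our own placeholder assumptions:

```python
# Rough sketch of the OpenCV half of the pipeline: find dark illustrations
# on a light page scan and save each one as a square crop. The exact
# thresholds and the input filename are placeholder assumptions.
import cv2

page = cv2.imread("plate_scan.png")
gray = cv2.cvtColor(page, cv2.COLOR_BGR2GRAY)
# Invert-threshold so ink becomes white blobs on a black background.
_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for i, c in enumerate(contours):
    x, y, w, h = cv2.boundingRect(c)
    if w * h < 5000:           # skip specks and page noise
        continue
    side = max(w, h)           # pad the bounding box out to a square
    cx, cy = x + w // 2, y + h // 2
    x0, y0 = max(cx - side // 2, 0), max(cy - side // 2, 0)
    crop = page[y0:y0 + side, x0:x0 + side]
    cv2.imwrite(f"beetle_{i:03d}.png", cv2.resize(crop, (128, 128)))
```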

[Cuni] then ran a DCGAN on the data set, generating a first batch of quasi-beetles after some tinkering with epochs and settings. Unhappy with that first attempt, he switched to StyleGAN, setting up a machine at PaperSpace with one GPU and running the training for more than three days on 128 px images. The results were much better, but still fairly small, and the machine time was expensive (more than €125).

Encouraged by that success, he moved over to Google CoLab, using its free 12 hours of K80 GPU time per run to generate some more beetles. Intent on producing HD beetles, he then used Runway trained on 1024 px beetles, seeing much better results after 3000 steps; the model was moved back to Google CoLab to produce the HD outputs.

He has since continued to experiment with the beetles, producing some confusing generated images and fun collectibles.


It Turns Out, Robots Need Tough Love Too

Showing robots adversarial behavior may be the key to improving their performance, according to a study conducted by the University of Southern California. While generative adversarial networks (GANs), in which two neural networks compete in a game, have been demonstrated before, this is the first time an adversarial human has been put in the learning loop.

The report, presented at the International Conference on Intelligent Robots and Systems, describes an experiment in which reinforcement learning was used to train a robotic grasping system toward general-purpose manipulation. For most robots, a huge amount of training data is necessary before they can manipulate objects in a human-like way.

A line of research that has been successful in overcoming this problem keeps a “human in the loop” who provides feedback to the system on its performance. Most algorithms assume a cooperative human assistant, but a human acting against the system may instead push the robot to develop robustness to real-world complexities.

The experiment involved a robot attempting to grasp an object in a computer simulation. A human observer watches the simulated grasp and tries to snatch the object away if the grasp is successful. This helps the robot discern weak grasps from firm ones, an idea from the researchers that sounds crazy but managed to work: the system trained with the adversary rejected unstable grasps and quickly learned robust grasps for different objects.
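The paper’s actual reward shaping isn’t spelled out here, but a toy, self-contained sketch of the idea, with all the numbers invented, might look like this:

```python
# Toy sketch of the adversarial-grasp idea: a grasp only earns full reward
# if it survives a snatch attempt. This is our reading of the setup, not
# the paper's actual reward function; all thresholds are invented.
import random

def grasp_strength(policy_quality):
    """Stand-in for a simulated grasp; stronger policies grip harder."""
    return random.random() * policy_quality

def reward(strength, adversary=True):
    if strength < 0.2:
        return 0.0                 # object never picked up
    if adversary and strength < 0.6:
        return 0.1                 # adversary snatched it: weak grasp exposed
    return 1.0                     # firm grasp survives the snatch

# A cooperative setting rewards many grasps the adversary would reject:
trials = [grasp_strength(0.9) for _ in range(1000)]
print("no adversary:", sum(reward(s, adversary=False) for s in trials) / 1000)
print("adversary:   ", sum(reward(s, adversary=True) for s in trials) / 1000)
```

The gap between those two averages is the signal the adversary adds: grasps that merely succeed are no longer good enough, only grasps that can’t be snatched away.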

Experiments like these can test the assumptions made in the learning task for robotic applications, leading to better stress-tested systems more inclined to work in real-world situations. Take a look at the interview in the video below the break.


Tiny Machine Learning On The ATtiny85

We tend to think that the lowest point of entry for machine learning (ML) is on a Raspberry Pi, which it definitely is not. [EloquentArduino] has been pushing the limits to the low end of the scale, and managed to get a basic classification model running on the ATtiny85.

Using his experience of running ML models on an old Arduino Nano, he had created a generator that can export C code from a scikit-learn model. He tried using this generator to compile a support-vector colour classifier for the ATtiny85, but ran into a problem with the Arduino ATtiny85 compiler not supporting a variadic function used by the generator. Fortunately, he had already experimented with an alternative approach that uses a non-variadic function, so he was able to dust that off and get it working. The classifier accepts inputs from an RGB sensor to identify a set of objects by colour. The model ended up fitting easily within the capabilities of the diminutive ATtiny85, using only 41% of the available flash and 4% of the available RAM.
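As a rough sketch of the desktop half of such a workflow: train a support-vector colour classifier in scikit-learn, then export it as C. The RGB samples and labels below are invented, and while micromlgen is the exporter [EloquentArduino] publishes, treat the exact API here as an assumption:

```python
# Minimal sketch of the desktop half: train a support-vector colour
# classifier on RGB readings, then export it as plain C for the MCU.
# The samples and labels are invented placeholders; micromlgen is
# [EloquentArduino]'s exporter (pip install micromlgen).
from sklearn.svm import SVC
from micromlgen import port

# (R, G, B) readings from the colour sensor and the object each came from.
X = [
    (220, 30, 40), (200, 45, 50),    # red objects
    (30, 180, 60), (45, 190, 70),    # green objects
    (40, 50, 210), (55, 60, 190),    # blue objects
]
y = [0, 0, 1, 1, 2, 2]

clf = SVC(kernel="linear").fit(X, y)

# port() returns a C source string with a predict() function small enough
# to compile for the ATtiny85.
print(port(clf))
```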

It’s important to note what [EloquentArduino] isn’t doing here: running an artificial neural network. They’re just too inefficient in terms of memory and computation time to fit on an ATtiny. But neural nets aren’t the only game in town, and if your task is classifying something based on a few inputs, like reading a gesture from accelerometer data, or naming a color from a color sensor, the approach here will serve you well. We wonder if this wouldn’t be a good solution to the pesky problem of identifying bats by their calls.

We really like how approachable machine learning has become, and if you’re keen to give ML a go, have a look at the rest of the EloquentArduino blog; it’s a small goldmine.

We’re getting more and more machine-learning-related hacks, like basic ML on an Arduino Uno, and Lego sorting using ML on a Raspberry Pi.


Hackaday Links: December 29, 2019

The retrocomputing crowd will go to great lengths to recreate the computers of yesteryear, and no matter which species of computer is being restored, getting it just right is a badge of honor in the community. The case and keyboard obviously play a big part in that look, so when a crowdfunding campaign to create new keycaps for the C64 was announced, Commodore fans jumped to fund it. Sadly, more than four years later, the promised keycaps haven’t been delivered. One disappointed backer, Jim Drew, decided he was sick of waiting, so he delved into the world of keycap injection molding and started his own competing campaign. Jim details his adventures in his Indiegogo campaign, which makes for good reading even if you’re not into Commodore refurbishment. Here’s hoping Jim has better luck than the competition did.

Looking for anonymity in our increasingly surveilled world? You’re not alone, and in fact, we predict facial recognition spoofing products and methods will be a growth industry in the new decade. Aside from the obvious – and often illegal – approach of wearing a mask that blocks most of the features machine learning algorithms use to quantify your face, one now has another option, in the form of a colorful pattern that makes you invisible to the YOLOv2 algorithm. The pattern, which looks like a soft-focus crowd scene rendered in Mardi Gras colors, won’t make the algorithm think you’re someone else, but it will prevent you from being classified as a person. It won’t work with any other AI algorithm, but it’s still an interesting phenomenon.
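If you want to test such a patch yourself, a rough starting point is to run stock YOLOv2 through OpenCV’s dnn module and see whether a “person” detection survives; the weight and config filenames below are placeholders for the standard Darknet downloads, and the output parsing is our own simplification:

```python
# Rough sketch of verifying the patch: run stock YOLOv2 over a frame and
# check whether a "person" detection survives. Filenames are placeholders
# for the standard Darknet config/weights; threshold is arbitrary.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")
frame = cv2.imread("test_frame.jpg")

blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                             swapRB=True, crop=False)
net.setInput(blob)
out = net.forward()  # rows: [cx, cy, w, h, objectness, class scores...]

person_found = False
for det in out:
    class_scores = det[5:]
    if np.argmax(class_scores) == 0 and class_scores[0] > 0.5:  # COCO 0 = person
        person_found = True

print("person detected" if person_found else "patch worked: no person found")
```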

We saw a great hack come in this week about using an RTL-SDR to track down a water leak. Clayton’s water bill suddenly skyrocketed, and he wanted to track down the source. Luckily, his water meter uses the encoder receive-transmit (ERT) protocol on the 900 MHz ISM band to report his usage, so he threw an SDR dongle and rtlamr at the problem. After logging his data, massaging it a bit with some Python code, and graphing water consumption over time, he found that water was being used even when nobody was home. That helped him find the culprit – leaky flap valves in the toilets resulting in a slow drip that ran up the bill. There were probably other ways to attack the problem, but we like this approach just fine.
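We haven’t seen Clayton’s exact scripts, but the massage-and-graph step could be as simple as this sketch, assuming rtlamr was run with -format=json and its output logged to a file; the field names follow rtlamr’s SCM messages and the meter ID is a placeholder, so adapt both for your own meter:

```python
# Sketch of the "massage and graph" step, assuming rtlamr output was saved
# as one JSON object per line in meter_log.json. Field names follow
# rtlamr's SCM messages; the meter ID is a placeholder.
import json
from datetime import datetime
import matplotlib.pyplot as plt

MY_METER_ID = 12345678  # placeholder: your meter's ID from the bill

times, readings = [], []
with open("meter_log.json") as f:
    for line in f:
        msg = json.loads(line)
        if msg["Message"]["ID"] != MY_METER_ID:
            continue  # neighbours' meters broadcast on the same band
        times.append(datetime.fromisoformat(msg["Time"][:19]))
        readings.append(msg["Message"]["Consumption"])

# Consumption is a running total; the deltas show usage per interval.
deltas = [b - a for a, b in zip(readings, readings[1:])]
plt.step(times[1:], deltas, where="post")
plt.ylabel("usage per interval (meter units)")
plt.title("Water use over time: flat nonzero stretches betray a leak")
plt.show()
```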

Are your flex PCBs making you cry? Friend of Hackaday Drew Fustini sent us a tip on teardrop pads to reduce the mechanical stress on traces when the board flexes. The trouble is that KiCad can’t natively create teardrop pads. Thankfully an action plugin makes teardrops a snap. Drew goes into a bit of detail on how the plugin works and shows the results of some test PCBs he made with them. It’s a nice trick to keep in mind for your flexible design work.

Lego Machine Uses Machine Learning To Sort Itself Out

In our opinion, the primary evidence of a properly lived childhood is an enormous box of every conceivable Lego piece, from simple bricks to girders and gears, all with a small town’s worth of minifigs swimming through it. It takes years of birthdays and Christmases to accumulate a Lego collection best measured by the pound, but like anything worth doing, it’s worth overdoing.

But what to do with such a collection? Digging through it to find Just the Right Piece™ can be frustrating, and bringing order to the chaos with manual sorting is just so impractical. How about putting some of those bricks to work with a machine-vision Lego sorter built from Lego?

[Daniel West]’s approach is hardly new – we’ve even featured brick-built Lego sorters before – but we’re impressed by its architecture. First, the mechanical system is amazing. It uses a series of conveyors to transport bricks from a hopper, winnowing the stream down as it goes. The final step is a vibratory feeder that places one piece on a conveyor at a time. Those pass under a camera attached to a Raspberry Pi, where OpenCV does background subtraction from the video stream, applies bounding boxes to the parts, and runs the images through a convolutional neural network (CNN) that’s been trained on a database of every Lego part. Servo-controlled gates then direct the parts into one of 18 bins. See it in action in the video below.
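We don’t have [West]’s code in front of us, but the vision front end he describes might be sketched roughly like this, with the thresholds and the classifier stub as placeholder assumptions:

```python
# Rough sketch of the vision front end: subtract the empty-belt background,
# box up whatever remains, and hand the crop to a classifier. Thresholds
# and the classify() stand-in are placeholder assumptions.
import cv2

cap = cv2.VideoCapture(0)            # camera watching the conveyor
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, None)   # drop speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:
            continue                                       # too small to be a brick
        crop = cv2.resize(frame[y:y + h, x:x + w], (224, 224))
        # part_id = classify(crop)  # CNN inference, then drive the gate servo
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("belt", frame)
    if cv2.waitKey(1) == 27:                               # Esc to quit
        break
```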

We must admit that we’re not sure what the sorting criteria are, as some bins seem nearly as chaotic as the input mix. Still, we appreciate the fine engineering, and award extra style points for all the Lego goodness.


AI Knows If The Pitch Is On Target Before You Do

Pitching a baseball is about accuracy and speed. A swift ball on target is the goal, allowing the pitcher to strike out the batter. [Nick Bild] created an AI system that can determine a ball’s trajectory in mid-flight, based on a camera feed.

The system uses an NVIDIA Jetson AGX Xavier, fitted with a USB camera running at 100 FPS. A Nerf tennis ball launcher is used to fire a ball towards the batter. Once triggered, the AI uses the camera to capture two successive images of the ball in flight. These images are fed into a convolutional neural network (CNN), and the software determines whether the ball is heading for the strike zone or moving off-target. It uses this information to light a green or red LED, respectively, to alert the batter.
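For a feel of the two-frame idea, here’s a sketch of stacking both captures as channels into a small CNN with a binary strike/ball output; the layer sizes and resolution are our inventions, not [Nick]’s actual network:

```python
# Sketch of the two-frame idea: stack both captures as channels and let a
# small CNN predict strike vs. ball. Layer sizes and resolution are
# invented; the real network may differ.
import torch
import torch.nn as nn

class StrikeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, 5, stride=2), nn.ReLU(),   # 6 = two RGB frames stacked
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)                     # strike zone / off-target

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

net = StrikeNet()
frame_pair = torch.rand(1, 6, 120, 160)   # two consecutive camera frames
logits = net(frame_pair)
led = "green" if logits.argmax(1).item() == 0 else "red"
print(led)
```

The nice part of this framing is that the network never has to compute an explicit trajectory; the displacement between the two frames is implicitly encoded in the stacked channels.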

While such a system is unlikely to appear in professional baseball anytime soon, it shows the sheer capability of neural network systems to quickly and effectively analyse data in ways simply impossible for mere humans. [Nick]’s future goals involve running the system on faster hardware, and expanding it to determine effects like spin and more accurate positioning within the strike zone.

We’ve seen CNNs do everything from naming tomatoes to finding parking spaces. Video after the break.
