A Soldering LightSaber For The Speedy Worker

We all have our preferences when it comes to soldering irons, and for [Marius Taciuc] the strongest of them all is for a quick heat-up. It has to be at full temperature in the time it takes him to get to work, or it simply won’t cut the mustard. His solution is a temperature controlled iron, but one with no ordinary temperature control. Instead of a normal feedback loop it uses a machine learning algorithm to find the quickest warm-up.

The elements he’s using have a thermocouple in series with the heating element itself, meaning that the power must be cut before the temperature can be measured. Those measurement pauses can’t be made too short or the readings become noisy, so under a traditional temperature control regimen there is a limit on how quickly the iron can be heated up. His approach is to turn the element on full-time for a period without stopping to measure the temperature, only reading the thermocouple after it has had a chance to heat up. The algorithm constantly learns how long the element must be switched on to reach a given temperature, and interpolates within that knowledge to arrive at the desired reading. It’s a clever way to make existing hardware perform new tricks, and we like that.
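
Roughly speaking, the controller only needs to learn a mapping from on-time to temperature and then interpolate within it. Here’s a minimal sketch of that idea in Python; it is not [Marius]’s firmware, and element_on() and read_thermocouple() are hypothetical stand-ins driving a crude simulated element.

# A minimal sketch of the "learn on-time vs. temperature, then interpolate" idea,
# not [Marius]'s actual firmware. element_on() and read_thermocouple() are
# hypothetical stand-ins for the hardware; here they drive a crude simulation.
import numpy as np

AMBIENT = 25.0

def element_on(seconds, state):
    """Placeholder: run the element at full power for `seconds` (simulated)."""
    state["temp"] += 40.0 * seconds * (1.0 - state["temp"] / 600.0)

def read_thermocouple(state):
    """Placeholder: cut power, let the series thermocouple settle, take a reading."""
    return state["temp"]

def learn_on_time(state, bursts=(0.5, 1.0, 2.0, 4.0)):
    """Fire a few full-power bursts, recording cumulative (on-time, temperature)
    pairs; the thermocouple is only read between bursts, never during them."""
    times, temps = [0.0], [read_thermocouple(state)]
    for t in bursts:
        element_on(t, state)
        times.append(times[-1] + t)
        temps.append(read_thermocouple(state))
    return times, temps

def on_time_for(target, times, temps):
    """Interpolate the learned curve to estimate the on-time for a target temperature."""
    return float(np.interp(target, temps, times))

state = {"temp": AMBIENT}
times, temps = learn_on_time(state)
print("estimated on-time for 250 degC: %.2f s" % on_time_for(250.0, times, temps))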

He’s appeared on these pages quite a few times over the years, but perhaps you’d like to see the first version of the same hardware. Meanwhile, watch the quick heat-up in action with a fuller explanation in the video below.

Continue reading “A Soldering LightSaber For The Speedy Worker”

Silicone And AI Power This Prayerful Robotic Intercessor

Even in a world as far off the rails as this one currently is, we’re going to go out on a limb and say that this machine learning, servo-powered prayer bot is going to be the strangest thing you see today. We’re happy to be wrong about that, though, and if we are, please send links.

“The Prayer,” as [Diemut Strebe]’s work is called, may look strange, but it’s another in a string of pieces by various artists that explore just what it means to be human at a time when machines are blurring the line between them and us. The hardware is straightforward: a silicone rubber representation of a human nasopharyngeal cavity, servos for moving the lips, and a speaker to create the vocals. Those are generated by a machine-learning algorithm that was trained against the sacred texts of many of the world’s major religions, including the Christian Bible, the Koran, the Bhagavad Gita, Taoist texts, and the Book of Mormon. The algorithm analyzes the structure of sacred verses and composes random prayers and hymns, voiced by Amazon Polly, that sound a lot like the real thing. That the lips move in synchrony with the ersatz devotions only adds to the otherworldliness of the piece. Watch it in action below.
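
For the curious, the text-to-speech leg is the easy part: Amazon Polly will happily voice whatever string a generator hands it. The snippet below is a minimal sketch of that step using boto3, with generate_prayer() standing in as a hypothetical placeholder for the trained model.

import boto3

def generate_prayer():
    # Hypothetical placeholder for the text produced by the trained model
    return "Blessed are those who wander, for they shall find the path."

polly = boto3.client("polly")
response = polly.synthesize_speech(
    Text=generate_prayer(),
    OutputFormat="mp3",   # the audio that ultimately drives the speaker
    VoiceId="Joanna",     # any Polly voice would do here
)
with open("prayer.mp3", "wb") as f:
    f.write(response["AudioStream"].read())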

We’ve featured several AI-based projects that poke at some interesting questions. This kinetic sculpture that uses machine learning to achieve balance comes to mind, while AI has even been employed in the search for spirits from the other side.

Continue reading “Silicone And AI Power This Prayerful Robotic Intercessor”

Train All The Things Contest Update

Back in January when we announced the Train All the Things contest, we weren’t sure what kind of entries we’d see. Machine learning is a huge and rapidly evolving field, after all, and the traditional barriers that computationally intensive processes face have been falling just as rapidly. Constraints are fading away, and we want you to explore this wild new world and show us what you come up with.

Where Do You Run Your Algorithms?

To give your effort a little structure, we’ve come up with four broad categories:

  • Machine Learning on the Edge
    • Edge computing, where systems reach out to cloud resources but run locally, is all the rage. It lets you leverage the power of other people’s computers (the cloud) for training a model, which is then executed locally. Edge computing is a great way to keep your data local.
  • Machine Learning on the Gateway
    • Pis, old routers, what-have-yous – we’ve all got a bunch of devices lying around that bridge the space between your local world and the cloud. What can you come up with that takes advantage of this unique computing environment?
  • Machine Learning in the Cloud
    • Forget about subtle — this category unleashes the power of the cloud for your application. Whether it’s Google, Azure, or AWS, show us what you can do with all that raw horsepower at your disposal.
  • Artificial Intelligence Blinky
    • Everyone’s “hardware ‘Hello, world'” is blinking an LED, and this is the machine learning version of that. We want you to use a simple microprocessor to run a machine learning algorithm. Amaze us with what you can make an Arduino do.

These Hackers Trained Their Projects, You Should Too!

We’re a little more than a month into the contest. We’ve seen some interesting entries, but of course we’re hungry for more! Here are a few that have caught our eye so far:

  • Intelligent Bat Detector – [Tegwyn☠Twmffat] has bats in his… backyard, so he built this Jetson Nano-powered device to capture their calls and classify them by species. It’s a fascinating adventure at the intersection of biology and machine learning.
  • Blackjack Robot – RAIN MAN 2.0 is [Evan Juras]’ cure for the casino adage of “The house always wins.” We wouldn’t try taking the Raspberry Pi card counter to Vegas, but it’s a great example of what YOLO can do.
  • AI-enabled Glasses – AI meets AR in ShAIdes, [Nick Bild]’s sunglasses equipped with a camera and Nano to provide a user interface to the world. Wave your hand over a lamp and it turns off. Brilliant!

You’ve got till noon Pacific time on April 7, 2020 to get your entry in, and a winner from each of the four categories will be awarded a $100 Tindie gift card, courtesy of our sponsor Digi-Key. It’s time to ramp up your machine learning efforts and get a project entered! We’d love to see more examples of straight cloud AI applications, and the AI blinky category remains wide open at this point. Get in there and give machine learning a try!

Machine Learning System Uses Images To Teach Itself Morse Code

Conventional wisdom holds that the best way to learn a new language is immersion: just throw someone into a situation where they have no choice, and they’ll learn by context. Militaries use immersion language instruction, as do diplomats and journalists, and apparently computers can now use it to teach themselves Morse code.

The blog entry by the delightfully callsigned [Mauri Niininen (AG1LE)] reads like a scientific paper, with good reason: [Mauri] really seems to know a thing or two about machine learning. His method uses curated training data to build a model, namely Morse snippets and their translations, as is the usual approach with such systems. But things take an unexpected turn right from the start, as [Mauri] uses a TensorFlow handwriting recognition implementation to train his model.

Using a few lines of Python, he converts short, known snippets of Morse to a grayscale image that looks a little like a barcode, with the light areas being the dits and dahs and the dark bars being silence. The first training run only resulted in about 36% accuracy, but a subsequent run with shorter snippets ended up being 99.5% accurate. The model was also able to pull Morse out of a signal with -6 dB signal-to-noise ratio, even though it had been trained with a much cleaner signal.
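
To get a feel for the trick, here is a minimal sketch of rendering Morse as a barcode-like grayscale image, light where the key is down and dark during silence. The timings, image size, and tiny code table are illustrative assumptions, not [Mauri]’s actual parameters.

import numpy as np
from PIL import Image

MORSE = {"C": "-.-.", "Q": "--.-"}  # tiny code table, just enough for the example

def morse_to_image(text, unit=4, height=32):
    cols = []
    for char in text:
        for i, symbol in enumerate(MORSE[char]):
            if i:
                cols += [0] * unit                                      # dark gap between elements
            cols += [255] * (unit if symbol == "." else 3 * unit)       # light dit or dah
        cols += [0] * (3 * unit)                                        # dark gap between characters
    row = np.array(cols, dtype=np.uint8)
    return Image.fromarray(np.tile(row, (height, 1)))                   # stretch into a 2-D "barcode"

morse_to_image("CQ").save("cq.png")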

Other Morse decoders use lookup tables to convert sound to text, but it’s important to note that this one doesn’t. By comparing patterns to labels in the training data, it inferred what the characters mean, and essentially taught itself Morse code in about an hour. We find that fascinating, and wonder what other applications this would be good for.

Thanks to [Gordon Shephard] for the tip.

New Contest: Train All The Things

The old way was to write clever code that could handle every possible outcome. But what if you don’t know exactly what your inputs will look like, or just need a faster route to the final results? The answer is Machine Learning, and we want you to give it a try during the Train All the Things contest!

It’s hard to find a more buzz-worthy term than Artificial Intelligence. Right now, where the rubber hits the road in AI is Machine Learning, and it’s never been so easy to get your feet wet in this realm.

From an 8-bit microcontroller to common single-board computers, you can do cool things like object recognition or color classification quite easily. Grab a beefier processor, dedicated ASIC, or lean heavily into the power of the cloud and you can do much more, like facial identification and gesture recognition. But the sky’s the limit. A big part of this contest is that we want everyone to get inspired by what you manage to pull off.

Yes, We Do Want to See Your ML “Hello World” Too!

Wait, wait, come back here. Have we already scared you off? Don’t read AI or ML and assume it’s not for you. We’ve included a category for “Artificial Intelligence Blinky” — your first attempt at doing something cool.

Need something simple to get you excited? How about Machine Learning on an ATtiny85 to sort Skittles candy by color? That uses just one color sensor for a quick and easy way to harvest data that forms a training set. But you could also climb up the ladder just a bit and make yourself a camera-based LEGO sorter, or use an IMU in a magic wand to detect which spell you’re casting. Need more scientific inspiration? We’re hoping someday someone will build a training set that classifies microscope shots of micrometeorites. But we’d be equally excited by projects that tackle robot locomotion, natural language, and all the other wild ideas you can come up with.
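
For a sense of just how small an “AI blinky” can be: once a handful of labeled sensor readings has been averaged into per-color centroids, classification can be as simple as a nearest-centroid lookup. The sketch below is a toy illustration with made-up RGB values, not the Skittles project’s trained model.

# Toy nearest-centroid colour classifier; these RGB centroids are made up for
# illustration, not values harvested by the Skittles sorter.
CENTROIDS = {
    "red":    (180, 40, 45),
    "green":  (60, 150, 70),
    "yellow": (200, 180, 60),
}

def classify(rgb):
    """Return the colour whose centroid is closest to the sensor reading."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(rgb, c))
    return min(CENTROIDS, key=lambda name: dist(CENTROIDS[name]))

print(classify((190, 50, 50)))  # -> "red"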

Our guess is you don’t really need prizes to get excited about this one… most people have been itching for a reason to try out machine learning for quite some time. But we do have $100 Tindie gift certificates for the most interesting entry in each of the four contest categories: ML on the edge, ML on the gateway, AI blinky, and ML in the cloud.

Get started on your entry. The Train All The Things contest is sponsored by Digi-Key and runs until April 7th.

Generating Beetles From Public Domain Images

Ever since [Ian Goodfellow] and his colleagues invented the generative adversarial network (GAN) in 2014, hundreds of projects, from style transfers to poetry generators, have been built on the concept of competing neural networks. Unlike traditional neural networks, GANs can generate new data that is statistically consistent with the training set.

[Bernat Cuni], the one-man design team behind [cunicode], came up with the idea of generating beetles using this technique. Inspired by material published on Machine Learning for Artists, he decided to run some visual experiments with zoological illustrations. The training data came from a public domain book hosted at archive.org, found through the Biodiversity Heritage Library. A combination of OpenCV and ImageMagick helped with extracting the individual illustrations into squared images.
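
The extraction step is the kind of thing OpenCV handles in a few lines. Here’s a minimal sketch of pulling dark illustrations off a light scanned plate and padding each crop out to a square; the filename, thresholds, and sizes are assumptions rather than [Bernat Cuni]’s actual pipeline.

import cv2

# "plate.jpg", the threshold, and the minimum size are assumptions for this sketch
page = cv2.imread("plate.jpg", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(page, 200, 255, cv2.THRESH_BINARY_INV)  # dark ink on light paper
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature

for i, c in enumerate(contours):
    x, y, w, h = cv2.boundingRect(c)
    if w < 50 or h < 50:                 # skip specks and stray marks
        continue
    crop = page[y:y + h, x:x + w]
    side = max(w, h)                     # pad the crop out to a white square
    top, left = (side - h) // 2, (side - w) // 2
    square = cv2.copyMakeBorder(crop, top, side - h - top, left, side - w - left,
                                cv2.BORDER_CONSTANT, value=255)
    cv2.imwrite(f"beetle_{i:03d}.png", cv2.resize(square, (128, 128)))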

[Cuni] then ran a DCGAN on the data set, generating a first batch of quasi-beetles after some tinkering with epochs and settings. After that underwhelming first experiment, he went with StyleGAN, setting up a machine at PaperSpace with one GPU and running the training for more than three days on 128 px images. The results were much better, but the images were fairly small and running the machine was expensive (over €125).

Given the success of that experiment, he decided to move over to Google Colab, using the free 12 hours of K80 GPU time per run to generate some more beetles. With the aim of producing HD beetles, he trained a model in Runway on 1024 px images, getting much better results after 3000 steps. That model was then moved to Google Colab to produce the HD outputs.

He has since continued to experiment with the beetles, producing some confusing generated images and fun collectibles.

Continue reading “Generating Beetles From Public Domain Images”

It Turns Out, Robots Need Tough Love Too

Showing robots adversarial behavior may be the key to improving their performance, according to a study conducted at the University of Southern California. While generative adversarial networks (GANs), in which two neural networks compete in a game, are well established, this is the first time adversarial human users have been brought into the learning loop.

The report was presented at the International Conference on Intelligent Robots and Systems, describing an experiment in which reinforcement learning was used to train robotic systems toward general-purpose object manipulation. For most robots, a huge amount of training data is needed before they can manipulate objects in a human-like way.

One line of research that has been successful in overcoming this problem is keeping a “human in the loop”, with a person providing feedback to the system on its performance. Most algorithms assume a cooperative human assistant, but by having the human act against the system instead, the robot may be pushed to develop robustness to real-world complexities.

The experiment involved a robot attempting to grasp an object in a computer simulation. A human watches the simulated grasp and tries to snatch the object away from the robot if the grasp succeeds. This helps the robot discern between weak and firm grasps, a seemingly crazy idea from the researchers that managed to work. The system trained with the adversary rejected unstable grasps, quickly learning robust grasps for different objects.
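
In reinforcement-learning terms, the adversary effectively reshapes the reward: a grasp only pays off in full if it survives the snatch attempt. The toy sketch below illustrates that logic with a hypothetical stand-in environment, not the paper’s actual simulation code.

class ToyGraspEnv:
    """Hypothetical stand-in for the simulated grasping environment."""
    def execute_grasp(self, action):
        return action.get("closed", False)      # "succeeds" if the gripper closed
    def apply_adversarial_pull(self, action):
        return action.get("force", 0.0) > 0.5   # only a firm grip survives the snatch

def grasp_reward(env, action):
    if not env.execute_grasp(action):
        return 0.0                              # no grasp, nothing to contest
    # The adversary only acts on successful grasps, so a weak-but-lucky grasp is
    # penalised while a firm one earns the full reward.
    return 1.0 if env.apply_adversarial_pull(action) else 0.1

print(grasp_reward(ToyGraspEnv(), {"closed": True, "force": 0.8}))  # -> 1.0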

Experiments like these test the assumptions baked into the learning task for robotic applications, leading to better stress-tested systems that are more likely to work in real-world situations. Take a look at the interview in the video below the break.

Continue reading “It Turns Out, Robots Need Tough Love Too”