Machine Learning In The Kitchen Makes For Tasty Mashup Desserts

What did you do during lockdown? A whole lot of people turned to baking in between trips to the store to search for toilet paper and hand sanitizer. Many of them baked bread for some reason, but like us, [Sara Robinson] turned to sweeter stuff to get through it.

The first Cakie ever made. Image via Google Cloud

Her pandemic ponderings wandered into the realm of baking existentialist questions, like what separates baked goods from each other, categorically speaking? What is the science behind the crunchiness of cookies, the sponginess of cake, and the fluffiness of bread?

As a developer advocate for Google Cloud, [Sara] turned to machine learning to figure out why the cookie crumbles. She collected 33 recipes each of cookies, cake, and bread and built a TensorFlow model to analyze them, which scored each recipe as a set of cookie/cake/bread percentages. Not only was the model able to accurately classify recipes by type, [Sara] was also able to use it to come up with a 50/50 cookie-cake hybrid recipe. The AI delivered a list of ingredients, to which she added vanilla extract and chocolate chips for flavor. From there, she had to wing it and come up with her own baking directions for the Cakie.
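
For a rough idea of how such a classifier might look, here’s a minimal Keras sketch. To be clear, this is not [Sara]’s actual code: the feature order and the toy recipes are invented for illustration, and it simply treats normalized ingredient proportions as inputs and reads the softmax output as cookie/cake/bread percentages.

```python
# Minimal Keras sketch (not [Sara]'s actual code): normalized ingredient
# proportions in, softmax out, with the outputs read as cookie/cake/bread
# percentages. Feature order and recipes are invented for illustration.
import numpy as np
import tensorflow as tf

# Hypothetical feature order: flour, sugar, butter, egg, yeast, baking powder, milk, water
X = np.array([
    [0.45, 0.25, 0.20, 0.05, 0.00, 0.01, 0.00, 0.04],   # cookie-like
    [0.35, 0.30, 0.15, 0.10, 0.00, 0.02, 0.08, 0.00],   # cake-like
    [0.60, 0.02, 0.03, 0.00, 0.02, 0.00, 0.00, 0.33],   # bread-like
], dtype=np.float32)
y = np.array([0, 1, 2])   # 0 = cookie, 1 = cake, 2 = bread

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(X.shape[1],)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=200, verbose=0)

# Averaging the cookie-like and cake-like rows gives a candidate "Cakie" input
# whose scores should land somewhere near 50/50 on the first two classes.
cakie_candidate = (X[0] + X[1]) / 2
print(model.predict(cakie_candidate[np.newaxis, :]))
```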

Continue reading “Machine Learning In The Kitchen Makes For Tasty Mashup Desserts”

Hide And Seek AI Shows Emergent Tool Use

Machine learning has come a long way in the last decade; as it turns out, throwing huge wads of computing power at piles of linear algebra makes creating artificial intelligence relatively easy. OpenAI has been working in the field for a while now, and recently observed some exciting behaviour in a hide-and-seek game it built.

The game itself is simple; two teams of AI bots play a game of hide-and-seek, with the red bots being rewarded for spotting the blue ones, and the blue ones being rewarded for avoiding their gaze. Initially, nothing of note happens, but as the bots randomly run around, they slowly learn. Over millions of trials, the seekers first learn to find the hiders, while the hiders respond by building barriers to hide behind. The seekers then learn to use ramps to vault over the barriers, while the blue bots learn to bend the game’s physics and throw the ramps out of the playfield. It ends with the seekers learning to skate around on blocks and the hiders building tight little barriers. It’s a continual arms race of techniques between the two sides, organically developed as the bots play against each other over time.
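
The actual training setup is full-blown multi-agent reinforcement learning in a physics sandbox, but the opposing reward signal described above is simple enough to sketch. The toy function below is our own illustration, not OpenAI’s code:

```python
# Toy illustration of the opposing rewards described above (not OpenAI's code):
# the seekers are rewarded when any hider is in view, and the hiders receive
# the exact opposite, which is what fuels the arms race.
def team_rewards(any_hider_visible: bool) -> dict:
    seeker_reward = 1.0 if any_hider_visible else -1.0
    return {"seekers": seeker_reward, "hiders": -seeker_reward}

print(team_rewards(True))    # {'seekers': 1.0, 'hiders': -1.0}
print(team_rewards(False))   # {'seekers': -1.0, 'hiders': 1.0}
```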

It’s a great study, and it’s particularly interesting to note how much longer it takes behaviours to develop when the team switches from a basic fixed scenario to a changeable world with more variables. We’ve seen other interesting gaming efforts with machine learning, too – like teaching an AI to play Trackmania. Video after the break.

Continue reading “Hide And Seek AI Shows Emergent Tool Use”

AI Learns To Drive Trackmania

Machine learning has long been a topic of interest for humanity, but only in recent years have we had broad access to great computing power, enabling the average person to dive in. [Yosh] recently decided to put an AI to work learning how to race in Trackmania.

After early experiments with supervised learning, [Yosh] decided to implement a genetic algorithm to produce an AI to drive in the game. The AI takes the distances to the track walls as inputs, and outputs steering and accelerator values. Starting with 100 AIs in generation 1, [Yosh] iterated by choosing the AIs that covered the longest distance in 13 seconds. Once the AIs started to get the hang of the first few corners, he changed the training to instead prioritize the lowest time taken to reach each of the checkpoints along the track.
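
Here’s a bare-bones sketch of that style of genetic algorithm; it’s our own illustration rather than [Yosh]’s implementation, the sensor count and mutation settings are arbitrary, and the stand-in fitness function marks where the game’s distance or checkpoint times would be plugged in.

```python
# Bare-bones genetic algorithm in the spirit described above (our illustration,
# not [Yosh]'s code). Each "driver" is a random weight matrix mapping
# wall-distance sensors to (steering, throttle).
import numpy as np

N_SENSORS, POP_SIZE, KEEP, GENERATIONS = 8, 100, 10, 100
rng = np.random.default_rng(0)
PROBE = np.linspace(0.1, 1.0, N_SENSORS)   # a fixed, fake sensor reading

def act(weights, sensor_distances):
    """Map wall distances to (steering in [-1, 1], throttle in [0, 1])."""
    raw = np.tanh(sensor_distances @ weights)
    return raw[0], (raw[1] + 1) / 2

def fitness(weights):
    # Placeholder: in the real setup this would be the distance covered in
    # 13 seconds, and later the time to reach each checkpoint, measured in-game.
    steering, throttle = act(weights, PROBE)
    return throttle - abs(steering)   # pretend "straight and fast" is good

population = [rng.normal(size=(N_SENSORS, 2)) for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    parents = sorted(population, key=fitness, reverse=True)[:KEEP]
    # Refill the population with mutated copies of the best drivers.
    population = [p + rng.normal(scale=0.1, size=p.shape)
                  for p in parents for _ in range(POP_SIZE // KEEP)]

best = max(population, key=fitness)
print("best stand-in fitness:", round(fitness(best), 3))
```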

The AI improved over time, and over 100 generations, got down to a 23.48s time on the test track, versus 19.63s for [Trabadia], a talented human. We’d love to see how much better the AI could do with more training. [Yosh] is trying more experiments, like providing extra feedback in the AI fitness function to keep it from hitting the walls. It’s not the first time we’ve seen a genetic algorithm used to train a racing AI, either. Video after the break.

Continue reading “AI Learns To Drive Trackmania”

A Gesture Recognizing Armband

Gesture recognition usually involves some sort of optical system watching your hands, but researchers at UC Berkeley took a different approach. Instead they are monitoring the electrical signals in the forearm that control the muscles, and creating a machine learning model to recognize hand gestures.

The sensor system is a flexible PET armband with 64 electrodes screen printed onto it in silver conductive ink, attached to a standalone AI processing module. Since everyone’s arm is slightly different, the system needs to be trained for a specific user, but that also means the individual electrical signals don’t have to be isolated, since the model simply learns to recognize each user’s overall patterns.

The challenging part of this is that the patterns don’t remain constant over time, and will change depending on factors such as sweat, arm position, and even just biological changes. To deal with this, the model can update itself on the device over time as the signal changes. Another part of this research that we appreciate is that all the inferencing, training, and updating happens locally on the AI chip in the armband. There is no need to send data to an external device or the “cloud” for processing, updating, or third-party data mining. Unfortunately the research paper with all the details is behind a paywall.
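
With the paper behind a paywall we can only guess at the model, so treat the snippet below as a toy sketch of the general idea rather than the researchers’ method: it matches 64-channel feature vectors against per-gesture centroids and nudges the winning centroid toward each new sample, which is one simple way a classifier could track that kind of drift on-device. Every name and number in it is our own assumption.

```python
# Toy sketch only (the actual model is behind the paywall): nearest-centroid
# matching over 64-channel EMG feature vectors, with each prediction dragging
# its centroid slightly toward the new sample to follow signal drift.
import numpy as np

N_CHANNELS = 64

class AdaptiveGestureClassifier:
    def __init__(self, gestures, learning_rate=0.05):
        self.centroids = {g: np.zeros(N_CHANNELS) for g in gestures}
        self.lr = learning_rate

    def calibrate(self, gesture, samples):
        # Per-user training: average the feature vectors recorded for a gesture.
        self.centroids[gesture] = np.mean(samples, axis=0)

    def predict(self, features):
        return min(self.centroids, key=lambda g: np.linalg.norm(features - self.centroids[g]))

    def update(self, features):
        # On-device adaptation: nudge the winning centroid toward the latest sample.
        gesture = self.predict(features)
        self.centroids[gesture] += self.lr * (features - self.centroids[gesture])
        return gesture

rng = np.random.default_rng(1)
means = {"fist": 0.0, "open_hand": 2.0, "pinch": 4.0}   # fake per-gesture signal levels
clf = AdaptiveGestureClassifier(list(means))
for gesture, level in means.items():
    clf.calibrate(gesture, rng.normal(loc=level, size=(20, N_CHANNELS)))
print(clf.update(rng.normal(loc=0.0, size=N_CHANNELS)))   # should report "fist"
```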

Continue reading “A Gesture Recognizing Armband”

Giving Blind Runners Independence With AI

Being able to see, move, and exercise independently is something most of us take for granted. [Thomas Panek] was an avid runner before losing his sight due to a genetic condition, and had to rely on other humans and guide dogs to run again. After challenging attendees at a Google hackathon, Project Guideline was established to give blind runners (or walkers) independence from a cane, dog, or another human while exercising outdoors. Using a smartphone running line-following AI software and a pair of bone conduction headphones, users can be guided along a path with a line painted on it. You need to watch the video below to get a taste of just how incredible it is for the users.

Getting a wheeled robot to follow a line is relatively simple, but a running human is by no means a stable sensor platform. At the previously mentioned hackathon, developers put together a rough proof of concept with a smartphone, using its camera to recognize a painted line on the ground and provide left/right audio cues. As the project developed, the smartphone was attached to a waist belt and bone conduction headphones were used, which don’t affect audio situational awareness as much as normal headphones.
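
Project Guideline runs its own on-device ML model, so the snippet below is only a toy illustration of the core loop: find a brightly colored line in a camera frame, measure its horizontal offset from the image center, and map that offset to left/right cue strength. The color thresholds and the panning convention are assumptions made for the example.

```python
# Toy illustration only (Project Guideline uses its own on-device model):
# find a brightly colored line in a frame, measure its horizontal offset
# from the image center, and turn that into left/right cue strength.
import cv2
import numpy as np

def line_offset(frame_bgr, lower_hsv=(20, 80, 80), upper_hsv=(40, 255, 255)):
    """Return the line's offset in [-1, 1]; None if no line pixels are found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv, np.uint8), np.array(upper_hsv, np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None   # line not visible; in practice the runner would get a warning
    cx = m["m10"] / m["m00"]                   # centroid column of the line pixels
    return 2.0 * cx / frame_bgr.shape[1] - 1.0

def audio_cue(offset):
    # One plausible mapping: pan a tone toward the side the line has drifted to.
    return {"left": max(0.0, -offset), "right": max(0.0, offset)}

# Synthetic test frame: a yellow stripe a little to the right of center.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.rectangle(frame, (200, 0), (220, 239), (0, 255, 255), -1)   # BGR yellow
off = line_offset(frame)
print(off, audio_cue(off))
```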

The shaking and side-to-side movement of running, along with varying light conditions and visual obstructions outdoors, made the problem more difficult to solve, but within a year the developers had completed successful running tests with [Thomas] on a well-lit indoor track and an outdoor pedestrian path with a temporary line. For the first time in 25 years, [Thomas] was able to run independently.

While guide dogs have proven effective for both daily life and running, they cost approximately $60,000 over an average working life of 8 years, putting them out of reach of many sight-impaired people around the world. Project Guideline is still in the early stages, and real-world problems like obstacles and traffic still need to be addressed, but there is massive potential.

Continue reading “Giving Blind Runners Independence With AI”


Hackaday Links: December 6, 2020

By now you’ve no doubt heard of the sudden but not unexpected demise of the iconic Arecibo radio telescope in Puerto Rico. We have been covering the agonizing end of Arecibo almost from the moment the first cable broke in August, through a eulogy for the observatory, and most recently its final catastrophic collapse this week. That last article contained amazing video of the final collapse, including up-close and personal drone shots of the cables breaking. For a more in-depth analysis of the collapse, it’s hard to beat Scott Manley’s frame-by-frame analysis, which really goes into detail about what happened. Seeing the paint spalling off the cables as they stretch and distort under loads far greater than they were designed for is both terrifying and fascinating.

Exciting news from Australia as the sample return capsule from JAXA’s Hayabusa2 asteroid explorer returned safely to Earth Saturday. We covered Hayabusa2 in our roundup of extraterrestrial excavations a while back, describing how it used both a tantalum bullet and a shaped-charge penetrator to blast regolith from the surface of asteroid 162173 Ryugu. Samples of the debris were hoovered up and hermetically sealed for the long ride back to Earth, which culminated in the fiery re-entry and safe landing in the midst of the Australian outback. Planetary scientists are no doubt eager to get a look inside the capsule and analyze the precious milligrams of space dust. In the meantime, Hayabusa2, with 66 kilograms of propellant remaining, is off on an extended mission to visit more asteroids for the next eleven years or so.

The 2020 Remoticon has been wrapped up for most of a month now, but one thing we noticed was how much everyone seemed to like the Friday evening Bring-a-Hack event that was hosted on Remo. To kind of keep that meetup momentum going and to help everyone slide into the holiday season with a little more cheer, we’re putting together a “Holiday with Hackaday & Tindie” meetup on Tuesday, December 15 at noon Pacific time. The details haven’t been shared yet, but our guess is that this will certainly be a “bring-a-hack friendly” event. We’ll share more details when we get them this week, but for now, hop over to the Remo event page and reserve your spot.

On the Buzzword Bingo scorecard, “Artificial Intelligence” is a square that can almost be checked off by default these days, as companies rush to stretch the definition of the term to fit almost every product in the never-ending search for market share. But even those products that actually have machine learning built into them are only as good as the data sets used to train them. That can be a problem for voice-recognition systems; while there are massive databases of utterances in just about every language, the likes of Amazon and Google aren’t too willing to share what they’ve leveraged from their smart speaker-using customer base. What’s the little person to do? Perhaps the People’s Speech database will help. Part of the MLCommons project, it has 86,000 hours of speech data, mostly derived from audiobooks, a clever source indeed since the speech and the text can be easily aligned. The database also pulls audio and the corresponding text from Wikipedia and other random sources around the web. It’s a modest dataset by Big Tech standards, to be sure, but it’s a start.

And finally, divers in the Baltic Sea have dredged up a bit of treasure: a Nazi Enigma machine. Divers in Gelting Bay near the border of Germany and Denmark found what appeared to be an old typewriter caught in one of the abandoned fishing nets they were searching for. When they realized what it was — even crusted in 80 years’ worth of corrosion and muck, some keys still look like they’re brand new — they called in archaeologists to take over recovery. Gelting Bay was the scene of a mass scuttling of U-boats in the final days of World War II, so this Enigma may have been pitched overboard by a Nazi commander before he pulled the plug on his boat. It’ll take years to restore, but it’ll be quite a museum piece when it’s done.

Remoticon Video: How To Use Machine Learning With Microcontrollers

Going from a microcontroller blinking an LED, to one that blinks the LED using voice commands based on a data set that you trained on a neural network, is a “now draw the rest of the owl” problem. Lucky for us, [Shawn Hymel] walks us through the entire process during his Tiny ML workshop from the 2020 Hackaday Remoticon. The video has just now been published and can be viewed below.

This is truly an end-to-end Hello World for getting machine learning up and running on a microcontroller. Shawn covers the process of collecting and preparing the audio samples, training the data set, and getting it all onto the microcontroller. At the end of two hours, he’s able to show the STM32 recognizing and responding to two different spoken words. Along the way he pauses to discuss the context of what’s happening in every step, which will help you go back and expand in those areas later to suit your own project needs.
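
If you want a preview of what the middle steps look like, here’s a hedged Python sketch of just the train-and-convert stage, not the workshop’s exact code: a tiny convolutional keyword-spotting model trained on stand-in MFCC features, then converted to a quantized TensorFlow Lite file ready to be embedded as a C array. The feature shape and class count are assumptions.

```python
# Hedged sketch of the train-and-convert stage only (not the workshop's exact
# code): a tiny CNN keyword spotter trained on stand-in MFCC features, then
# converted to a quantized TensorFlow Lite model for the microcontroller.
import numpy as np
import tensorflow as tf

N_FRAMES, N_MFCC, N_CLASSES = 49, 13, 3   # assumed feature shape: 2 keywords + "other"
X = np.random.rand(300, N_FRAMES, N_MFCC, 1).astype(np.float32)   # stand-in features
y = np.random.randint(0, N_CLASSES, size=300)                     # stand-in labels

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, (3, 3), activation="relu", input_shape=(N_FRAMES, N_MFCC, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)

# Shrink the model for the target and dump it to a file that can be turned
# into a C array (e.g. `xxd -i kws_model.tflite > kws_model.h`).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("kws_model.tflite", "wb") as f:
    f.write(converter.convert())
```

From there, a runtime such as TensorFlow Lite Micro (or a vendor tool that consumes the same model file) would load that array on the STM32 and run inference on live audio frames, which is the part demonstrated on real hardware in the video.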

Continue reading “Remoticon Video: How To Use Machine Learning With Microcontrollers”