Sara Adkins Is Jamming Out With Machines

Asking machines to make music by themselves is kind of a strange notion. They’re machines, after all. They don’t feel happy or hurt, and as far as we know, they don’t long for the affections of other machines. Humans like to think of music as being a strictly human thing, a passionate undertaking so nuanced and emotion-based that a machine could never begin to understand the feeling that goes into the process of making music, or even the simple enjoyment of it.

The idea of humans and machines having a jam session together is even stranger. But oddly enough, the principles of the jam session may be exactly what machines need to begin to understand musical expression. As Sara Adkins explains in her enlightening 2019 Hackaday Superconference talk, Creating with the Machine, humans and machines have a lot to learn from each other.

To a human musician, a machine’s speed and accuracy are enviable. So is its ability to make instant transitions between notes and chords. Humans are slow to learn these transitions and have to practice going back and forth repeatedly to build muscle memory. If the machine were capable of envy, it would likely envy the human’s passionate performance and musical expression.

Continue reading “Sara Adkins Is Jamming Out With Machines”

Self-Driving Cars Are Predicting Driving Personalities

In a recent study, a team of researchers at MIT programmed self-driving cars to identify the social personalities of other drivers, in an effort to predict their future actions and drive more safely.

It’s already evident that autonomous vehicles lack social awareness. Drivers around the car are treated as obstacles rather than human beings, which hinders the vehicle’s ability to identify motivations and intentions, both potential signifiers of future actions. Because of this, self-driving cars often cause bottlenecks at four-way stops and other intersections, which may explain why the majority of accidents involving them are rear-end collisions caused by impatient drivers.

The research taps into social value orientation, a concept from social psychology that classifies a person on a scale from selfish (“egoistic”) to altruistic and cooperative (“prosocial”). The system uses this classification to create real-time driving trajectories for other cars based on a small snippet of their motion. For instance, cars that merge more often are deemed more competitive than other cars.
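
The actual formulation in the paper is game-theoretic, but the gist can be sketched as a toy classifier: watch a short snippet of another car’s motion, score how aggressively it behaves, and fold that score into a yield-or-go decision. Everything below, including the features, thresholds, and function names, is invented for illustration and is not the MIT implementation.

```python
# Toy illustration only: score a short motion snippet from another car and
# map it onto an egoistic-vs-prosocial scale, then use that to decide
# whether to yield. Features, thresholds, and names are all made up here.

def svo_score(snippet):
    """Rough 'selfishness' score from observed motion.

    `snippet` is a list of per-timestep dicts, e.g.
    {"accel": 2.5, "gap_taken": 4.0}  # m/s^2, metres of gap accepted merging
    Higher score means more egoistic/competitive behaviour.
    """
    hard_accels = sum(1 for s in snippet if s.get("accel", 0.0) > 2.0)
    tight_merges = sum(1 for s in snippet if s.get("gap_taken", 99.0) < 5.0)
    return hard_accels + 2 * tight_merges

def classify(snippet, threshold=3):
    return "egoistic" if svo_score(snippet) >= threshold else "prosocial"

def should_yield(other_snippet):
    # Hold back for drivers who look unlikely to let us in.
    return classify(other_snippet) == "egoistic"

observed = [{"accel": 2.5, "gap_taken": 4.0}, {"accel": 1.0}]
print(classify(observed), should_yield(observed))  # -> egoistic True
```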

When testing the algorithms on tasks involving merging lanes and making unprotected left turns, behavioral predictions improved by 25%. In a left-turn simulation, the autonomous car was able to wait to turn until the approaching car had a more prosocial driver.

Even outside of self-driving cars, the research could help human drivers predict the actions of other drivers around them.

Thanks [Qes] for the tip!

DeepPCB Routes Your KiCAD PCBs

Computers can write poetry, even if they can’t necessarily write good poetry. The same can be said of routing PC boards. Computers can do it, but can they do it well? Of course, there are multiple tools, each with pluses and minuses. However, a slick web page recently announced deeppcb.ai — a cloud-based AI router — and although details are sparse, there are a few interesting things about the product.

First, it supports KiCAD. You provide a DSN file, and within 24 hours you get a routed SES file. Maybe. You get three or four free boards, apparently each week, after which there is some undisclosed fee. Should you just want to try it out, create an account (which is quick and free — just verify your e-mail and create a password). Then in the “Your Boards” section there are a few examples already worked out.
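
There is no public API documented yet, so the round trip is manual: export a Specctra DSN from Pcbnew, upload it, and import the returned SES session back into KiCAD. Both formats are plain-text s-expressions, though, so a quick sanity check that the routed session still mentions all of your nets is easy to script. The snippet below is a rough sketch that assumes typically formatted DSN and SES files, not a full Specctra parser, and the file names are placeholders.

```python
# Rough sanity check: compare the net names mentioned in the DSN you upload
# against those in the SES file that comes back. Assumes ordinary
# Specctra-style "(net NAME ...)" s-expressions; this is not a full parser.
import re
import sys

def net_names(path):
    with open(path, encoding="utf-8", errors="replace") as f:
        text = f.read()
    # Net names may or may not be quoted: (net "GND" ...) or (net GND ...)
    return set(re.findall(r'\(net\s+"?([^\s")]+)"?', text))

if __name__ == "__main__":
    dsn_nets = net_names(sys.argv[1])  # e.g. yourboard.dsn
    ses_nets = net_names(sys.argv[2])  # e.g. yourboard.ses
    print(f"{len(dsn_nets)} nets in DSN, {len(ses_nets)} in SES")
    missing = dsn_nets - ses_nets
    if missing:
        print("Not present in the routed session:", ", ".join(sorted(missing)))
```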

Continue reading “DeepPCB Routes Your KiCAD PCBs”

How To Run ML Applications On Particle Hardware

With the release of TensorFlow Lite at Google I/O 2019, the accessible machine learning library is no longer limited to applications with access to GPUs. You can now run machine learning algorithms on microcontrollers much more easily, improving on-board inference and computation.

[Brandon Satrom] published a demo on how to run TFLite on Particle devices (tested on the Photon, Argon, Boron, and Xenon), making it possible to run predictions on live data with pre-trained models. While some of the easier computation that occurs on MCUs only requires manipulating data with existing equations (mapping analog inputs to a percentage range, for instance), many applications require understanding large, complex sets of sensor data gathered in real time, where a simple equation often can’t deliver accurate results.

The current method is to train ML models on specialty hardware, deploy the models on cloud infrastructure, and backhaul sensor data to the cloud for inference. By running the inference and decision-making on-board, MCUs can simply take action without backhauling any data.

He starts off by constructing a simple TFLite model for MCU execution, using mean squared error as the loss function and stochastic gradient descent as the optimizer. After training the model on sample data, you can save it and convert it to a C array for the MCU. On the MCU, you load the model, the TFLite libraries, and an operations resolver, and instantiate an interpreter and tensors. From there you invoke the model and see your results!
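
As a rough sketch of the desktop half of that flow (the real demo is [Brandon]’s; the training data and layer sizes below are placeholders), you train a tiny Keras model with MSE loss and SGD, convert it with the TFLite converter, and write the resulting flatbuffer out as a C array that the Particle firmware can compile in.

```python
# Minimal sketch of the desktop half: train a tiny model, convert it to
# TensorFlow Lite, and emit a C array for the microcontroller build.
# The data and layer sizes are placeholders, not the original demo's.
import numpy as np
import tensorflow as tf

# Toy training data: learn y = 2x - 1 from a handful of samples.
x = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=np.float32)
y = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=np.float32)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(x, y, epochs=200, verbose=0)

# Convert the trained model to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write it out as a C array (what `xxd -i` would otherwise do), ready to
# include in the MCU project alongside the TFLite Micro interpreter.
with open("model_data.h", "w") as f:
    f.write("const unsigned char g_model[] = {\n  ")
    f.write(", ".join(f"0x{b:02x}" for b in tflite_model))
    f.write("\n};\n")
    f.write(f"const unsigned int g_model_len = {len(tflite_model)};\n")
```

On the device side, the TFLite Micro C++ interpreter loads that array, sets up the ops resolver and a tensor arena, and calls Invoke() on each new reading, which is the part the write-up walks through on the Particle hardware.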

Thanks [dcschelt] for the tip!

AI Phone App Learns Baseball Signals

Watching a sport can be a bit odd if you aren’t familiar with it. Most Americans, for example, would think a cricket match looked funny because they don’t know the rules. If you were not familiar with baseball, you might wonder why one of the coaches was waving his hands around, touching his nose, his ears, and his hat seemingly at random. Those in the know, however, understand that this is a secret signal to the player. The coach might be telling the player to steal a base or bunt. The other team tries to decode the signals, but if you don’t know the code, that is notoriously difficult. Unless you have the machine learning phone app you can see in the video below.

If you are not a baseball fan, it works like this. The coach will do a number of things: perhaps touch his cap, then his nose, brush his left forearm, and touch his lips. However, the code is often as simple as knowing one attention signal and one action signal. For example, the coach might tell you that if they touch their nose and then their lips, you should steal. Touching their nose and then their ear is a bunt. Touching their nose and then the bill of their cap is something else. Anything they do that doesn’t start with touching their nose means nothing at all. If the signal is this easy, you really don’t even need machine learning to decode it. But if it were more complicated — say, the gesture that occurs third after they touch their nose, unless they also kick dirt, at which point it means nothing — it would be much harder for a human to figure out.
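
Just to make the structure of such a code concrete, here is a toy decoder for the simple nose-as-indicator scheme described above; the whole point of the project is that the phone app learns this mapping from video rather than having it written out by hand, and the signal names below are of course made up for the example.

```python
# Toy decoder for the indicator-sign scheme described above: nothing means
# anything unless it immediately follows a touch of the nose.
SIGNS_AFTER_INDICATOR = {
    "lips": "steal",
    "ear": "bunt",
    "cap_bill": "take",   # the "something else" case; the name is invented here
}

def decode(gestures):
    """Return the called play, or None, from an ordered list of gestures."""
    for prev, cur in zip(gestures, gestures[1:]):
        if prev == "nose" and cur in SIGNS_AFTER_INDICATOR:
            return SIGNS_AFTER_INDICATOR[cur]
    return None

print(decode(["cap", "nose", "lips", "forearm"]))   # -> steal
print(decode(["cap", "ear", "lips"]))               # -> None (no indicator given)
```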

Continue reading “AI Phone App Learns Baseball Signals”

Training Bats In The Random Forest With The Confusion Matrix

When exploring the realm of Machine Learning, it’s always nice to have some real and interesting data to work with. That’s where the bats come in – they’re fascinating animals that emit very particular ultrasonic calls, which can be recorded and analysed with computer software to get a fairly good idea of what species they are. Viewed as an FFT spectrogram, the individual call shapes are clearly visible.
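
For the curious, producing such a spectrogram from a recording takes only a few lines with SciPy. The file name below is invented, and real bat detectors record at much higher sample rates (or use time expansion), so treat this as a minimal sketch.

```python
# Minimal spectrogram sketch: load a recording and plot an FFT spectrogram
# so the call shapes are visible. The file name is a placeholder.
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("bat_call.wav")   # hypothetical recording
if audio.ndim > 1:                         # keep one channel if stereo
    audio = audio[:, 0]

f, t, Sxx = spectrogram(audio, fs=fs, nperseg=1024, noverlap=768)
plt.pcolormesh(t, f / 1000, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (kHz)")
plt.show()
```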

Creating an open source classifier for bats is also potentially useful for the world outside of Machine Learning, as it could enable us to more easily monitor not only the bats themselves, but also the knock-on effects of modern farming methods on the natural environment. Bats feed on moths and other night-flying insects, whose numbers have been decimated. Even in the depths of the countryside here in the UK, these insects are at a fraction of the population they were 30 years ago, yet nobody seems to have monitored this decline.

So getting back to our spectrograms, it would be perfectly reasonable to throw these images at a convolutional neural network (CNN) and use an image feature-recognition strategy. But I wanted to explore the depths of the mysterious Random Forest.
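
As a taste of where this goes, here is a minimal scikit-learn version of the idea: features extracted from each call feed a Random Forest, and the confusion matrix shows which species get mistaken for which. The feature array and species labels below are random stand-ins, not real bat data.

```python
# Minimal sketch: a Random Forest over per-call features, scored with a
# confusion matrix. The features and labels are placeholders, not real
# bat-call measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Stand-in dataset: each row is one call; columns could be peak frequency,
# duration, and bandwidth. Labels are species names.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = rng.choice(["pipistrelle", "noctule", "myotis"], size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

species = ["pipistrelle", "noctule", "myotis"]
print(confusion_matrix(y_test, clf.predict(X_test), labels=species))
```

Continue reading “Training Bats In The Random Forest With The Confusion Matrix”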

Colorizing Images With The Help Of AI

The world was never black and white – we simply lacked the technology to capture it in full color. Many have experimented with techniques to take black and white images and colorize them. [Adrian Rosebrock] decided to put an AI on the job, with impressive results.

The method involves training a Convolutional Neural Network (CNN) on a large batch of photos that have been converted to the Lab colorspace. In this colorspace, images are made up of 3 channels – lightness, a (red-green), and b (blue-yellow). This colorspace is used because it corresponds to the human visual system better than RGB does. The model is trained so that, given a lightness channel as input, it can predict the likely a and b channels. These can then be recombined into a colorized image and converted back to RGB for human consumption.
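
The colorspace plumbing around the model is simple enough to sketch with scikit-image: convert to Lab, keep only the lightness channel as the model input, then recombine the predicted a and b channels and convert back to RGB. In the sketch below, predict_ab is only a placeholder for [Adrian]’s trained CNN, and the file names are invented.

```python
# Sketch of the Lab round trip used in this technique. `predict_ab` stands
# in for the trained CNN; here it just returns zeros, i.e. a grey image.
import numpy as np
from skimage import io
from skimage.color import rgb2lab, lab2rgb

def predict_ab(lightness):
    """Placeholder for the CNN: given L (HxW), return a and b (HxWx2)."""
    return np.zeros(lightness.shape + (2,))

rgb = io.imread("old_photo.jpg") / 255.0    # hypothetical input image
if rgb.ndim == 2:                           # single-channel scan: make it RGB
    rgb = np.dstack([rgb, rgb, rgb])

lab = rgb2lab(rgb)                          # split into L, a, b channels
L = lab[:, :, 0]                            # the model only sees lightness

ab = predict_ab(L)                          # the CNN would predict colour here
colorized = lab2rgb(np.dstack([L, ab]))     # recombine and return to RGB
io.imsave("colorized.jpg", (colorized * 255).astype(np.uint8))
```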

It’s a technique capable of doing a decent job on a wide variety of material. Things such as grass, countryside, and ocean are particularly well dealt with; however, more complicated scenes can suffer from some aberration. Regardless, it’s a useful technique, and far less tedious than manual methods.

CNNs are doing other great things too, from naming tomatoes to helping out with home automation. Video after the break.

Continue reading “Colorizing Images With The Help Of AI”