Drones Can Survey Excavation Sites Without Human Intervention

Researchers from Denmark’s Aarhus University have developed a method for autonomous drone scanning and measurement of terrain, allowing drones to navigate excavation sites on their own. The only human input is a starting location and the cliff face to be scanned.

For researchers studying quarries, capturing data about gravel, walls, and other natural and man-made formations is important for understanding the properties of the terrain. Controlling the drones can be expensive, though, since considerable skill is involved in manually flying a drone while keeping its camera steady and perpendicular to the wall being captured.

At its core, the method is a Gaussian process model that predicts the wind encountered near the wall, estimating its strength from the measurements it receives as the drone moves. The feedback control system combines nonlinear model predictive control (NMPC) with a PID controller to calculate the values sent to the drone’s motor controller, and a long short-term memory (LSTM) network computes the predictions. The system has been successfully tested in a chalk quarry in Denmark and will continue to be tested as its algorithms are improved.
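To make the moving parts concrete, here’s a minimal Python sketch of two of the building blocks the write-up mentions: a Gaussian process that estimates wind strength from measurements gathered along the flight path, and a textbook PID loop. Everything here is illustrative; the positions, wind readings, and gains are invented, and scikit-learn’s GaussianProcessRegressor stands in for whatever model the Aarhus team actually uses.

```python
# Illustrative sketch only, not the Aarhus implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical samples: (x, y, z) positions along the path and measured wind speed (m/s)
positions = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0], [2.0, 0.5, 2.5]])
wind_speed = np.array([1.2, 1.8, 2.4])

# Fit a Gaussian process so we can predict wind (with uncertainty) at unvisited points
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                              normalize_y=True)
gp.fit(positions, wind_speed)
mean, std = gp.predict(np.array([[2.5, 0.5, 2.5]]), return_std=True)

class PID:
    """Textbook PID controller; the gains below are placeholders."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.1, kd=0.05)
# Correct lateral position, using the predicted wind as a feed-forward term
command = pid.step(error=0.2, dt=0.02) - 0.1 * mean[0]
```

Part of the appeal of a Gaussian process here is that it reports uncertainty alongside each prediction, which a model predictive controller can use to plan more conservatively where the wind estimate is least certain.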

Getting a drone to hover and move between GPS waypoints is easy enough, but once it needs to maneuver around obstacles, things start getting tricky. Research like this will be invaluable for developing systems that help drones navigate areas their human operators can’t reach.

[Thanks to Qes for the tip!]

Machine Learning System Uses Images To Teach Itself Morse Code

Conventional wisdom holds that the best way to learn a new language is immersion: just throw someone into a situation where they have no choice, and they’ll learn by context. Militaries use immersion language instruction, as do diplomats and journalists, and apparently computers can now use it to teach themselves Morse code.

The blog entry by the delightfully callsigned [Mauri Niininen (AG1LE)] reads like a scientific paper, with good reason: [Mauri] really seems to know a thing or two about machine learning. As is usual for such systems, his method builds a model from curated training data, in this case Morse snippets paired with their translations. But things take an unexpected turn right from the start, as [Mauri] uses a TensorFlow handwriting-recognition implementation to train his model.

Using a few lines of Python, he converts short, known snippets of Morse to a grayscale image that looks a little like a barcode, with the light areas being the dits and dahs and the dark bars being silence. The first training run resulted in only about 36% accuracy, but a subsequent run with shorter snippets ended up 99.5% accurate. The model was also able to pull Morse out of a signal with a -6 dB signal-to-noise ratio, even though it had been trained on a much cleaner signal.
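As a rough illustration of that encoding (not [Mauri]’s actual script), the Python below renders a Morse string as a one-row grayscale barcode: white for key-down, black for silence. The timing follows standard Morse conventions; the character subset, pixel scale, and file name are arbitrary choices.

```python
# Hypothetical sketch: render Morse as a barcode-style grayscale image.
import numpy as np
from PIL import Image

MORSE = {"H": "....", "I": "..", "S": "..."}  # tiny subset for illustration

def morse_to_image(text, unit=4, height=32):
    """One Morse time unit = `unit` pixels. Dit = 1 unit on, dah = 3 units on,
    1 unit off between elements, 3 units off between letters."""
    cols = []
    for letter in text:
        for symbol in MORSE[letter]:
            on = 1 if symbol == "." else 3
            cols += [255] * (on * unit) + [0] * unit  # element plus a 1-unit gap
        cols += [0] * (2 * unit)  # stretch the trailing gap to 3 units between letters
    row = np.array(cols, dtype=np.uint8)
    return Image.fromarray(np.tile(row, (height, 1)))  # repeat the row for visibility

morse_to_image("HIS").save("morse_barcode.png")
```

Once Morse is flattened into an image like this, off-the-shelf image models, handwriting recognizers included, can be pointed at it without any Morse-specific machinery.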

Other Morse decoders use lookup tables to convert sound to text, but it’s important to note that this one doesn’t. By comparing patterns to the labels in the training data, it inferred what the characters mean, essentially teaching itself Morse code in about an hour. We find that fascinating, and wonder what other applications this approach would be good for.

Thanks to [Gordon Shephard] for the tip.

Artificial Intelligence Composes New Christmas Songs

One of the most common uses of neural networks is generating new content under given constraints. A neural network is created, then trained on source content, ideally with as much reference material as possible. The model is then asked to generate original content in the same vein. This generally has mixed, but occasionally amusing, results. The team at [Made by AI] had a go at generating Christmas songs using this very technique.

The team decided that the easiest way to train their model would be to use note data from MIDI files. MIDI versions of Christmas songs are readily available and provide a broad base on which to train the model. For the neural network itself, the team chose a long short-term memory (LSTM) architecture, a model that retains context over a sequence, which is important when dealing with structured formats like music or language.
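The post doesn’t walk through the team’s code, but a minimal next-note LSTM in Keras might look like the sketch below. The sequence length, layer sizes, and the random array standing in for notes parsed from MIDI files are all assumptions made for illustration.

```python
# Illustrative sketch: an LSTM that predicts the next MIDI note from a sequence.
import numpy as np
from tensorflow import keras

SEQ_LEN, N_NOTES = 32, 128  # context window and the 128 possible MIDI pitches

model = keras.Sequential([
    keras.layers.Input(shape=(SEQ_LEN, 1)),
    keras.layers.LSTM(256),
    keras.layers.Dense(N_NOTES, activation="softmax"),  # distribution over the next note
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Placeholder data standing in for note sequences extracted from MIDI files
notes = np.random.randint(0, N_NOTES, size=5000)
X = np.array([notes[i:i + SEQ_LEN] for i in range(len(notes) - SEQ_LEN)])
X = X[..., None] / float(N_NOTES)  # scale pitches into [0, 1)
y = notes[SEQ_LEN:]
model.fit(X, y, epochs=1, batch_size=64)

# Generation: predict a distribution over the next note and pick (or sample) one
seed = X[0]
next_note = int(np.argmax(model.predict(seed[None, ...])))
```

Generating a whole tune is then just a loop: append each predicted note to the seed, slide the window forward, and predict again.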

The neural network generated five tunes, which you can listen to on the Made by AI Soundcloud page. The team notes their time was limited, and we think that with some further work and more attention to musical concepts such as structure and repetition, it might be possible to generate something a little catchier.

There are other applications for AI in music, too – like these intelligent musical prostheses.