Google Machine Learning Made Simple(r)

If you’ve looked at machine learning, you may have noticed that a lot of the examples are interesting but hard to follow. That’s why [Jostmey] created Naked Tensor, a bare-minimum example of using TensorFlow. The example is simple: fitting straight lines to some data points. One version does the fits in series, one in parallel, and another handles an 8-million-point dataset. All the code is in Python.
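
If you want a taste of what “bare minimum” means here, a straight-line fit in TensorFlow really does come down to a handful of lines. The snippet below is our own minimal sketch in the same spirit (TensorFlow 1.x style, with made-up data points), not a copy of [Jostmey]’s code:

```python
import tensorflow as tf

# Made-up points lying roughly along y = 0.5x + 2
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.1, 2.4, 3.1, 3.4, 4.0]

m = tf.Variable(0.0)  # slope to be learned
b = tf.Variable(0.0)  # intercept to be learned

x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)

loss = tf.reduce_mean(tf.square(m * x + b - y))  # mean squared error
train = tf.train.GradientDescentOptimizer(0.05).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):  # 1000 small gradient descent steps
        sess.run(train, feed_dict={x: xs, y: ys})
    print(sess.run([m, b]))  # should land near [0.5, 2.0]
```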

If you haven’t run into it yet, TensorFlow is an open source library from Google. To quote from its website:

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.

Continue reading “Google Machine Learning Made Simple(r)”

Wearable Predicts Tone of Conversation from Speech, Vital Signs

If you’ve ever wondered how people are really feeling during a conversation, you’re not alone. By and large, we rely on a huge number of cues — body language, speech, eye contact, and a million others — to determine the feelings of others. It’s an inexact science, to say the least. Now, researchers at MIT have developed a wearable system to analyze the tone of a conversation.

The system uses Samsung Simband wearables, which can measure several physiological markers — heart rate, blood pressure, blood flow, and skin temperature — as well as movement, thanks to an on-board accelerometer. This data is fed into a neural network trained to classify a conversation as “happy” or “sad”. Training consisted of capturing 31 conversations, each several minutes long, in which participants were asked to tell a happy or sad story of their own choosing. The aim was to record more organic emotional states than those elicited by the “happy” or “sad” video clips typically used in similar studies.
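
To make the pipeline concrete, here is a toy sketch of the classification step, using invented numbers and an assumed feature layout rather than MIT’s actual model or data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical per-conversation features: [mean heart rate, blood
# pressure, blood flow, skin temperature, accelerometer energy].
X = np.array([
    [72, 118, 0.90, 33.1, 0.40],  # told while happy
    [88, 131, 0.70, 32.4, 0.15],  # told while sad
    [70, 115, 0.95, 33.3, 0.45],  # told while happy
    [90, 128, 0.65, 32.2, 0.10],  # told while sad
])  # the real study captured 31 such conversations
y = np.array([1, 0, 1, 0])  # 1 = "happy", 0 = "sad"

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
clf.fit(X, y)
print(clf.predict([[75, 120, 0.85, 33.0, 0.35]]))  # most likely [1], "happy"
```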

The technology is in a very early stage of development; however, the team hopes that down the road the system will be advanced enough to act as an emotional coach in real-life social situations. There is a certain strangeness about the idea of asking a computer to tell you how a person is feeling, but if humans are nothing more than a bag of wet chemicals, there might be merit in the idea yet. It’s a pretty big if.

Machine learning is becoming more powerful on a daily basis, particularly as we have ever greater amounts of computing power to throw behind it. Check out our primer on machine learning to get up to speed.

Continue reading “Wearable Predicts Tone of Conversation from Speech, Vital Signs”

Objectifier: Director of Domestic Technology

[Bjørn Karmann]’s Objectifier is a device that lets you control domestic objects by allowing them to respond to unique actions or behaviour, using machine learning and computer vision. The Objectifier can turn on a table lamp when you open a book, and turn it off when you close the book. Switch on the coffee maker when you place the mug next to the pot, and switch it off when the mug is removed. Turn on the belt sander when you put on the safety glasses, and stop it when you remove the glasses. Charge the phone when you put a banana in front of it, and stop charging it when you place an apple in front of it. You get the drift — the possibilities are endless. Hopefully, sometime in the (near) future, we will be able to interact with inanimate objects in this fashion. We can get them to learn from our actions rather than us learning how to program them.

The device uses computer vision and a neural network to learn complex behaviours associated with your trigger commands. A training mode, using a phone app, allows you to train it for the On and Off actions. Some actions — such as telling an open book from a closed one — take more human effort to train, but eventually the neural network does a fairly good job.
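
For a rough idea of what such a train-then-trigger loop looks like, here is a minimal sketch of our own using OpenCV and scikit-learn, with assumed details throughout; this is not [Bjørn]’s software:

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def frame_features(frame, size=(16, 12)):
    """Downsample a frame to a coarse grayscale grid, flattened to a vector."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, size).flatten() / 255.0

cap = cv2.VideoCapture(0)  # camera index 0 assumed

samples, labels = [], []
for label in (1, 0):  # 1 = the "on" condition, 0 = the "off" condition
    input("Show the %s state, then press Enter..." % ("on" if label else "off"))
    for _ in range(20):  # grab a burst of training frames
        ok, frame = cap.read()
        if ok:
            samples.append(frame_features(frame))
            labels.append(label)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
clf.fit(np.array(samples), labels)

while True:  # live classification; a real build would switch the relay here
    ok, frame = cap.read()
    if ok:
        state = clf.predict([frame_features(frame)])[0]
        print("lamp ON" if state else "lamp OFF")
```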

The current version is the sixth prototype in the series, and [Bjørn] has put in quite a lot of work refining the project at each stage. In its latest avatar, the hardware consists of a Pi Zero, a Raspberry Pi camera module, an SMPS power brick, a relay block to switch the output, a 230 V plug for input power, and a 230 V socket for the final output. The parts are put together neatly using laser-cut acrylic supports, and the whole thing is housed in a nice wooden enclosure.

On the software side, the machine learning is handled by “Wekinator” — free, open-source software for building musical instruments, gestural game controllers, and computer vision or computer listening systems using machine learning. The computer vision itself is handled via Processing. All the code is wrapped up using openFrameworks, with ml4a providing apps for working with machine learning.

All of the above is what we could deduce from the pictures and information in his blog post. There isn’t much detail about the hardware, but the pictures tell us most of what we need to know. The software isn’t made available, but maybe this could spur some of you hackers into action to build your own version of the Objectifier. Check out the video after the break, showing humans teaching the Objectifier its tricks.

Continue reading “Objectifier: Director of Domestic Technology”

Hedge Fund Startup Powered By Crowdsourced Code

In the financial sector, everyone is looking for a new way to get ahead. Since the invention of the personal computer, and perhaps even before, large financial institutions have been using software to guide all manner of investment decisions. The turn of the century saw the rise of high-frequency trading, or HFT, in which highly optimized bots make millions of split-second transactions a day.

Recently, [Wired] reported on Numerai — a hedge fund founded on big data and crowdsourcing principles. The basic premise is this: Numerai takes its transaction data, encrypts it in a way that hides its true nature from competitors while keeping it computable, and shares it with anyone who cares to look. Data scientists then crunch the numbers and suggest potential trading algorithms, and those whose algorithms succeed are rewarded with cold, hard Bitcoin.
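
The “encrypted yet computable” trick is the interesting part. As a toy illustration of the concept (our own sketch; Numerai’s actual scheme is more sophisticated), a random orthogonal rotation of the feature space scrambles what each column means while preserving the geometry that most learning algorithms rely on:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # stand-in for market features

Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # a random orthogonal matrix
X_public = X @ Q                              # the "encrypted" data shared out

# Column meanings are scrambled, but pairwise distances (the geometry
# many learning algorithms depend on) survive the transformation:
print(np.allclose(np.linalg.norm(X[0] - X[1]),
                  np.linalg.norm(X_public[0] - X_public[1])))  # prints True
```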

Continue reading “Hedge Fund Startup Powered By Crowdsourced Code”

Use Machine Learning To Identify Superheroes and Other Miscellany

[Massimiliano Patacchiola] writes this handy guide on using a histogram intersection algorithm to identify different objects — in this case, Lego superheroes. All you need to follow along are eyes, Python, a computer, and a bit of machine learning magic.

He gives a good introduction to the idea. You take a histogram of the colors in a properly cropped and filtered photo of the object you want to identify. You then feed that into a neural network and train it to identify the different superheroes by color. When you later feed it a new image, it compares the new image’s histogram against its models and outputs a confidence score for each possible match.
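
For the flavor of it, here is a minimal histogram-intersection sketch of our own, with hypothetical filenames; see [Massimiliano]’s guide for the real thing:

```python
import cv2
import numpy as np

def color_histogram(path, bins=(8, 8, 8)):
    """3D color histogram of an image, normalized so scores are comparable."""
    image = cv2.imread(path)
    hist = cv2.calcHist([image], [0, 1, 2], None, bins, [0, 256] * 3)
    return hist.flatten() / hist.sum()

def intersection(h1, h2):
    """Histogram intersection: sum of bin-wise minima; 1.0 means identical."""
    return np.minimum(h1, h2).sum()

# Hypothetical filenames for the cropped, filtered training shots
models = {name: color_histogram(name + ".jpg")
          for name in ("batman", "superman", "flash")}

query = color_histogram("mystery_minifig.jpg")
scores = {name: intersection(query, h) for name, h in models.items()}
print(max(scores, key=scores.get), scores)  # best match plus all confidences
```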

This is a useful thing to know. While a lot of vision algorithms try to make geometric assertions about the things they see, adding color to the mix can certainly help your friendly robot project distinguish friend from foe.

Train Your Robot To Walk with a Neural Network

[Basti] was playing around with Artificial Neural Networks (ANNs) and decided that a lot of the “hello world” type programs just weren’t zingy enough to instill his love for the networks in others. So he juiced things up a bit by applying a reasonably simple ANN to teach a four-legged robot to walk (in German, translated here).
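
For a flavor of the forward pass, here is a toy of our own making (not [Basti]’s code): a tiny two-layer network maps a gait phase to four servo angles. In the real project the weights are learned; here they are random placeholders just to show the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 2))   # input layer -> hidden layer
W2 = rng.normal(size=(4, 4))   # hidden layer -> four servo outputs

def servo_angles(phase):
    """Map a gait phase in [0, 1) to four servo angles in degrees."""
    x = np.array([np.sin(2 * np.pi * phase), np.cos(2 * np.pi * phase)])
    hidden = np.tanh(W1 @ x)
    return 90 + 45 * np.tanh(W2 @ hidden)  # swing around the 90-degree centre

for step in range(8):  # one gait cycle, eight steps
    print(np.round(servo_angles(step / 8), 1))
```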

While we think it’s awesome that postal systems the world over have been machine sorting mail based on similar algorithms for years now, watching a squirming quartet of servos come to forward-moving consensus is more viscerally inspiring. Job well done! Check out the video embedded below.

Continue reading “Train Your Robot To Walk with a Neural Network”

Perceptrons in C++

Last time, I talked about a simple kind of neural net called a perceptron that you can train to learn simple functions. For the purposes of experimenting, I coded a simple example using Excel. That’s handy for changing things on the fly, but not so handy for putting the code in a microcontroller. This time, I’ll show you how the code looks in C++ and also tell you more about what you can do when faced with a more complex problem.

Continue reading “Perceptrons in C++”