Neural Networks: You’ve Got It So Easy

Neural networks are all the rage right now with increasing numbers of hackers, students, researchers, and businesses getting involved. The last resurgence was in the 80s and 90s, when there was little or no World Wide Web and few neural network tools. The current resurgence started around 2006. From a hacker’s perspective, what tools and other resources were available back then, what’s available now, and what should we expect for the future? For myself, a GPU on the Raspberry Pi would be nice.

Introduction To TensorFlow

I had great fun writing neural network software in the 90s, and I have been eager to try doing the same using TensorFlow.

Google’s machine intelligence framework is the new hotness right now. And when TensorFlow became installable on the Raspberry Pi, working with it became very easy. In a short time I made a neural network that counts in binary. So I thought I’d pass on what I’ve learned so far. Hopefully this makes it easier for anyone else who wants to try it, or for anyone who just wants some insight into neural networks.
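To give you a feel for it, here is a rough sketch of the same sort of thing (not the exact code from the writeup, and assuming the TensorFlow 1.x API of the era): a tiny network learning to output the successor of a 3-bit binary number.

```python
# A rough sketch (not the writeup's exact code): a tiny TensorFlow 1.x
# network that learns to output the successor of a 3-bit binary number.
import numpy as np
import tensorflow as tf

# Training data: every 3-bit number and its successor (wrapping at 7).
X = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
              [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], np.float32)
Y = np.roll(X, -1, axis=0)  # the next row is the next count

inputs = tf.placeholder(tf.float32, [None, 3])
targets = tf.placeholder(tf.float32, [None, 3])

hidden = tf.layers.dense(inputs, 8, activation=tf.nn.tanh)  # one small hidden layer
logits = tf.layers.dense(hidden, 3)                         # one output per bit
loss = tf.losses.sigmoid_cross_entropy(targets, logits)
train = tf.train.AdamOptimizer(0.05).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(2000):
        sess.run(train, {inputs: X, targets: Y})
    # Round the sigmoid outputs to recover clean 0/1 bits.
    print(sess.run(tf.round(tf.sigmoid(logits)), {inputs: X}))
```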

Google Machine Learning Made Simple(r)

If you’ve looked at machine learning, you may have noticed that a lot of the examples are interesting but hard to follow. That’s why [Jostmey] created Naked Tensor, a bare-minimum example of using TensorFlow. The task is simple: straight-line fits on some data points. One version does the fit in series, one in parallel, and another works on an 8-million-point dataset. All the code is in Python.
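To show just how bare it gets, here is a sketch in the same spirit (not [Jostmey]’s exact code, and assuming TensorFlow 1.x) that fits y = mx + b to four points:

```python
# A sketch in the Naked Tensor spirit (not the repo's exact code):
# fit y = m*x + b to a handful of points with TensorFlow 1.x.
import tensorflow as tf

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 6.8]  # roughly y = 2x + 1, with noise

m = tf.Variable(0.0)
b = tf.Variable(0.0)

# Sum of squared errors over the data points.
loss = tf.reduce_sum([(m * x + b - y) ** 2 for x, y in zip(xs, ys)])
train = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        sess.run(train)
    print(sess.run([m, b]))  # should land near [2.0, 1.0]
```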

If you haven’t run into it yet, TensorFlow is an open source library from Google. To quote from its website:

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.
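That “build the graph first, run it later” split is the key idea, and it fits in a few lines (a trivial sketch, again assuming TensorFlow 1.x):

```python
# A trivial sketch of the data-flow-graph idea (TensorFlow 1.x assumed).
import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0])  # edges carry tensors...
b = tf.constant([4.0, 5.0, 6.0])
c = tf.add(a, b)                  # ...nodes are operations

with tf.Session() as sess:        # nothing computes until the session runs
    print(sess.run(c))            # -> [5. 7. 9.]
```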

Wearable Predicts Tone of Conversation from Speech, Vital Signs

If you’ve ever wondered how people are really feeling during a conversation, you’re not alone. By and large, we rely on a huge number of cues — body language, speech, eye contact, and a million others — to determine the feelings of others. It’s an inexact science to say the least. Now, researchers at MIT have developed a wearable system to analyze the tone of a conversation.

The system uses Samsung Simband wearables, which are capable of measuring several physiological markers — heart rate, blood pressure, blood flow, and skin temperature — as well as movement thanks to an on-board accelerometer. This data is fed into a neural network trained to classify a conversation as “happy” or “sad”. Training consisted of capturing 31 conversations of several minutes’ duration each, in which participants were asked to tell a happy or sad story of their own choosing. This was done in an effort to record more organic emotional states than the typical “happy” or “sad” video clips used to elicit emotion in similar studies.
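The team’s pipeline isn’t spelled out here, but the classification step presumably boils down to something like the sketch below, where the feature layout and every number are our own invention rather than the MIT team’s data:

```python
# A hedged sketch of the classification step: per-window physiological
# features go into a small neural network that labels the conversation.
# The feature layout and all values are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

# One row per time window:
# [heart_rate, blood_pressure, blood_flow, skin_temp, accel_magnitude]
X = np.array([
    [72, 118, 0.9, 33.1, 0.2],
    [95, 131, 1.3, 34.0, 0.7],
    [70, 115, 0.8, 33.0, 0.1],
    [99, 135, 1.4, 34.2, 0.9],
])
y = np.array([1, 0, 1, 0])  # 1 = "happy", 0 = "sad"

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[80, 120, 1.0, 33.5, 0.4]]))
```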

The technology is in a very early stage of development, but the team hopes that down the road, the system will be sufficiently advanced to act as an emotional coach in real-life social situations. There is a certain strangeness about the idea of asking a computer to tell you how a person is feeling, but if humans are nothing more than a bag of wet chemicals, there might be merit in the idea yet. It’s a pretty big if.

Machine learning is becoming more powerful on a daily basis, particularly as we have ever greater amounts of computing power to throw behind it. Check out our primer on machine learning to get up to speed.

Objectifier: Director of Domestic Technology

[Bjørn Karmann]’s Objectifier is a device that lets you control domestic objects by allowing them to respond to unique actions or behaviour, using machine learning and computer vision. The Objectifier can turn on a table lamp when you open a book, and turn it off when you close the book. Switch on the coffee maker when you place the mug next to the pot, and switch it off when the mug is removed. Turn on the belt sander when you put on the safety glasses, and stop it when you remove the glasses. Charge the phone when you put a banana in front of it, and stop charging it when you place an apple in front of it. You get the drift — the possibilities are endless. Hopefully, sometime in the (near) future, we will be able to interact with inanimate objects in this fashion. We can get them to learn from our actions rather than us learning how to program them.

The device uses computer vision and a neural network to learn complex behaviours associated with your trigger commands. A training mode, using a phone app, allows you to train it for the On and Off actions. Some actions require more human effort in training it — such as detecting an open and closed book — but eventually, the neural network does a fairly good job.

The current version is the sixth prototype in the series, and [Bjørn] has put in quite a lot of work refining the project at each stage. In its latest avatar, the device hardware consists of a Pi Zero, a Raspberry Pi camera module, an SMPS power brick, a relay block to switch the output, a 230 V plug for input power, and a 230 V socket outlet for the final output. All the parts are put together rather neatly using laser-cut acrylic support pieces, and then further enclosed in a nice wooden enclosure.

On the software side, all of the machine learning is taken care of by “Wekinator” — free, open-source software for building musical instruments, gestural game controllers, and computer vision or computer listening systems using machine learning. The computer vision itself is handled via Processing. All the code is wrapped using openFrameworks, with ml4a providing apps for working with machine learning.
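To make the train-then-trigger loop concrete, here is a rough Python/OpenCV sketch of the concept. To be clear, this is our own stand-in, not the Wekinator/openFrameworks stack [Bjørn] actually used:

```python
# Our own rough stand-in for the Objectifier's train-then-trigger loop,
# using OpenCV's k-nearest-neighbors instead of the project's actual stack.
import cv2
import numpy as np

def histogram_features(frame):
    """Reduce a camera frame to a small, normalized color-histogram vector."""
    hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def collect(cap, n=20):
    """Grab n frames' worth of features while the user holds up a scene."""
    feats = []
    for _ in range(n):
        ok, frame = cap.read()
        if ok:
            feats.append(histogram_features(frame))
    return feats

cap = cv2.VideoCapture(0)
input("Show the camera the ON scene (e.g. open book), then press Enter...")
on_feats = collect(cap)
input("Show the camera the OFF scene (e.g. closed book), then press Enter...")
off_feats = collect(cap)

samples = np.array(on_feats + off_feats, np.float32)
labels = np.array([1] * len(on_feats) + [0] * len(off_feats), np.float32)
knn = cv2.ml.KNearest_create()
knn.train(samples, cv2.ml.ROW_SAMPLE, labels)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    feat = histogram_features(frame).reshape(1, -1).astype(np.float32)
    _, result, _, _ = knn.findNearest(feat, k=3)
    relay_on = bool(result[0][0])  # drive the relay/GPIO from here
```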

All of the above is what we could deduce from the pictures and information in his blog post. There isn’t much detail about the hardware, but the pictures tell most of the story. The software isn’t made available, but maybe this could spur some of you hackers into action to build another version of the Objectifier. Check out the video after the break, showing humans teaching the Objectifier its tricks.

Hedge Fund Startup Powered By Crowdsourced Code

In the financial sector, everyone is looking for a new way to get ahead. Since the invention of the personal computer, and perhaps even before, large financial institutions have been using software to guide all manner of investment decisions. The turn of the century saw the rise of High Frequency Trading, or HFT, in which highly optimized bots make millions of split-second transactions a day.

Recently, [Wired] reported on Numerai, a hedge fund founded on big data and crowdsourcing principles. The basic premise is this: Numerai takes its transaction data, encrypts it in a way that hides its true nature from competitors while keeping it computable, and shares it with anyone who cares to look. Data scientists then crunch the numbers and suggest potential trading algorithms, and those whose algorithms succeed are rewarded with cold, hard Bitcoin.
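From the data scientist’s side, a submission might look roughly like the sketch below; the file and column names are our guesses at the format, not Numerai’s documented interface:

```python
# A hypothetical sketch of a Numerai-style entry: the encrypted dataset is
# just anonymous numeric features plus a target, so any off-the-shelf
# classifier can have a go. File and column names here are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.read_csv("numerai_training_data.csv")  # assumed file name
features = [c for c in train.columns if c.startswith("feature")]

model = LogisticRegression()
model.fit(train[features], train["target"])

live = pd.read_csv("numerai_tournament_data.csv")  # assumed file name
live["probability"] = model.predict_proba(live[features])[:, 1]
live[["id", "probability"]].to_csv("predictions.csv", index=False)
```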

Use Machine Learning To Identify Superheroes and Other Miscellany

[Massimiliano Patacchiola] writes this handy guide on using a histogram intersection algorithm to identify different objects. In this case, Lego superheroes. All you need to follow along are eyes, Python, a computer, and a bit of machine learning magic.

He gives a good introduction to the idea. You take a histogram of the colors in a properly cropped and filtered photo of the object you want to identify. You then feed that into a neural network and train it to identify the different superheroes by color. When you feed it a new image later, it will compare the new image’s histogram to its model and output confidences as to which set it belongs to.
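The comparison at the heart of it is only a few lines. Here is a minimal sketch (not [Massimiliano]’s exact code) of building the color histograms and scoring them by intersection, with made-up file names:

```python
# A minimal sketch of the histogram-intersection comparison (not the
# guide's exact code). File names below are made up for illustration.
import cv2
import numpy as np

def color_histogram(path, bins=(8, 8, 8)):
    """Normalized 3D color histogram of an image, flattened to a vector."""
    img = cv2.imread(path)
    hist = cv2.calcHist([img], [0, 1, 2], None, list(bins),
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def histogram_intersection(h1, h2):
    """Sum of bin-wise minima: higher means more similar color content."""
    return float(np.sum(np.minimum(h1, h2)))

# Score an unknown minifig against stored reference histograms.
references = {name: color_histogram(name + ".jpg")
              for name in ("batman", "superman", "hulk")}
unknown = color_histogram("mystery_minifig.jpg")
for name, ref in references.items():
    print(name, histogram_intersection(unknown, ref))
```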

This is a useful thing to know. While a lot of vision algorithms try to make geometric assertions about the things they see, adding color to the mix can certainly help your friendly robot project tell friend from foe.