Hedge Fund Startup Powered By Crowdsourced Code

In the financial sector, everyone is looking for a new way to get ahead. Since the invention of the personal computer, and perhaps even before, large financial institutions have been using software to guide all manner of investment decisions. The turn of the century saw the rise of high-frequency trading, or HFT, in which highly optimized bots make millions of split-second transactions a day.

Recently, [Wired] reported on Numerai, a hedge fund founded on big data and crowdsourcing principles. The basic premise is this: Numerai takes its transaction data, encrypts it in a manner that hides its true nature from competitors but keeps it computable, and shares it with anyone who cares to look. Data scientists then crunch the numbers and suggest potential trading algorithms, and those whose algorithms succeed are rewarded with cold, hard Bitcoin.

Continue reading “Hedge Fund Startup Powered By Crowdsourced Code”

Use Machine Learning To Identify Superheroes and Other Miscellany

[Massimiliano Patacchiola] writes this handy guide on using a histogram intersection algorithm to identify different objects; in this case, LEGO superheroes. All you need to follow along are eyes, Python, a computer, and a bit of machine learning magic.

He gives a good introduction to the idea. You take a histogram of the colors in a properly cropped and filtered photo of the object you want to identify. You then feed that into a neural network and train it to identify the different superheroes by color. When you feed it a new image later, it will compare the new image’s histogram against its model and output a confidence for which set it belongs to.
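For the curious, the comparison step is simple enough to sketch in a few lines of Python with NumPy. This isn’t [Massimiliano]’s code (the function names and the 16-bins-per-channel choice are ours for illustration), but it shows roughly what a normalized color histogram and the histogram intersection measure look like:

```python
import numpy as np

def color_histogram(image, bins=16):
    """Normalized 3D color histogram of an RGB image given as an (H, W, 3) uint8 array."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    return hist / hist.sum()  # normalize so crops of different sizes compare fairly

def histogram_intersection(h1, h2):
    """Similarity of two normalized histograms: the sum of their element-wise minima."""
    return float(np.minimum(h1, h2).sum())

def classify(query_image, models):
    """Pick the best match from a dict mapping character names to reference histograms."""
    query = color_histogram(query_image)
    scores = {name: histogram_intersection(query, ref) for name, ref in models.items()}
    return max(scores, key=scores.get), scores
```

Because the histograms are normalized, the intersection score always lands between 0 and 1, so the best-matching reference model wins regardless of how big the cropped photo is.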

This is a useful thing to know. While a lot of vision algorithms try to make geometric assertions about the things they see, adding color to the mix can certainly help your friendly robot project tell friend from foe.

Train Your Robot To Walk With A Neural Network

[Basti] was playing around with Artificial Neural Networks (ANNs), and decided that a lot of the “hello world” type programs just weren’t zingy enough to instill his love for the networks in others. So he juiced things up a bit by applying a reasonably simple ANN to teach a four-legged robot to walk (in German, translated here).

While we think it’s awesome that postal systems the world over have been machine-sorting mail with similar algorithms for years now, watching a squirming quartet of servos come to a forward-moving consensus is more viscerally inspiring. Job well done! Check out the video embedded below.

Continue reading “Train Your Robot To Walk With A Neural Network”

Perceptrons in C++

Last time, I talked about a simple kind of neural net called a perceptron that you can train to learn simple functions. For the purposes of experimenting, I coded a simple example using Excel. That’s handy for changing things on the fly, but not so handy for putting the code in a microcontroller. This time, I’ll show you how the code looks in C++ and also tell you more about what you can do when faced with a more complex problem.

Continue reading “Perceptrons in C++”

Machine Learning: Foundations

When you want a person to do something, you train them. When you want a computer to do something, you program it. However, there are ways to make computers learn, at least in some situations. One technique that makes this possible is the perceptron learning algorithm. A perceptron is a computer simulation of a nerve, and there are various ways to change the perceptron’s behavior based on either example data or a method to determine how good (or bad) some outcome is.

What’s a Perceptron?

I’m no biologist, but apparently a neuron has a bunch of inputs, and if the level of those inputs reaches a certain threshold, the neuron “fires,” which means it stimulates the input of another neuron further down the line. Not all inputs are created equal: in the mathematical model, they carry different weights. Input A might be on a hair trigger, while it might take inputs B and C active together to wake up the neuron in question.
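If that description clicks better as code, here is a minimal perceptron sketch in Python. It isn’t the Excel or C++ code from the series; it just restates the model above: a weighted sum of inputs, a threshold that decides whether the neuron fires, and the classic perceptron learning rule, trained on the “B and C together” case, better known as logical AND:

```python
def predict(weights, bias, inputs):
    # Weighted sum of the inputs; the neuron "fires" (outputs 1) if it crosses the threshold.
    activation = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation > 0 else 0

def train(samples, epochs=20, rate=0.1):
    # samples is a list of (inputs, target) pairs, with targets of 0 or 1.
    weights = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Perceptron learning rule: nudge each weight in the direction that reduces the error.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Logical AND: the neuron should fire only when both inputs are on together.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # prints [0, 0, 0, 1]
```

Swap the training data for OR or NAND and the same few lines learn those too; XOR, famously, is where a single perceptron gives up.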
Continue reading “Machine Learning: Foundations”

Self-Driving R/C Car Uses An Intel NUC

Self-driving cars are something we are continually told will be the Next Big Thing. It’s nothing new; we’ve seen several decades of periodic demonstrations of the technology as it has evolved. Now we have real prototype cars on real roads rather than test tracks, and though they are billion-dollar research vehicles from organisations with deep pockets and a long view, it is starting to seem that this is a technology we have a real chance of seeing at a consumer level.

A self-driving car may seem as though it is beyond the abilities of a Hackaday reader, but while it might be difficult to produce safe collision avoidance for a full-sized car on public roads, it’s certainly not impossible to produce something with slightly more modest capabilities. [Jaimyn Mayer] and [Kendrick Tan] have done just that, creating a self-driving R/C car that can follow a complex road pattern without human intervention.

The NUC’s-eye view. The green line is a human’s steering, the blue line the computed steering.

Unexpectedly, they have eschewed the many ARM-based boards on offer, instead going for an Intel NUC mini-PC powered by a Core i5 as the brains of the unit. It’s powered by a laptop battery bank and takes input from a webcam. Direction and throttle are computed by the NUC and sent to an Arduino, which handles the car control. There is also a radio control channel allowing the car to be switched between autonomous, human-controlled, and emergency-stop modes.
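The write-up doesn’t spell out the wire format between the two boards, but the division of labour is a familiar one: a fast brain doing the vision, a microcontroller doing the twitchy real-time bits. Purely as a hypothetical illustration (the port name, baud rate, and comma-separated message format below are our assumptions, not theirs), the NUC side of such a link might look like this in Python with pyserial:

```python
import serial  # pyserial

# Hypothetical link to the Arduino driving the steering servo and ESC.
# Port, baud rate, and message format are assumptions for illustration only.
link = serial.Serial("/dev/ttyACM0", 115200, timeout=0.1)

def send_command(steering, throttle):
    """Send steering and throttle as integers in the range -100..100."""
    steering = max(-100, min(100, int(steering)))
    throttle = max(-100, min(100, int(throttle)))
    link.write(f"{steering},{throttle}\n".encode("ascii"))

send_command(-20, 25)  # e.g. a gentle left turn at quarter throttle
```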

They go into detail on the polarizing and neutral-density filters they used with their webcam, something that may make interesting reading for anyone working in machine vision. All their code is open source and can be found linked from their write-up. Meanwhile, the video below the break shows their machine on their test circuit, completing it with varying levels of success.

Continue reading “Self-Driving R/C Car Uses An Intel NUC”

TensorFlow Robot Recognizes Objects

Children can do lots of things that robots and computers have trouble with. Climbing stairs, for example, is a tough thing for a robot. Recognizing objects is another area where humans are generally much better than robots. Kids can recognize blocks, shapes, colors, and extrapolate combinations and transformations.

Google’s open-source TensorFlow software can help. It is a machine learning system used in Google’s own speech recognition, search, and other products. It is also used in quite a few non-Google projects. [Lukas Biewald] recently built a robot around some stock pieces (including a Raspberry Pi) and enlisted TensorFlow to allow the robot to recognize objects. You can see a video of the device below.
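To get a feel for what TensorFlow brings to a build like this, here is a rough sketch of image classification with a pretrained network using TensorFlow’s Keras API. It is not [Lukas]’s setup; the MobileNetV2 model and the file name here are our choices for illustration:

```python
import numpy as np
import tensorflow as tf

# A rough sketch of object recognition with a pretrained ImageNet model.
# The model choice (MobileNetV2) and the file name are illustrative assumptions.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def recognize(path):
    # Load a frame, resize it to the network's 224x224 input, and preprocess it.
    img = tf.keras.utils.load_img(path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    preds = model.predict(x)
    # Return the top three (label, confidence) guesses.
    return [(label, float(score)) for _, label, score in
            tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]]

print(recognize("frame.jpg"))  # hypothetical frame grabbed from the robot's camera
```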

Continue reading “TensorFlow Robot Recognizes Objects”