Stock Market Prediction With Natural Language Machine Learning

Machines – is there anything they can’t learn? Twenty years ago, the answer to that question would have been very different. However, with modern processing power and deep learning tools, it seems that computers are getting quite nifty in the brainpower department. In that vein, a research group attempted to use machine learning tools to predict stock market performance based on publicly available earnings documents.

The team used the Azure Machine Learning Workbench to build their model, one of many tools now out in the marketplace for such work. To train the model, earnings releases were combined with stock price data from before and after the announcements were made. Natural language processing was used to interpret the earnings releases, with steps taken to clean the input by removing stop words, punctuation, and other ephemera. The model then attempted to find a relationship between the language content of the releases and the subsequent impact on the stock price.
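
The write-up doesn’t include the team’s actual pipeline, but the cleanup step described above looks roughly like the following Python sketch. The stop word list and the sample sentence are invented for illustration; a real pipeline would use a fuller stop word list and likely add stemming or lemmatization.

```python
import re
from collections import Counter

# Tiny illustrative stop word list; a production pipeline would use a much
# larger one (e.g. NLTK's) plus stemming or lemmatization.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "on",
              "we", "our", "is", "was", "were", "be", "as", "by", "with"}

def clean_release(text: str) -> list[str]:
    """Lowercase the text, strip punctuation and numbers, drop stop words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

# Made-up earnings-release sentence, just to show the output.
release = "Revenue grew 12% in the quarter, and we raised our full-year guidance."
print(Counter(clean_release(release)).most_common(5))
```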

Particularly interesting were the vocabulary issues the team faced throughout the development process. In many industries, there is a significant amount of jargon – that is, vocabulary that is highly specific to the topic in question. The team worked around this by comparing stocks on an industry-by-industry basis. There’s little reason to be looking at phrases like “blood pressure medication” and “kidney stones” when you’re comparing stocks in the defence electronics industry, after all.

With a model built, the team put it to the test. Stocks were sorted into three bins: low performing, middle performing, and high performing. Their most successful result was a 62% success rate when predicting low performing stocks, well above the roughly one-in-three you’d expect from chance alone with three bins. That still leaves plenty of scope for further improvement, and as with anything in the stock market space, expect development to continue at a furious pace.

We’ve seen machine learning do great things before, too – even creative tasks, like naming tomatoes. 

Machine Learning Crash Course From Google

We’ve been talking a lot about machine learning lately. People are using it for speech generation and recognition, computer vision, and even classifying radio signals. If you’ve yet to climb the learning curve, you might be interested in a new free class from Google using TensorFlow.

We’ve covered tutorials for TensorFlow before, but this is structured as a 15-hour class with 25 lessons and 40 exercises. It is also from the horse’s mouth, so to speak. Google says the class will answer questions like:

  • How does machine learning differ from traditional programming?
  • What is loss, and how do I measure it?
  • How does gradient descent work?
  • How do I determine whether my model is effective?
  • How do I represent my data so that a program can learn from it?
  • How do I build a deep neural network?
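
The class has its own exercises, but to make the “loss” and “gradient descent” questions above concrete, here is a minimal TensorFlow sketch, not taken from the course, that fits a line to toy data by repeatedly measuring mean squared error and nudging the parameters downhill:

```python
import tensorflow as tf

# Toy data: y = 3x + 2 plus a little noise.
x = tf.random.normal([100])
y = 3.0 * x + 2.0 + tf.random.normal([100], stddev=0.1)

w = tf.Variable(0.0)
b = tf.Variable(0.0)
learning_rate = 0.1

for step in range(200):
    with tf.GradientTape() as tape:
        y_pred = w * x + b
        loss = tf.reduce_mean(tf.square(y_pred - y))  # mean squared error
    dw, db = tape.gradient(loss, [w, b])
    w.assign_sub(learning_rate * dw)  # one gradient descent step
    b.assign_sub(learning_rate * db)

print(f"w ~ {w.numpy():.2f}, b ~ {b.numpy():.2f}, final loss {loss.numpy():.4f}")
```

In practice you’d hand the update step to an optimizer, but the loop above is the whole idea behind gradient descent.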

Continue reading “Machine Learning Crash Course From Google”

Learning Software In A Soft Exosuit

Wearables and robots don’t often intersect, because most robots rely on rigid bodies and programming while we don’t. Exoskeletons are an instance where robots interact with our bodies, and a soft exosuit is even closer to our physiology. Machine learning is closer to our minds than a simple state machine. The combination of machine learning software and a soft exosuit is a match made in heaven for the Harvard Biodesign Lab and Agile Robotics Lab.

Machine learning studies a walker’s steady gait for twenty periods while vitals are monitored to assess how much energy is being expended. After watching, the trained system assists instead of assessing. This type of personalization has been done in the past, but the addition of machine learning shows that the necessary customization can be handled by each machine itself, without a team of humans.
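
The article doesn’t share the lab’s code, and the real system uses proper metabolic measurements and a much smarter optimizer, but the personalization loop it describes boils down to something like this hypothetical sketch: try an assistance timing, measure how hard the walker is working, keep whatever costs the least energy. Every name and number below is invented for illustration.

```python
import random

def metabolic_cost(onset, peak):
    """Stand-in for one measured walking bout. In the real system this would
    come from monitoring the walker's vitals; here it's a made-up function
    with a minimum near onset=0.25, peak=0.5 of the gait cycle."""
    return (onset - 0.25) ** 2 + (peak - 0.5) ** 2 + random.gauss(0, 0.02)

# Sweep a grid of assistance timings (as fractions of the gait cycle) and
# keep the combination that costs the walker the least energy.
candidates = [(on / 10, pk / 10) for on in range(1, 5) for pk in range(4, 8)]
best = min(candidates, key=lambda c: metabolic_cost(*c))
print(f"personalized assistance: onset={best[0]:.1f}, peak={best[1]:.1f} of gait cycle")
```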

Exoskeletons are no stranger to these pages; our 2017 Hackaday Prize awarded $1000 to an open-source set of robotic legs, and we’ve reported on an exoskeleton designed to keep seniors safe.

Continue reading “Learning Software In A Soft Exosuit”

TensorFlow in your Browser

If you want to explore machine learning, you can now write applications that train and deploy TensorFlow models in your browser using JavaScript. We know what you are thinking: that has to be slow. Surprisingly, it isn’t, since the libraries use Graphics Processing Unit (GPU) acceleration. Of course, that assumes your browser can use your GPU. There are several demos available, including one where you train a Pac-Man game to respond to gestures captured by your webcam. If you try it and then disable accelerated graphics in your browser options, you’ll see just what a speed-up the GPU provides.

Continue reading “TensorFlow in your Browser”

Google Builds A Synthesizer With Neural Nets And Raspberry Pis.

AI is the new hotness! It’s 1965 or 1985 all over again! We’re in the AI Renaissance Mk. 2, and Google, in an attempt to showcase how AI can allow creators to be more… creative, has released a synthesizer built around neural networks.

The NSynth Super is an experimental physical interface from Magenta, a research group within the Big G that explores how machine learning tools can create art and music in new ways. The NSynth Super does this by mashing together a Kaoss Pad, samples that sound like General MIDI patches, and a neural network.

Here’s how the NSynth works: The NSynth hardware accepts MIDI signals from a keyboard, DAW, or whatever. These MIDI commands are fed into an openFrameworks app that uses pre-compiled (with Machine Learning™!) samples from various instruments. This openFrameworks app combines and mixes these samples in relation to whatever the user inputs via the NSynth controller. If you’ve ever wanted to hear what the combination of a snare drum and a bassoon sounds like, this does it. Basically, you’re looking at a Kaoss-pad-controlled rompler that takes four samples and combines them, with the power of Neural Networks. The project comes with a set of pre-compiled, neural-networked samples, but you can use this interface to mix your own samples, provided you have a beefy computer with an expensive GPU.
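
To be clear about what happens at performance time, here’s a rough numpy illustration of blending four sources based on a position on the touch surface. It’s a deliberate simplification: the sine waves and noise burst below are stand-ins, and the real NSynth Super plays back samples that were interpolated ahead of time in the network’s learned latent space, which is why its in-between sounds are genuinely new timbres rather than simple crossfades like this.

```python
import numpy as np

SAMPLE_RATE = 16000
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)

# Stand-ins for the four corner instruments (the real corners are
# neural-net renderings of sampled instruments, not sine waves).
corners = {
    "flute":   np.sin(2 * np.pi * 440 * t),
    "bassoon": np.sin(2 * np.pi * 110 * t),
    "snare":   np.random.uniform(-1, 1, SAMPLE_RATE) * np.exp(-8 * t),
    "organ":   np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 660 * t),
}

def mix(x: float, y: float) -> np.ndarray:
    """Bilinear blend of the four corners for a touch position in [0, 1]^2."""
    a, b, c, d = corners["flute"], corners["bassoon"], corners["snare"], corners["organ"]
    out = (1 - x) * (1 - y) * a + x * (1 - y) * b + (1 - x) * y * c + x * y * d
    return out / np.max(np.abs(out))  # normalize to avoid clipping

blend = mix(0.3, 0.7)  # a blend weighted toward the snare corner
print(blend.shape, blend.min(), blend.max())
```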

Not to undermine the work that went into this project, but thousands of synth heads will be disappointed by it. The creation of new audio samples requires training with a GPU; the hardest and most computationally expensive part of neural networks is the training, not the performance. Without a nice graphics card, you’re limited to whatever samples Google has provided here.

Since this is Open Source, all the files are available, and because the project is built around a Raspberry Pi and a laser-cut enclosure, there is huge demand for this machine learning Kaoss pad. The good news is that there’s a group buy on Hackaday.io, and there’s already a seller on Tindie should you want a bare PCB. You can, of course, roll your own, and the Digikey cart for all the SMD parts comes to about $40 USD. This doesn’t include the OLED ($2 from China), the Raspberry Pi, or the laser-cut enclosure, but it’s a start. Of course, for those of you who haven’t passed the 0805 SMD solder test, it looks like a few people will be selling assembled versions (less the Pi) for $50-$60.

Is it cool? Yes, but a basement-bound producer who wants to add this to a track will quickly learn that training machine learning algorithms costs far more than playing with already-trained ones. The hardware is neat, but brace yourself for disappointment, just as AI itself suffered in the late 60s and the late 80s. We’re in the AI Renaissance Mk. 2, after all.

Continue reading “Google Builds A Synthesizer With Neural Nets And Raspberry Pis.”

This Radio Gets Pour Reception

When was the last time you poured water onto your radio to turn it on?

Designed collaboratively by [Tore Knudsen], [Simone Okholm Hansen] and [Victor Permild], Pour Reception seeks to challenge what constitutes an interface, and how elements of play can create a new experience for a relatively everyday object.

Lacking buttons or knobs of any kind, Pour Reception appears to be an inert acrylic box with two glasses resting on top. A detachable instruction card cues the need for water, and pouring some into the glasses wakes the radio.

Continue reading “This Radio Gets Pour Reception”

AI Listens to Radio

We’ve seen plenty of examples of neural networks listening to speech, reading characters, or identifying images. KickView had a different idea: they wanted to teach a network to recognize radio signals. Not just any radio signals, but Orthogonal Frequency Division Multiplexing (OFDM) waveforms.

OFDM is a modulation method used by WiFi, cable systems, and many other systems. In particular, they look at an 802.11g signal with a bandwidth of 20 MHz. The question is: given a receiver for 802.11g, how can you reliably detect that an 802.11ac signal (which can be up to 160 MHz wide) is using your channel? To demonstrate the technique, they decided to detect 20 MHz signals using only a 5 MHz receive bandwidth.
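
KickView’s post has the details of their approach; as a very rough sketch of the general shape of such a detector (none of this is their code, and the network below is untrained), you turn a window of IQ samples into a spectrogram image and hand it to a small convolutional network trained to flag when wider-band OFDM energy shows up inside the narrow capture:

```python
import numpy as np
import tensorflow as tf

N_SAMPLES = 4096  # one capture window from a (hypothetical) 5 MHz receiver

def spectrogram(iq: np.ndarray, nfft: int = 64) -> np.ndarray:
    """Log-magnitude spectrogram of complex IQ samples via a simple STFT."""
    frames = iq[: len(iq) // nfft * nfft].reshape(-1, nfft) * np.hanning(nfft)
    spec = np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1))
    return np.log1p(spec).astype("float32")

# Placeholder capture: complex noise standing in for a real IQ recording.
iq = np.random.randn(N_SAMPLES) + 1j * np.random.randn(N_SAMPLES)
x = spectrogram(iq)[np.newaxis, ..., np.newaxis]  # shape (1, time, freq, 1)

# A small CNN that would be trained on labeled captures to answer
# "is a wider OFDM signal occupying this channel?"
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=x.shape[1:]),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
print("P(OFDM present) =", float(model(x)[0, 0]))  # untrained, so ~chance
```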

Continue reading “AI Listens to Radio”