AI Finds More Space Chatter

Scientists don’t know exactly what fast radio bursts (FRBs) are. What they do know is that they come from a long way away. In fact, one that occurs regularly comes from a galaxy 3 billion light years away. They could form from neutron stars or they could be extraterrestrials phoning home. The other thing is — thanks to machine learning — we now know about a lot more of them. You can see a video from Berkeley below, and find more technical information, raw data, and [Danielle Futselaar’s] killer project graphic (seen above) at their site.

The first FRB came to the attention of [Duncan Lorimer] and [David Narkevic] in 2007 while they were sifting through archival data from 2001. These broadband bursts are hard to identify since they last a matter of milliseconds. Researchers at Berkeley trained software using previously known FRBs. They then gave the software five hours of recordings from one part of the sky and found 72 previously unknown FRBs.
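To get a feel for the kind of search involved, here is a deliberately simplified sketch of scanning a dynamic spectrum for millisecond-wide broadband spikes. It is not the Berkeley pipeline: the `score_window` function below is a crude stand-in for their trained network, and all of the numbers are made up for illustration.

```python
# Toy burst search: slide a window over a (channels x time) dynamic spectrum
# and flag windows with an unusually strong broadband spike.
import numpy as np

def score_window(window: np.ndarray) -> float:
    # Stand-in "model": peak of the per-sample power, normalized by a rough
    # noise estimate. A real detector would be a trained network scoring each window.
    band_power = window.sum(axis=0)                 # total power per time sample
    noise = np.median(np.abs(band_power)) + 1e-9    # crude noise floor
    return float(band_power.max() / noise)

def find_candidate_bursts(dynamic_spectrum: np.ndarray,
                          window_samples: int = 64,
                          step: int = 16,
                          threshold: float = 8.0):
    """Return start indices of windows whose score exceeds the threshold."""
    n_time = dynamic_spectrum.shape[1]
    hits = []
    for start in range(0, n_time - window_samples, step):
        if score_window(dynamic_spectrum[:, start:start + window_samples]) > threshold:
            hits.append(start)
    return hits

# Example: pure noise plus one injected broadband pulse a few samples wide.
rng = np.random.default_rng(0)
spec = rng.normal(size=(256, 4096))
spec[:, 2000:2004] += 10.0
print(find_candidate_bursts(spec))
```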

Continue reading “AI Finds More Space Chatter”

Katherine Scott: Earth’s Daily Photo Through 200 Cubesat Cameras

Every year at Supercon there is a critical mass of awesome people, and last year Sophi Kravitz was able to sneak away from the festivities for this interview with Katherine Scott. Kat was a judge for the 2017 Hackaday Prize. She specializes in computer vision, robotics, and manufacturing and was the image analytics team lead at Planet Labs when this interview was filmed.

You’re going to chuckle at the beginning of the video as Kat and Sophi recount the kind of hijinks going on at the con. In the hardware hacking area there were impromptu experiments in melting aluminum with gallium, and one of the afternoon’s organized workshops combined wood and high voltage to create Lichtenberg figures. Does anyone else smell burning? Don’t forget to grab your 2018 Hackaday Superconference tickets and join in the fun this year!

Below you’ll find the interview which dives into Kat’s work with satellite imaging.

Continue reading “Katherine Scott: Earth’s Daily Photo Through 200 Cubesat Cameras”

Hummingbirds, 3D Printing, and Deep Learning

Setting camera traps in your garden to see what local wildlife is around is quite popular. But [Chris Lam] has just one subject in mind: the hummingbird. He devised a custom setup to capture the footage he wanted using some neat tech.

To attract the hummingbirds, [Chris] used an off-the-shelf feeder — no need to re-invent the wheel there. To obtain the closeup footage required, a 4K action cam was used. This was attached to the feeder with a 3D-printed mount that [Chris] designed.

When it came to detecting the presence of a hummingbird in the video, there were various approaches that could have been considered. On the hardware side, PIR and ultrasonic distance sensors are popular for projects of this kind, but [Chris] wanted a pure software solution. The commonly used motion detection libraries for this type of project might have fallen over here, since the whole feeder was swinging in the air on a string, so [Chris] opted for machine learning.

A ResNet architecture was used to run a classification on each frame to determine whether the image contained a hummingbird or not. The initial attempt was not greatly successful, but after cropping the image to a smaller area around the feeder, classification accuracy greatly increased. After a bit of FFmpeg magic, the selected snippets were concatenated to make one video containing all the interesting parts; you can see the result in the clip after the break.
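For anyone wanting to try something similar, here’s a rough sketch of the per-frame classification step. It isn’t [Chris]’s code: it assumes a ResNet-18 with a fine-tuned two-class head and an arbitrary crop rectangle around the feeder, and the timestamps it returns would then be handed to FFmpeg to cut and join the interesting snippets.

```python
# Sketch: classify cropped video frames as "hummingbird" / "no hummingbird".
import torch
import torchvision.transforms as T
from torchvision.models import resnet18
import cv2

model = resnet18(num_classes=2)   # in practice, load fine-tuned weights here
model.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def hummingbird_frames(video_path, crop=(800, 400, 600, 600), every_n=10):
    """Return timestamps (seconds) of frames classified as containing a bird.
    `crop` is an (x, y, w, h) box around the feeder -- the cropping step that
    boosted accuracy. The numbers here are placeholders."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    hits, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            x, y, w, h = crop
            patch = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
            with torch.no_grad():
                logits = model(preprocess(patch).unsqueeze(0))
            if logits.argmax(1).item() == 1:   # class 1 = hummingbird
                hits.append(idx / fps)
        idx += 1
    cap.release()
    return hits
```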

It seems that machine learning and wildlife cams are a match made in heaven. We’ve already written about a proof-of-concept project which identifies different animals in the footage when motion is detected.

Continue reading “Hummingbirds, 3D Printing, and Deep Learning”

Facebook Wants to Teach Machine Learning

When you think of technical education about machine learning, Facebook might not be the company that pops into your head. However, the company uses machine learning, and they’ve rolled out a six-part video series that they say “shares best real-world practices and provides practical tips about how to apply machine-learning capabilities to real-world problems.”

The videos correspond to what they say are the six aspects of machine learning development:

  1. Problem definition
  2. Data
  3. Evaluation
  4. Features
  5. Model
  6. Experimentation

Continue reading “Facebook Wants to Teach Machine Learning”

Disney’s New Robot Limbs Trained Using Neural Networks

Disney is working on modular, intelligent robot limbs that snap into place with magnets. The intelligence comes from a reasonably sized neural network that also incorporates some modularity. The robot is their Snapbot, whose base unit can fit up to eight limbs, and so far they’ve trained it with up to three attached.

The modularity further extends to a choice of three limb types: one with roll and pitch, another with yaw and pitch, and a third with roll, yaw, and pitch. Interestingly, of the three, the yaw-pitch limb seems to be the most effective.

In this age of massive, deep neural networks requiring GPUs or even online services to train in a reasonable amount of time, it’s refreshing to see that this one is only two layers deep and can be trained in three hours on a single core of a 3.4 GHz Intel i7 processor. Three hours may still seem long, but remember, this isn’t a simulation in a silicon virtual world. This is real life, where the servo motors have to actually move. Of course, they didn’t want to sit around and reset the robot after each attempt to cross the table, so they built in an automatic mechanism that pulls it back to the starting position before it tries again. To further speed up training, they found that once they’d trained for one limb, they could copy the last of the network’s layers to get a head start on the training for two limbs.
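As a back-of-the-napkin illustration of that last trick, here’s a sketch of what a tiny two-layer policy and the layer-copying warm start might look like. This is our guess at the general idea rather than Disney’s actual architecture; the layer sizes and the state/command layout are made up.

```python
# Sketch: a two-layer policy mapping joint states to servo commands, plus a
# warm start that reuses the trained output layer when more limbs are added.
import torch
import torch.nn as nn

def make_policy(n_joints: int, hidden: int = 64) -> nn.Sequential:
    # "Two layers deep": one hidden layer, one output layer.
    return nn.Sequential(
        nn.Linear(n_joints * 2, hidden),   # joint positions + velocities in
        nn.Tanh(),
        nn.Linear(hidden, n_joints),       # one command per servo out
    )

one_limb = make_policy(n_joints=2)         # e.g. a single yaw-pitch limb
# ... train one_limb on the real robot (the ~3 hour part) ...

two_limb = make_policy(n_joints=4)
with torch.no_grad():
    # Copy the last layer from the single-limb network into the slots
    # corresponding to the first limb's servos, as a head start.
    two_limb[2].weight[:2, :] = one_limb[2].weight
    two_limb[2].bias[:2] = one_limb[2].bias
```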

Why bother with training? After all, we’ve seen pretty awesome multi-limbed robots working with manual coding, an example being this hexapod tank based on the one from the movie Ghost in the Shell. The Disney team tried manual coding too, and when they compared it against the trained gait, the trained robot moved further in the same amount of time. At a minimum, we can learn a trick or two from this modular crawler.

Check out their article for the details and watch it in action in its learning environment below.

Continue reading “Disney’s New Robot Limbs Trained Using Neural Networks”

Nvidia Transforms Standard Video Into Slow Motion Using AI

Nvidia is back at it again with another awesome demo of applied machine learning: artificially transforming standard video into slow motion – they’re so good at showing off what AI can do that anyone would think they were trying to sell hardware for it.

Though most modern phones and cameras have an option to record in slow motion, it often comes at the expense of resolution, and always at the expense of storage space. For really high frame rates you’ll need a specialist camera, and you often don’t know that you should be filming in slow motion until after an event has occurred. Wouldn’t it be nice if we could just convert standard video to slow motion after it was recorded?

That’s just what Nvidia has done, all nicely documented in a paper. At its heart, the algorithm takes two frames and artificially creates one or more frames in between. This is not a hand-tuned interpolation algorithm; it’s a fully fledged deep-learning system. The Convolutional Neural Network (CNN) was trained on over a thousand videos – roughly 300k individual frames.

Since none of the parameters of the CNN are time-dependent, it’s possible to generate as many intermediate frames as required, something which sets this solution apart from previous approaches.  In some of the shots in their demo video, 30fps video is converted to 240fps; this requires the creation of 7 additional frames for every pair of consecutive frames.
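To see why the time-independence matters, consider the bookkeeping for an 8× slow-down: the model is simply asked for frames at t = 1/8, 2/8, … 7/8 between each pair. The toy sketch below uses a plain cross-fade as a stand-in for Nvidia’s flow-based CNN, purely to illustrate that structure.

```python
# Toy illustration of time-parameterized frame interpolation (30 fps -> 240 fps).
import numpy as np

def interpolate(frame0: np.ndarray, frame1: np.ndarray, t: float) -> np.ndarray:
    # Stand-in for the learned model: a simple cross-fade. The real network
    # estimates optical flow and warps both frames toward time t before blending.
    return (1.0 - t) * frame0 + t * frame1

def slow_motion(frames, factor=8):
    """Insert factor-1 synthetic frames between each consecutive pair."""
    out = []
    for f0, f1 in zip(frames, frames[1:]):
        out.append(f0)
        for k in range(1, factor):          # 7 new frames for 8x slow motion
            out.append(interpolate(f0, f1, k / factor))
    out.append(frames[-1])
    return out
```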

The video after the break is seriously impressive, though if you look carefully you can see the odd imperfection, like the hockey player’s skate or the dancer’s arm. Deep learning is as much an art as a science, and if you understood all of the research paper then you’re doing pretty darn well. For the rest of us, get up to speed by wrapping your head around neural networks, and trying out the simplest TensorFlow example.

Continue reading “Nvidia Transforms Standard Video Into Slow Motion Using AI”

Stock Market Prediction With Natural Language Machine Learning

Machines – is there anything they can’t learn? Twenty years ago, the answer to that question would have been very different. However, with modern processing power and deep learning tools, it seems that computers are getting quite nifty in the brainpower department. In that vein, a research group attempted to use machine learning tools to predict stock market performance, based on publicly available earnings documents.

The team used the Azure Machine Learning Workbench to build their model, one of many tools now on the market for such work. To train the model, earnings releases were combined with stock price data from before and after the announcements were made. Natural language processing was used to interpret the earnings releases, with steps taken to clean the input by removing stop words, punctuation, and other noise. The model then attempted to find a relationship between the language content of the releases and the subsequent impact on the stock price.
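The clean-up step is the least glamorous but arguably the most important part. Here’s a simplified sketch of what it might look like; this is not the team’s actual Workbench pipeline, and the stop-word list is just a token example.

```python
# Sketch: normalize an earnings release before vectorizing it for the model.
import re
import string

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "on", "we", "our"}

def clean_release(text: str) -> list[str]:
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))  # drop punctuation
    tokens = re.split(r"\s+", text.strip())
    return [t for t in tokens if t and t not in STOP_WORDS]           # drop stop words

print(clean_release("In Q3, we delivered record revenue of $1.2B and expanded margins."))
# ['q3', 'delivered', 'record', 'revenue', '12b', 'expanded', 'margins']
```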

Particularly interesting were the vocabulary issues the team faced throughout the development process. In many industries, there is a significant amount of jargon – that is, vocabulary that is highly specific to the topic in question. The team decided to work around this, by comparing stocks on an industry-by-industry basis. There’s little reason to be looking at phrases like “blood pressure medication” and “kidney stones” when you’re comparing stocks in the defence electronics industry, after all.

With the model built, the team put it to the test. Stocks were sorted into three bins: low performing, middle performing, and high performing. Their most successful result was a 62% success rate at picking out low-performing stocks, well above the roughly 33% you’d expect from chance alone with three bins. This suggests that there’s plenty of scope for further improvement in this area. As with anything in the stock market space, expect development here to continue at a furious pace.

We’ve seen machine learning do great things before, too – even creative tasks, like naming tomatoes.