[Will Forfang] found an app that lets you take a picture of a math equation with your phone and ask for a solution. However, the app wouldn’t read handwritten equations, so [Will] decided to see how hard that would be using a neural network.
The results are pretty impressive (you can also see the video below). [Will] used his own handwriting on a chalkboard and trained the network on that. He went even further and added some heuristics to identify fraction bars and infer the grouping of symbols from the relative size of the bars.
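[Will] doesn’t spell out his exact heuristic, but the general idea is easy to sketch. Assuming each recognized symbol comes back as a bounding box, something like the following (all names are ours, not [Will]’s) picks out fraction bars and splits the surrounding symbols into numerator and denominator:

```python
from dataclasses import dataclass

@dataclass
class Symbol:
    label: str
    x: float   # left edge
    y: float   # top edge (y grows downward)
    w: float   # width
    h: float   # height

def is_fraction_bar(sym, min_aspect=4.0):
    # A wide, short horizontal stroke is probably a fraction bar
    # rather than a minus sign.
    return sym.label == '-' and sym.w / max(sym.h, 1e-6) >= min_aspect

def group_fraction(bar, symbols):
    # Numerator: symbols whose centers sit above the bar within its span;
    # denominator: those below.
    spanned = [s for s in symbols if s is not bar
               and bar.x <= s.x + s.w / 2 <= bar.x + bar.w]
    numerator   = [s for s in spanned if s.y + s.h / 2 < bar.y]
    denominator = [s for s in spanned if s.y + s.h / 2 > bar.y]
    return numerator, denominator

syms = [Symbol('-', 0, 10, 30, 1),   # a wide bar...
        Symbol('x', 10, 0, 8, 8),    # ...with a symbol above it
        Symbol('2', 10, 14, 8, 8)]   # ...and a symbol below it
bars = sorted((s for s in syms if is_fraction_bar(s)), key=lambda b: -b.w)
num, den = group_fraction(bars[0], syms)  # num[0].label == 'x', den[0].label == '2'
```

Sorting the bars widest-first is what makes the relative-size trick work on nested fractions: the outermost bar claims its symbols before any bar nested inside it gets a turn.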
We wake up this morning to the news that Google’s deep neural network and tree search project called AlphaGo has beaten the second-ranked world Go master (who happens to be a human being). This is the first of five matches between the two adversaries that will play out this week.
On one hand, this is a sign of maturing technology. It has been almost twenty years since Deep Blue beat Garry Kasparov, the reigning chess world champion at the time. Although there are still four games to play against Lee Sedol, it was recently reported that AlphaGo beat European Go champion Fan Hui in five straight games. Go is generally considered a more difficult game for machine minds to play than chess, because Go has a much larger pool of possible moves at any given time.
Does This Matter?
Okay, the news part of this event has been covered: machine beats man. Does it matter? Will this affect your life and how? We want to hear what you think in the comments below. But I’m going to keep going with some of my thoughts on the topic.
Let’s look first at what AlphaGo did to win. At its core, the game of Go is won by figuring out where your opponent will likely make a low-percentage move and then capitalizing on that choice. Know Your Enemy has been a tenet of strategy for a few millennia now and it holds true in the digital age. In addition to the rules of the game, AlphaGo was fed a healthy diet of 30 million positions from expert games. This builds behavior recognition into the system. Not just what moves can be made, but what moves are most likely to be made.
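The real policy network behind that behavior recognition is a deep convolutional net, but the supervised part of the training boils down to something very familiar: nudge a move-probability model toward whatever the expert actually played. Here is a toy numpy version, our own single-layer simplification rather than anything resembling DeepMind’s actual code:

```python
import numpy as np

# Toy softmax move predictor: a single-layer stand-in for AlphaGo's
# policy network (the real thing is a deep convolutional net trained
# on those 30 million expert positions).
BOARD = 9 * 9                        # toy 9x9 board, flattened
W = np.zeros((BOARD, BOARD))         # board features -> move logits

def predict(position):
    # Probability for every point on the board.
    logits = position @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def train_step(position, expert_move, lr=0.1):
    # Cross-entropy gradient step toward the move the expert chose.
    global W
    p = predict(position)
    target = np.zeros(BOARD)
    target[expert_move] = 1.0
    W += lr * np.outer(position, target - p)

rng = np.random.default_rng(0)
position = rng.choice([-1.0, 0.0, 1.0], size=BOARD)  # -1 white, 0 empty, +1 black
train_step(position, expert_move=40)  # pretend the expert played the center
```

Run that over millions of expert positions instead of one fake one and you get a model of what moves are most likely to be made, which is exactly the behavior-recognition piece described above.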
DeepMind, the company behind AlphaGo that was acquired by Google in 2014, has published a paper in Nature about their approach. They were even nice enough to let us read it without dealing with a paywall. The secret sauce is the learning process, which at its core tries to mimic how living entities learn: observe repetitively while assigning values to outcomes. This is key, as it leads past “intellect” to “intelligence” (the “I” in AI that everyone seems to be waiting for). But this is a bastardized version of “intelligence”. AlphaGo is able to recognize and predict behavior, then make choices that lead to a desired outcome. This is more than intellect, as it assigns value to the purpose behind an opponent’s decisions. But it falls short of intelligence, as AlphaGo doesn’t consciously understand the purpose it has detected. In my mind this is exactly what we need: truly successful machine learning will be able to make sense of sometimes-irrational input.
The paper from Nature doesn’t go into details about Go, but it explains the approach of the learning system as applied to the Atari 2600. The algorithm was given 210×160 color video at 60 Hz as input and told it could use a joystick with one button. From there it taught itself to play 49 games. It was not told the purpose or the rules of the games, but it was given examples of scores from human performance and rewarded for high-scoring performances of its own. The chart above shows that it learned to play 29 of them at or above human skill level.
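The deep network is what makes this work on raw pixels, but the value-assignment loop underneath is classic Q-learning. Stripped of the “deep” part, the idea fits in a few lines; this is a tabular sketch of our own, not DeepMind’s implementation:

```python
import random
from collections import defaultdict

Q = defaultdict(float)       # (state, action) -> estimated value
ACTIONS = range(18)          # the Atari joystick: 18 discrete inputs

def choose_action(state, epsilon=0.1):
    # Mostly pick the highest-valued action, sometimes explore.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, alpha=0.1, gamma=0.99):
    # Nudge the value toward reward + the discounted best value of
    # whatever state the action led to. Reward is the score change,
    # which is all the agent was ever told about the games.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

The table is hopeless for real Atari frames, which is why the paper replaces it with a neural network that generalizes across similar screens, but the “observe, assign value, repeat” loop is the same.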
There are a surprising number of Raspberry Pis being used in industrial equipment, which means the Arduino is being left behind. No longer: now there are PLCs built around Arduinos.
A few weeks ago, Google introduced a machine intelligence and computer vision technique that made the world look psychedelic. Now the library behind it is available. On another note, head-mounted displays exist, and a sufficiently creative person could mash these two things together into a very, very cool project.
Welcome to Kickstarter! Kickstarter is an uphill battle. People will doubt you because you don’t have a ‘target audience’ or ‘the rights to this franchise’ or ‘any talent whatsoever’, but that’s what crowdfunding is for!
Several years ago, Apple shipped a few million 17″ iMacs with defective displays. They’re still useful computers, though, especially if you can find a replacement LCD. Apple, in all its wisdom, used a weird connector for this LCD. Here’s an adapter board that will allow replacement displays running at up to 1920×1200.
[Jan] has earned a reputation for building some very cool synths out of single ARM chips. His previous build was a Drumulator, and now he’s shrinkified it. He’s put four drum sounds, pitch CV, and audio out on an 8-pin DIP ARM.
YouTube gives you cadmium! [AvE] recently got 100,000 subscribers on his YouTube channel. Apparently, YouTube sends you a terrible belt buckle when you manage to do that. At least he did it without playing video games and screaming.
Most electronic components available today are just improved versions of what was available a few years ago. Microcontrollers get faster, memories get larger, and sensors get smaller, but we haven’t seen a truly novel component in years or even decades. There is no electronic component more interesting, or with more novel applications, than the memristor, and now they’re available commercially from Knowm, a company on the bleeding edge of putting machine learning directly onto silicon.
The entire point of digital circuits is to store information as a series of ones and zeros. Memristors also store information, but do so in a completely analog way. Each memristor changes its own resistance in response to the current that has gone through it: ‘writing’ with a positive voltage lowers the resistance, and ‘writing’ with a negative voltage puts the device back into a high-resistance state.
This new memristor is based on research done by [Dr. Kris Campbell] of Boise State University, the same researcher responsible for the silver chalcogenide memristors we saw earlier this year. Like those earlier devices, the Knowm memristor is built using silver chalcogenide molecules. To lower the resistance of the memristor, a positive voltage ‘pulls’ silver ions into the metal chalcogenide layer. The silver ions stay in this chalcogenide layer until they are ‘pushed’ back with the application of a negative voltage. This gives the memristor its core functionality: being able to remember how much current has gone through it.
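Knowm hasn’t published a simple equation for their device here, but the classic way to capture this behavior is a state variable tracking ion drift. Here’s a toy simulation in the style of HP’s linear drift model, with made-up constants and none of Knowm’s actual device physics, showing resistance falling as positive ‘write’ pulses accumulate:

```python
# Toy memristor model (HP-style linear ion drift, arbitrary constants).
# The state x in [0, 1] tracks how far the ions have been 'pushed' in,
# and the resistance blends between a low and a high limit accordingly.
R_ON, R_OFF = 100.0, 16000.0    # fully-doped / fully-undoped resistance, ohms
MU = 1e3                        # drift-rate constant, arbitrary toy units

def step(x, v, dt=1.0):
    # One timestep under applied voltage v: positive v drives ions in
    # (resistance drops), negative v pulls them back out.
    r = R_ON * x + R_OFF * (1.0 - x)    # resistance for the current state
    i = v / r                           # current through the device
    x = min(1.0, max(0.0, x + MU * i * dt))
    return x, r

x = 0.05                        # start nearly 'erased' (high resistance)
for _ in range(200):            # a train of positive write pulses...
    x, r = step(x, 1.0)
print(x, r)                     # ...leaves the device near its low-resistance state
```

Reverse the sign of the applied voltage and the same loop walks the resistance back up, which is the ‘push’/‘pull’ behavior described above.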
This technology is different from the first memristors made by HP in 2008, and it has allowed Knowm to create functional memristors on silicon with a relatively high yield. Knowm is currently selling a ‘tier 3’ memristor part in which only two of the eight devices on the chip failed QC testing. A ‘tier 1’ part, with all eight memristors working, is available for $220.
As for applications, Knowm is using this technology in something they call Thermodynamic RAM, or kT-RAM. This is a small coprocessor that allows for faster machine learning than would be possible with a much more traditional computer architecture. The kT-RAM uses a binary tree layout, with memristors serving as the links between nodes.
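Knowm’s own documentation is the place to go for the real architecture; all we’re told here is ‘binary tree with memristors as links’, so take this sketch as pure speculation about what routing down such a tree might look like, with link conductances adapting as they’re used:

```python
# Highly speculative toy, not Knowm's design: route addresses down a
# binary tree, nudging each traversed link's 'conductance' up and
# letting unused links decay, as a stand-in for analog adaptation.
class Node:
    def __init__(self, depth, max_depth):
        self.g = [1.0, 1.0]     # conductance of the left/right links
        self.child = (None if depth == max_depth
                      else [Node(depth + 1, max_depth),
                            Node(depth + 1, max_depth)])

def access(node, address, bits, strengthen=0.05, decay=0.999):
    # Walk the tree along the address bits, adapting link conductances.
    for level in range(bits):
        branch = (address >> (bits - 1 - level)) & 1
        node.g[branch] += strengthen      # reinforce the link we used
        node.g[1 - branch] *= decay       # let the other one fade
        if node.child is None:
            break
        node = node.child[branch]
    return node

root = Node(0, max_depth=3)
for addr in (0b101, 0b101, 0b010):        # repeated access strengthens a path
    access(root, addr, bits=3)
```

The appeal of doing this with physical memristors rather than code is that the adaptation happens in the devices themselves, with no separate training pass.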
While it’s much too soon to say if a kT-RAM processor will be better or more efficient at real-world machine learning tasks, a machine learning coprocessor does carry a faint echo of the machine learning silicon developed during the ’80s AI renaissance. Thirty years ago, neural nets on a chip were created by a few companies around Boston, until someone realized those networks could be simulated far more efficiently on a desktop PC. The kT-RAM is somewhat novel and highly parallel, though, and with a new electronic component behind it, it could be just what is needed to push machine learning directly into silicon.