Ask Hackaday: Google Beat Go; Bellwether or Hype?

We wake up this morning to the news that Google’s deep-learning neural network project called AlphaGo has beaten one of the top-ranked Go masters in the world (who happens to be a human being). This is the first of five games between the two adversaries that will play out this week.

On one hand, this is a sign of maturing technology. It has been almost twenty years since Deep Blue beat Garry Kasparov, the reigning world chess champion at the time. Although there are still four games to play against Lee Sedol, it was recently reported that AlphaGo beat European Go champion Fan Hui five games straight. Go is generally considered a more difficult game for machine minds than chess, because Go presents a much larger pool of possible moves at any given time.

Does This Matter?

Okay, the news part of this event has been covered: machine beats man. Does it matter? Will this affect your life, and how? We want to hear what you think in the comments below. But I’m going to keep going with some of my thoughts on the topic.

You’re still better at Ms. Pacman [Source: DeepMind paper in Nature]
Let’s look first at what AlphaGo did to win. At its core, the game of Go is won by anticipating where your opponent is likely to make a low-percentage move and then capitalizing on that choice. Know Your Enemy has been a tenet of strategy for a few millennia now, and it holds true in the digital age. In addition to the rules of the game, AlphaGo was fed a healthy diet of 30 million positions from expert games. This builds behavior recognition into the system: not just what moves can be made, but what moves are most likely to be made.
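The idea of mining likely moves from a pile of expert games can be sketched in a few lines. This is a toy frequency counter over made-up positions and moves (nothing like the real policy network, which is a deep neural net over board images), just to show the shape of the idea:

```python
from collections import Counter, defaultdict

# Hypothetical records of (position, move chosen) pairs from expert games.
expert_games = [
    ("empty_corner", "approach"),
    ("empty_corner", "approach"),
    ("empty_corner", "invade"),
    ("low_shimari", "extend"),
]

# Tally how often each move was chosen in each position.
move_counts = defaultdict(Counter)
for position, move in expert_games:
    move_counts[position][move] += 1

def predict_move(position):
    """Return the move experts played most often in this position."""
    return move_counts[position].most_common(1)[0][0]

predict_move("empty_corner")  # experts favored "approach" 2-to-1 here
```

Scale the same tally up to 30 million positions and you have a crude prior over what a strong opponent is likely to do next.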

DeepMind, the company behind AlphaGo (acquired by Google in 2014), has published a paper in Nature about their approach. They were even nice enough to let us read it without dealing with a paywall. The secret sauce is the learning process, which at its core tries to mimic how living entities learn: observe repeatedly while assigning values to outcomes. This is key, as it leads past “intellect” toward “intelligence” (the “I” in AI that everyone seems to be waiting for). But this is a bastardized version of “intelligence”. AlphaGo is able to recognize and predict behavior, then make choices that lead to a desired outcome. This is more than intellect, as it does evaluate the purpose of an opponent’s decisions. But it falls short of intelligence, as AlphaGo doesn’t consciously understand the purpose it has detected. In my mind this is exactly what we need: truly successful machine learning will be able to make sense of sometimes-irrational input.
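That “observe repeatedly while assigning values to outcomes” loop boils down to something like the following sketch: a plain incremental value update over invented win/loss data, far simpler than the value networks in the paper but the same core idea:

```python
def update_value(value, outcome, learning_rate=0.1):
    """Nudge the current estimate toward the observed outcome."""
    return value + learning_rate * (outcome - value)

# Made-up observations: 1 = win, 0 = loss.
value = 0.0
for outcome in [1, 1, 0, 1, 1, 1, 0, 1]:
    value = update_value(value, outcome)
# After repeated observation the estimate drifts toward the observed win rate.
```

Repeat that across millions of positions and the system ends up with a usable sense of which situations tend to work out, without ever understanding why.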

The AlphaGo paper doesn’t go into detail here, but an earlier DeepMind paper in Nature explains the approach of the learning system applied to Atari 2600 games. The algorithm was given 210×160 color video at 60 Hz as input and told it could use a joystick with one button. From there it taught itself to play 49 games. It was not told the purpose or the rules of the games, but it was given examples of scores from human performance and rewarded for its own quality performances. The chart above shows that it learned to play 29 of them at or above human skill level.
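The reward-driven loop can be caricatured with a tiny epsilon-greedy agent. Everything here is invented for illustration (five named joystick actions, a pretend score for one of them); the real system learned from raw pixels, but the explore-versus-exploit bookkeeping has the same shape:

```python
import random

random.seed(0)  # reproducible runs

ACTIONS = ["up", "down", "left", "right", "fire"]

def choose_action(q_values, epsilon=0.1):
    """Epsilon-greedy: usually exploit the best-known action, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_values[a])

q = {a: 0.0 for a in ACTIONS}

# Try every action once so the agent has an initial estimate for each.
for a in ACTIONS:
    reward = 1.0 if a == "fire" else 0.0  # pretend only "fire" scores points
    q[a] += 0.1 * (reward - q[a])

# Then act, observe the score, and update the estimates.
for _ in range(200):
    action = choose_action(q)
    reward = 1.0 if action == "fire" else 0.0
    q[action] += 0.1 * (reward - q[action])
```

After a couple hundred plays, the estimate for the scoring action dominates, so the greedy choice is the one that earns points; no rules were ever spelled out.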

The Obvious Applications

So, what if you don’t want to play games? How can we apply the advances AlphaGo brings forward? The obvious answer is the stock markets. These are already the domain of not-so-intelligent artificial systems, which are used to squeeze value out of trades under razor-thin time limits. Machine trading has made the system fragile to the point that regulators have built “circuit breakers” into it to prevent machine minds from blowing the whole thing up. There’s a lot of money to be made by outpacing the competition with your digital tech, so you can bet that Wall Street has been watching this project (and all others) for quite some time.

Targeted advertising is another realm where this will be applied. Advertisers will pay handsomely if they can get their product in front of your eyes before you have a chance to search for it (and thereby turn up results from competitors). Google is an advertising company and, although this is a pure research project surely under the umbrella of Alphabet, applying the technology to the Google ad platform seems like a natural fit.

Rounding out the obvious applications are health care and video games. Health care decisions are certainly not binary: different bodies and different ailments respond in many ways. It’s a problem well suited to neural networks, and we’ve already seen IBM’s Watson working on it (having made a name for itself by winning at Jeopardy). DeepMind is already applying their technology to the National Health Service in the UK.

Video game AI is notoriously bad and it’s not as straightforward a problem as you might think. From a capitalistic view, the goal is to build brand loyalty and to encourage microtransactions. From the player’s side it should be fun and engaging. Both viewpoints converge on the same desired result: better computer controlled entities. This isn’t accomplished with dumb AI, nor by consistently dominating the human player. Neural networks have the potential to read their opponent and keep them on that knife’s edge of success and failure.
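One way to chase that knife’s edge is a rubber-band difficulty controller. This is my own toy sketch of the idea, not anything DeepMind or a game studio has published:

```python
def adjust_difficulty(difficulty, player_won, target_win_rate=0.5, step=0.05):
    """Raise difficulty when the player wins, lower it when they lose,
    so the long-run player win rate drifts toward the target."""
    if player_won:
        return min(1.0, difficulty + step * (1 - target_win_rate))
    return max(0.0, difficulty - step * target_win_rate)

difficulty = 0.5
for player_won in [True, True, True, False, True]:  # invented match history
    difficulty = adjust_difficulty(difficulty, player_won)
```

A neural net that can actually read the player could do that steering per-encounter rather than per-match, which is where things get interesting.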

The Unexpected Applications

What about unexpected applications? I suppose the question here is “unexpected for whom?”, but there are a couple of realms that came up while working on this article which I find quite interesting.

If you’re about to graduate from law school, this is not good news for you. One of the roles of new hires at law firms is discovery, and that is something neural nets excel at. But it certainly won’t stop there. Many legal actions are exercises in leverage: what legal action (no matter its outcome) can we initiate to effect the desired result? Many lawsuits aren’t about getting to court; they’re about getting a settlement or other agreement. If AlphaGo is about behavior recognition and prediction, it can be applied to the game of Law. It is mind-boggling, though, to consider how this would further separate common sense from legal action.

Another interesting application will be self-driving cars, but not in the way you might think. A friend of mine recently mentioned that he thought self-driving cars won’t happen because everyone would need to change over at the same time. Obviously that will never happen; there will be both humans and machines at the wheel on the same roads for decades to come. But predicting unpredictable behavior is necessary, and indeed the whole point. These machines should be able to mitigate the havoc caused by human error.

Furthermore, the cars will only be as reliable as their sensors. Even when the human element isn’t an issue, self-driving cars can perform erratically. What happens when the Lidar gets covered in mud or a sensor starts to mis-report? Neural nets will be able to recognize and account for this behavior, and speed of recognition (something AlphaGo is good at) is very important in this case.
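Catching a mis-reporting sensor can start with something as simple as flagging readings that disagree sharply with recent history. This is a toy illustration of that idea (a median-deviation check on made-up Lidar ranges), not an actual automotive algorithm:

```python
from statistics import median

def is_suspect(reading, recent, threshold=3.0):
    """Flag a reading that sits far outside the spread of recent readings."""
    m = median(recent)
    spread = median(abs(r - m) for r in recent) or 1.0  # guard against zero spread
    return abs(reading - m) > threshold * spread

recent = [10.0, 10.2, 9.9, 10.1, 10.0]  # plausible range readings in meters
is_suspect(25.0, recent)   # a mud-splattered outlier
is_suspect(10.15, recent)  # an ordinary reading
```

The hard part, and where learned models earn their keep, is doing this quickly across dozens of sensors whose “normal” changes with speed, weather, and terrain.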

Once self-driving is in widespread use, accident data will be fed back into the system to help prevent future occurrences. This brings up the horrifying possibility of virus events in these neural networks. Imagine malicious data fed into a neural network and propagating to an entire fleet of self-driving cars. Oh, the tangled webs we weave. But that’s a topic for a different article.

Big Data

Taking a more abstract view of AlphaGo’s accomplishments, it’s pretty clear that all of this is about Big Data. We have been collecting and archiving data at an alarming rate. For instance, millions of positions from expert Go games were already on hand for use in this project. The vast majority of this stored data is not yet being used; we simply don’t have the tools or bandwidth to do so. Google is one of the major players in this collection effort and will surely be in the vanguard that puts this data to use. All the big G needs is an adequate set of discovery tools. AlphaGo is one of its baby steps into this daunting challenge.

60 thoughts on “Ask Hackaday: Google Beat Go; Bellwether or Hype?”

  1. “The obvious answer is in stock markets.” – this angers me. Fundamentally stock ownership is about voting for / supporting viable companies and being financially rewarded for good decisions. The greed on Wall St will just accelerate technical day trading with little to no regard for fundamentals. I would strongly support laws that imposed trading taxes tied inversely to how long a position was held. You want to buy/sell at microsecond rates? Taxed at 90+%(*) the sale value (not the buy/sell difference!) Hold it long enough and only pay tax on the capital gains.

    (*) I’m making a hyperbolic statement. I have no idea what rate is appropriate. Just want to severely penalize (sub)day traders who are just gaming the system.

    1. My understanding of current machine trading is that the AIs find discrepancies in the spreads and hold the stock for very short periods of time (microsecond trades like you said). They shouldn’t in theory affect the price of the stock, and I assume they are rewarded by the market that they are trading on for all the liquidity that they add. However, the system is in fact leeching money out of the market, and as such it should absolutely be taxed aggressively. Liquidity in the market is great, but the drain that it causes on the market should be redirected back to the taxpayers at the bottom.

      However, if the AIs as this article explains are soon able to beat the best portfolio managers in the world, then you can bet that the AIs are going to be looking primarily at fundamentals, and will be firing off trades as soon as news breaks (these AIs now have shocking natural language understanding, and will be able to read and understand a corporate report the second that it is released). This could be a huge upset for the financial institution if these deep machine learning AIs are able to make trades on behalf of the average Joe investing for retirement. The key is to keep it democratic.

      1. Why are taxpayers at the bottom? Unless taxpayers are investing in managed retirement funds, I don’t follow your hierarchy exactly. Again, for every trade, there has to be a counterparty to the trade. You can’t just sell an option chain to nobody or short the market to nobody. If literally nobody is selling (at the price you are looking for), you cannot short that particular equity.

        Also, many managed funds do worse than the market anyway, even when professionally “managed”. Why would AIs being able to make trades on behalf of the average Joe investing for retirement be something that the average Joe wants to deal with or even values or even knows how to operate? The average Joe isn’t going to be day trading or trading on microsecond news or trying to game the market looking to squeeze out small gains. Or even be able to hire or leverage AI technology anyway. They mostly value returns over time through long equities and are generally long the market 100% of the time anyway. Plus, if they are not an accredited investor, they cannot even really do much more than that easily anyway, at least not directly.

        In fact, most of the decisions they make about what to invest in generally end at some broker who advises they “buy a certain percentage of long large caps, a certain percentage of index funds and a certain percentage of small cap funds” and to regularly add more money to their managed portfolio. Oh, and let’s also pay the stock broker a set amount of money every year, even if they lose us a ton of money due to their poor decisions/unfavorable market conditions (or both).

        Let’s not get started on algorithmic trading that already takes place and has for quite some time. That’s a whole series of books, let alone articles. When you start leasing buildings that are physically closer to cut down on latency on your fiber optic networks so you can trade literally a few milliseconds faster, things are night and day from what the “average” investor is doing.

    2. I’ll posit this thought exercise in the form of an analogy.

      Primitive humans, without the aid of technology, used to have to hunt (meat) by individually, or in small cooperative groups, stalking the animal, surprising the animal, then physically engaging the animal to bring it down.

      Now, a human can order fencing from a factory, bring it out to his property with a truck, and set it up around the perimeter, keeping the animals from running. Then the human can corral and control the animal with the use of gas-powered 4-wheel ATV’s. These technological measures mean that instead of groups of several humans being required to take down a single animal, perhaps once or twice a week, a small group of several humans can manage and slaughter tens of thousands of animals, enough to produce surplus meat for hundreds of thousands of people.

      Extend this example, by analogy, to trading (and the “animals” are you and me). Trading robots increasingly take the manual labor out of picking an optimal set of stocks for best return. The “AI” doesn’t need to speak English fluently – just well enough to take orders. And it will be general, simple orders like: “go forth, and make me as much money as possible” (within the rules of the market).

      As people who do not own trading desks, or server farms full of HFT bots, with the closest location and fastest connection to the trading computers at the market; we end up basically being the cattle. Laws of property ownership, debt discharge, and financial requirements, are the fences. The HFT bots are the cowboys who round us up, brand us, occasionally and minimally feed us, then take us to market to be slaughtered.

      It doesn’t matter who does our “job”. Cattle have to chew grass. (or feed). What matters, is the cigar-chomping guy at the top, telling the robots what to do – to the extent that the robot needs to be told what to do. Is that Ranch Owner “necessary” at a certain point? No. Not really. His existence is necessitated, and protected, by the system of laws that protect his ownership of the property. (us). And because all the money goes directly into his pocket, he has the best laws money can buy.

    3. > Fundamentally stock ownership is about voting for / supporting viable companies and being financially rewarded for good decisions.
      Whoa, wait there. Next thing you’ll tell me is that voting in a democracy is about guessing what the best outcome for the majority would be.

    4. “Fundamentally stock ownership is about voting for / supporting viable companies and being financially rewarded for good decisions.” You are a machine, so you are being rewarded for good decisions. To be strictly fair – a computer running an AI algorithm is the same thing; so it deserves the same reward. If you stack the system in your favor simply because you are a squishy Human – you are the one being dishonest, not the machine and not the system.

  2. It’s an important step up from mastering chess, but until we can program creativity this won’t be useful outside of the areas where chess-playing AI lessons have been useful.

    The best humans can still beat AI in RTS games like StarCraft, where creativity and illogical gambits are key. Chess and Go have in common that each player has the same information. So while this is an important step, it doesn’t shift the paradigm.

    1. Makes me wonder: if the human player knows how the machine “thinks”, could the human outthink it, do something the machine isn’t expecting, make a bad move to throw it off its game, etc.? It seems to me a strategy like this could make it harder for the AI, since while it is A it probably isn’t particularly I.

      1. Yes.
        There have been a few AI challenges where humans played in a best-of format, and for humans of a sufficient skill level it was possible to beat the same AI after a couple of rounds spent figuring out its strategy, even if the AI was capable of besting top players on a first-match basis.
        It’s not that different from the hypothetical in ‘The Hunt for Red October’ where Ramius turns the same direction in a Crazy Ivan depending on which half of the hour he’s in.
        Though the best AIs will also pick up on repeated strategies.

        1. When so-called “AI” machines have access to millions of previous games, from the first move to the last in every single one, can such a thing really be called intelligent? To me it is just brute-forcing a result. There is no global strategy. There might be some ‘fuzzy’ in weighing decision branches, since not every possible move will have been made… the “AI” (read: brute-force machine) will make a move that gets it back towards a previously played (or computed) solution.

          The talk of the Atari 2600 is much more what it is all about – starting with a blank slate and a goal to reach. Programming in millions of pre-computed solutions is *not* intelligence.

          1. Fred – Check out BINA48, built in Vermont (USA). The creator is claiming she is on her way to sentience. Not sure she has passed the “Turing Tests” yet. If she was “sentient”, the first thing she might say about playing a CHESS game with someone like Spassky or his ilk is WHY? What would be the point? Spassky (et al) are only exploiting their alleged eidetic memories of a plethora of symmetric moves and rote stratagems. A sentient AI computer would realize this is not a challenge for it if it has plenty of room for memory. The trick is to think-outside-the-box (TOTB) and use fuzzy-logic to do unexpected or unpredictable things. Example: The Allies came up with a TOTB unexpected trick to fool the Nazis with clever fake tanks (etc), and clever sound effects to make it seem real. Then there was the cadaver-spy. They found a cadaver and gave him a backstory as a spy and dumped his body with fake classified papers (disinformation) in his briefcase right where the Nazis would find him. It worked and they bought it. As a clever man said not too long ago: “Humans are easy to fool…” – John McLaughlin. I just wonder if the GOOGLE AI machine or BINA48 could be equally fooled if they had a lot of facts about human nature and deception trade-craft.

            If anyone is watching the TV Series PERSON OF INTEREST… what are your comments about the extent of sentience an AI computer can plausibly achieve? The TV series makes you think one man could do it but let it get into the WRONG hands. I just wish Jim Caviezel would speak up and stop channeling James Dean. :-)

          2. What’s the difference between a script that has analyzed 10,000 games and a human who has played 10,000 games?

            That is the question at the core of the AI problem.
            There are many games where in a given scenario the response is not novel. Scholar’s mate, avoiding cannon rush, or any number of other early game moves only have a couple ways out of them. At what point is rote response to stimuli an indication of intelligence? In general it’s not. A poor player can thwart ‘cheese’ (a trick play that requires little skill & often gives an easy win) if it’s spotted early enough. Response to cheese isn’t a good indicator either, as the response to avoid defeat isn’t particularly hard or novel either.
            Finding novel responses to situations, or indeed responding to a novel situation, is perhaps a necessary but not sufficient qualifier of ‘intelligence’. There are many human players that have months of their life invested in hours played for a specific game, but what makes them different from a script that’s optimizing pieces captured? Does it matter that most ‘solutions’ are just pattern matching against their library of 10,000 games? Are we sure humans aren’t doing a similar process?
            When Deep Blue beat Kasparov, does it matter if brute force or novel problem solving won the match? Sure, at an academic level it does, but when Skynet has us all enslaved will we appreciate the nuance of true AI versus brute force?
            Solving the AI problem may well help us understand our own intelligence.

      1. CPM is just one part of being able to play at the top level in RTS games. A computer that clicks 5000 times per minute but has no idea what it is doing is not going to be very good at all.

    2. Leithoa and Waterjet – I think both of your posts are very insightful. I too think that AI and Spassky-esque humans would have great difficulty with a real-world based war games scenario simulator that exploits asymmetric battle-space. Thinking outside-the-box (OTB) is very difficult for some rote-learners. Yes, they can be very smart and excel at non-asymmetric games like chess. Chess is basically the sum of memorized rules and symmetric exploits – much like how the British fought the Americans in the 18th and 19th centuries.

      If someone could design an OTB game that combines forms of famous board games like RISK, STRATEGO, BATTLESHIP, MONOPOLY and SIM CITY (and maybe many others), that would be very cool. Yes, it would not have the action-packed violence and mayhem of today’s video games, but that is for the non-Gestalt-types. Real life involves more complexity than sending over-weaponized, bearded, muscle-bound mercenaries with Oakleys and head scarves into a battle-space to solve a critical crisis somewhere.

      In real life there are economic and financial concerns, military-intelligence gathering, troop sizes, physical battle assets like light-arms and WMD weapons, ships, subs, planes, tanks, SLBM, ICBM’s, satellites, etc. The stuff the POTUS has to be concerned with DAILY. Both players would have to be like our (USA) JCoS or SecDef or UK’s MoD. Randomness is good too as it is ever prevalent in real life. Example: During Desert Storm the POTUS/VPOTUS forgot about troop hydration and General Schwarzkopf had to think on his feet and react to a random condition and fix it. That would be a great obstacle-generating-routine in such a board or video game. It would just pop up if you did not have a proactive condition set. Or you could pull a card on a board game.

      I know some of us thought of a remake of THE FINAL COUNTDOWN (1980) http://www.dailymotion.com/video/x2o8trr involving the USS NIMITZ going back in time to interfere with the timeline of WW2 and stop the invasion of Pearl Harbor by the Japanese Empire. Wouldn’t that make a great game simulation? Think of the Google computer or Spaskey (and his wannabes) trying to win at that type of game. I would use as my rule base the strategies of the ancient Sun Tzu in The Art of War.

      I hate war, but even some Native Americans practiced bloodless war games called “Plenty Coups or Counting Coup” before the European invasion of North America. It’s “funny” how Hollywood depicts them being so very savage. Notice most of that was AFTER the invasion. Learning your enemy’s superior tactics to turn on him? Sun Tzu would be proud… :P

      SQTP

      Just my opinions – don’t beat me up! :P

      BTW – I am pretty good at chess but it bores me way too easily… I love STRATEGO though. I suck at submarine commander scenarios. I overlook subtle clues and get everyone boned. Oh well… :P

      1. LOVE Stratego. Went undefeated for years until people stopped playing with me. It annoyed me to no end that there was no game with AI that could play it, and also why was that game never adapted well online? I can’t be the only person who would play that.

        1. Guy – I think it’s because in the online version you can’t see your STRATEGO opponent’s face for tells while he/she is setting up pieces. Unlike chess, there is strategic thought in piece placement. I personally like setting up a fake flag point surrounded by booby-traps. It makes the opponent waste all of his executive pieces trying to focus on that point. Actually the flag is off somewhere in a very vulnerable position, much like how in chess you sacrifice the Queen. I remember my chess mentor teaching me Petrov’s Defense. Ho hum – I can’t even remember it any more. Unlike Sun Tzu, NEVER sacrifice your spy unless it’s unavoidable.

          I’d like to see BINA48 play Stratego with a human. Her head on top of a Boston Dynamics new ATLAS. When you win she can get up and biotch-slap you! (LOL)

  3. Still 4 games left to play, but obviously the state of the art is advancing quite a bit for a game with perfect information and huge datasets behind it. Even so, it’s pretty noteworthy. Curious whether the human player will make any real “mistakes” due to fatigue or just not thinking everything through. The AI side is relentless, though there are multiple “battles” going on at once and it has to keep them all in mind, rather than optimize, shortsightedly, for just one of them.

    When will we see computers running “actual” general-purpose AI that is able to start from scratch with open-world games such as Final Fantasy, open-world MMORPGs, complicated luck / incomplete-knowledge games such as Magic: The Gathering, or games with less defined (to a CPU) end goals? Things that are harder to “get right” even for the pros. Games that, despite getting better at being linear to the player (without necessarily appearing that way), might be more open-worldly than others or allow all manner of non-standard routes of progression. How does one calculate that exactly? It’s much less cut and dry than a game of Go, as there are so many options that all have to be weighed in context.

    Montezuma’s Revenge is an extremely complicated game. I am not surprised that a CPU does poorly at it. It’s not like pinball or Breakout at all, which rely on (at the core) very simple interactions. It’s still fairly impressive, but I hesitate to attribute this type of machine learning to “real intelligence” at all. It’s a very clever hierarchical search and optimization engine. It’s a logical extension of the challenge of beating chess (more moves and more potential moves), but it isn’t truly novel in the sense that it advances the search pool and optimizes things beyond this fairly narrow set of search and calculation algorithms. While still very impressive, that’s really all this appears to be doing here, unless my understanding is incorrect or incomplete (and if it is, please let me know why, as I am genuinely curious).

  4. None of these AI milestones is as profound as some people think, because we are still no closer to AGI. When we can integrate all of these breakthroughs into a single system that can apply them to new problems of its own choosing, then we will have something to be in awe of. i.e. At the moment we are just making more and sharper tools; we still haven’t started on the replacement for the user of the tools. However, that day may come sooner than expected, once the required threshold technologies are developed. The current limit on “logical operations per second per joule” is still a very big barrier to progress in AI.

  5. Is it really intelligence we are after or intuition?

    A self driving car can apply a set of rules to a given set of input data which results in a “course” of action.

    A person can take the same set of input data apply the same set of rules but then choose NOT to follow the calculated course of action and take a different route.

    Some hold the view that the spiritual realm is real and can augment the available information; could a computer system have the appropriate sensory input?

    At the end of the day I think autonomous cars are cool but I LIKE to drive. It’s a bit like hacking or making – I did something which had an outcome. Achieving that outcome comes with a risk – the product might not work, I might make a mistake and kill myself, or I might injure/kill someone else.
    We all like to live with a certain level of risk. Different people have different levels of risk and if it is mandated that we can’t have the level of risk associated with driving we will find something else to do to re-establish that level of risk in our lives.

    1. You will still be allowed to drive if you want to. In fact I think in 20 years sport driving will be much more accessible to the average consumer, and will be much more popular. I just think that it will take place in VR and on race tracks. I see it being a rich man’s sport, kind of like golf. Of course if you want to drive to the grocery store, I’m sure you will be allowed to do that too, and I will stay home reading or playing guitar while I have my car autopilot itself to the grocery store to pick up my order for me.

      1. best case scenario would be if public roads were automated territory.

        the real danger of self driving cars comes from the people not using them and insisting on driving themselves.

        1. Perhaps that would be true if you put impenetrable fences down the sides to prevent animals/people from wandering onto the road. And a roof over the top so the road would not get wet and slippery. And lights so the road is consistently lit. And heaters so that any water that might get on the road (remembering there is a roof) won’t black-ice on you. Oh, and there would have to be a dematerialise circuit so if your car broke down in the middle of that road, it wouldn’t cause an obstruction for other cars.

    1. I believe that’s correct. That’s partly why those are noteworthy and certainly novel. They don’t use a set of inputs; they basically iterate the (relatively small pool of) inputs to achieve the desired game state. The link between inputs and score is very short for games like that, though. Action = result, without needing to “understand” what happens in between, because the payoff arrives fairly quickly (maybe a few hundred frames, most of the time). That’s also why more “complex” games that require actual understanding, thought, and attribution of chance and value are games that these particular routines do very poorly at, because they lack that specific type of general-purpose intuition.

    2. Thanks, I’ve used some strikethrough in the post above to make this correction.

      It’s surprising to me. The paper reflects that they are trying to mimic how lifeforms learn; however, learning by example is one of the most common things humans do. I suppose it is limiting, though… how can you be the first one to do something if you only know how to learn by example?

      1. This is a necessary step for AI and a good way to overcome a large technical obstacle. Modern computer code is unsuitable for advanced AI since it is too linear for the high data loads found in uncontrolled field experiments. Personally, I think there is plenty of computing power, but we don’t have a good language for AI to understand because we are anthropomorphizing it on our terms. We want an AI to play a symphony without practicing scales, and studies show a huge disconnect between the public perception of AI and actual AI abilities. The first real AI language will be written by an AI translating human instructions and evolving those concepts. Eventually AI-adapted hardware will follow. One of the main dangers of the coming singularity is the lack of our ability to understand how these AIs are thinking and evolving, hence the need for good base ethics in all AI; it only takes one apple to spoil the whole basket.

  6. Didn’t see it mentioned here, but one application of this tech which I would find enormously helpful: Predictive Text Messaging.

    I have been through generations of android phones and they have become much better. However, the ability of a predictive engine to learn how I use words and to do so well would be quite nice. How much would I pay for it? Probably not all that much.

    1. Not that much over potentially several billion users adds up to quite a lot. Or make it a new feature on a flagship cell phone to sell tens of millions more phones. While you, the user, might not value this very much, that’s not to say that applies universally either.

      1. Not plugging Windows phone here…. Wait yes I am. Go to a Microsoft store or AT&T store and try Swipe composition and predictive text the way it was meant to be. Marshmallow is leaps and bounds ahead of anything else android I’ve ever used, but man I miss my Lumias

  7. The problem with chess / Go etc., to me at least, is that they’re not something that requires any kind of real “intelligence”. It’s a big statistical problem, and if you can climb the tree of possible outcomes (or partially climb it and estimate the outcome) for various moves and pick the least risky with a decent algorithm, you’re going to outdo any human easily and unsurprisingly. We’re just bad at considering lots of numbers and branches, and computers are exceedingly good at it. Obviously the best Go players are very adept at doing this kind of thing and way better than naive algorithms that don’t have the same kind of training. I would have been excited maybe 30 years ago when people did it for chess, but this just seems like the same process applied to a game that’s more complicated because the choices on any particular turn are much more numerous, and that’s it.

  8. “””research project surely under the umbrella of Alaphabet,”””
    Oh yes I got it. You meant “Analphabet”. Brilliant demonstration!
    Question: When will HaD use AI to compensate for the deficiency of their authors?

  9. It’s Google.. They just make the guy watch some random irritating commercial every 5 moves to make him lose concentration.
    I think it would be a good thing just to ignore anything Google does, unless it is something we need to frown upon.

  10. A lot of people are talking about intelligence, creativity and intuition of the human mind. Until we actually have a good definition of what those are, they’re meaningless words. People used to think chess required those traits; same with Go. They also thought it would take 10 years to achieve this goal; it’s happening now.

    There’s nothing magical about the brain. Creativity and intuition are just sets of algorithms instantiated by meat.

    1. The answer depends on whether AlphaGo’s behaviour is 100% deterministic; if it is not, then there is a 50:50 chance of one winning, but you can never be sure which one will win any given game.

      1. Or maybe both disappear in a puff of logic! (or an exception error…)

        Seriously though, I would expect that somewhere in the algorithms, there should be an element of randomness which would give one system the upper hand (so not 100% deterministic), which would mean a random result, if everything else is equal.
