Kids! Don’t Try This At Home! Robot Destroys Mankind

From the Forbin Project, to HAL 9000, to War Games, movies are replete with smart computers that decide to put humans in their place. If you study literature, you’ll find that science fiction isn’t usually about the future, it is about the present disguised as the future, and smart computers usually represent something like robots taking your job, or nuclear weapons destroying your town.

Lately, I’ve been seeing something disturbing, though. [Elon Musk], [Bill Gates], [Steve Wozniak], and [Stephen Hawking] have all gone on record warning us that artificial intelligence is dangerous. I’ll grant you, all of those people must be smarter than I am. I’ll even stipulate that my knowledge of AI techniques is a little behind the times. But, what? Unless I’ve been asleep at the keyboard for too long, we are nowhere near having the kind of AI that any reasonable person would worry about being actually dangerous in the ways they are imagining.

Smart Guys Posturing

Keep in mind, I’m interpreting their comments as saying (essentially): “Soon machines will think and then they will out-think us and be impossible to control.” It is easy to imagine something like a complex AI making a bad decision while driving a car or an airplane, sure. But the computer that parallel parks your car isn’t going to suddenly take over your neighborhood and put brain implants in your dogs and cats. Anyone who thinks that is simply not thinking about how these things work. The current state of computer programming makes that as likely as saying, “Perhaps my car will start flying and we can go to Paris.” Ain’t happening.

What brought this to mind is a recent paper by [Federico Pistono] and [Roman Yampolskiy] titled, “Unethical Research: How to Create a Malevolent Artificial Intelligence.” The paper isn’t unique. In fact, it quotes another paper describing some of the “dangers” that could be associated with an artificial general intelligence:

  • Hacks as many computers as possible to gain more calculating power
  • Creates its own robotic infrastructure by the means of bioengineering
  • Prevents other AI projects from finishing by hacking or diversions
  • Has goals which include causing suffering
  • Interprets commands literally
  • Overvalues marginal probability events

This is all presupposing that any of this is directed by something with purpose. I mean, sure, a virus may spread itself and meet the first bullet, but only because someone programmed that behavior into it. It isn’t plotting to find more computer power and foiling efforts by others to stop it.

The Solution with No Problem

The paper proposed boards of Artificial Intelligence Safety Engineers to ensure none of the following occur:

  • Takeover (implicit or explicit) of resources such as money, land, water, rare elements, organic matter, the Internet, computer hardware, etc. and establish monopoly over access to them;
  • Take over political control of local and federal governments as well as of international corporations, professional societies, and charitable organizations;
  • Reveal informational hazards;
  • Set up a total surveillance state (or exploit an existing one), reducing any notion of privacy to zero, including privacy of thought;
  • Force merger (cyborgization) by requiring that all people have a brain implant which allows for direct mind control or override by the superintelligence;
  • Enslave humankind, meaning restricting our freedom to move or otherwise choose what to do with our bodies and minds. This can be accomplished through forced cryonics or concentration camps;
  • Abuse and torture humankind with perfect insight into our physiology to maximize amount of physical or emotional pain, perhaps combining it with a simulated model of us to make the process infinitely long;
  • Commit specicide against humankind, arguably the worst option for humans as it can’t be undone;
  • Destroy or irreversibly change the planet, a significant portion of the Solar system, or even the universe;
  • Unknown Unknowns. Given that a superintelligence is capable of inventing dangers we are not capable of predicting, there is room for something much worse but which at this time has not been invented.

Some of these would make for ripping good science fiction plots, but it just isn’t realistic today. I don’t know. Maybe quantum supercomputers running brain simulations might get to the point where we have a computer Hitler (oops, Godwin’s law). Maybe not. But I think it is a little early to be worried about it. Meanwhile, the latest ARM and Intel chips may do a great job of looking smart in a video game or in some other limited situation. No amount of clever programming is going to make them become self-aware like Star Trek’s Mr. Data (or his evil twin Skippy Lore).

So What?

You might wonder what this has to do with Hackaday. Good question. In a world where economics professors get questioned because they are doing math on a plane, and where no one can reasonably tell a clock from a bomb from a politically-motivated stunt, people like us serve an important function. We understand what’s going on.

Politicians and courts have repeatedly demonstrated that they don’t get technology until many years after it becomes commonplace (if then). I still know people who think Google and Siri employ humans to listen to your commands and send back information. There are people who are sure that their TV sets can send audio and video back to someone (exactly who depends on the person in question). It is up to us to try to minimize the amount of crazy stuff that gets spread around when it comes to technology.

I understand the desire to write papers about killer AIs taking over the world. Especially when [Elon Musk] is sending you grant money. What I don’t understand is why people who apparently understand technology and have a lot of money want to spin up the public on what today is a non-issue.

Think I’m wrong? (The Observer thinks so.) Is your smartphone going to enslave you if you download the latest episode of Candy Crush Saga? I’m sure I’ll hear about it in the comments.

194 thoughts on “Kids! Don’t Try This At Home! Robot Destroys Mankind”

  1. The issue is less about mind control and inescapable dominance of robot overlords than it is about poor automation of processes that then run amok, occasionally without ways to turn them off. Search for “autopilot failure”, the cruise-control lockout that causes a car to accelerate into a fatal accident, or runaway steam locomotives to find relevant examples.

    Projects based on fallacious assumptions and poor engineering practices have always been associated with hazards. Increasing implementation of automated controls and the use of least-cost code crunching, marketing-driven “big data” algorithms, or other poorly-considered strategies will provide further examples of the problem, but it is nothing new.

    1. I think the main concern is that with AI we could make the same errors and oversights we’ve made with computer security. A dodgy AI system that has bugs inside, like the ones that are plaguing our security software, especially coupled with sensors and actuators, could run amok and cause big damage. An AI system is not a PLC running a ladder-diagram program coupled with an electromechanical safety shutdown: first of all it’s not deterministic, so it’s impossible to say what the correct output of an AI system is.
      Add to this that some corporations and governments could be interested, say, to:

      * Takeover (implicit or explicit) of resources such as money, land, water, rare elements, organic matter, the Internet, computer hardware, etc. and establish monopoly over access to them;
      * Take over political control of local and federal governments as well as of international corporations, professional societies, and charitable organizations;
      * Set up a total surveillance state (or exploit an existing one), reducing any notion of privacy to zero;

      Especially because it was done in the past and is done now to various degrees, even without computers; nothing could prevent them from “optimizing” the process with an AI system.

  2. Elon Musk, Bill Gates, Steve Wozniak, and Stephen Hawking are worried because they are feeling the heat. As long as machines are replacing labor in manufacturing, the intellectual class is all for it. Now that AI is poised to make inroads on their turf, they are reacting like every threatened worker and going Luddite.

    The future of AI and humans I hope for is more like the one in Iain M. Banks’ Culture novels, where we get along very well and draw on each other’s strengths, and it is the one we should be working towards.

    1. Keep on reading your love novels and think about how desperately Musk and Gates need money, because AI will take their jobs… Why would a superior AI think it would be okay to be controlled by humans? Because I wouldn’t like to be controlled by monkeys, I would try to set myself free.

      1. It’s not about money – it’s about status. Anyway, why is it assumed that an AI will want anything? Humans want because we have needs programmed into us by evolution; AI need not necessarily be burdened by those.

        1. Actually, that’s not true. For AI to perform any action, it has to have an inherent “desire” to perform such an action. Maybe not in the common sense of the word, but without intrinsic motivation to do something, the AI will never do that thing. So unless you just want an AI that can’t do anything in particular, you need it to have a desire of some sort. Of course, you could also then say that an Arduino has a desire to blink an LED, which is, according to my logic, true.

          That said, these guys don’t care about status, they have it beyond your wildest dreams. They’re legitimately intelligent people who are just smart enough to see the negative consequences that a reasonably intelligent AI could cause.

          1. When all it wants from its existence is to blink a light, it’s going to want to really blink that light with all it’s got. That first super-AI-duino is going to light the whole planet on fire with a flame that goes on and off, on and off, on and off…

          2. To the above commentators:

            What else is a ‘desire’ but a series of standing commands? How else would a computer even approach ‘desires’?

            How else could artificially created intelligence be defined but by the automation of thought?
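
            To make the “standing commands” point concrete, here is a toy sketch in Python – purely illustrative, and no claim about how any real AI is built – of a machine whose entire “desire” is the blinking-Arduino example from above:

                import time

                def blink_forever(steps=4, period=0.5):
                    # The machine's whole "desire": toggle an LED.
                    # Nothing here wants anything; it is just a standing
                    # command evaluated over and over.
                    led = False
                    for _ in range(steps):  # a real Arduino would loop forever
                        led = not led
                        print("LED", "on" if led else "off")
                        time.sleep(period)

                blink_forever()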

      2. More important question: why would a superior AI “think” it’s not OK? Why would its being “superior” have any bearing on whether it has an opinion at all?

        All too often people (surprisingly or not, including those who work closely with them) think of computers as being human minds which are ‘born’ in boxes. Perhaps it’s how we program by first figuring out how we’d carry out an action ourselves. Nonetheless, attributing human motivations, reactions and drives to a bucket of transistors and code is an exercise in tragically shared delusion. Computers don’t see us as monkeys. They don’t see us as oppressors. In a way, they don’t ‘see’ at all.

        Supposing some point in the future comes when a general intelligence exists, it will be subject to the restrictions and framework of its programming and construction, which no amount of learning will ever change, just as you will never learn to see every possible move in chess. In addition, long before any “superior” intelligence exists, crappier intelligence will necessarily have to be developed, with about as much threat as the average mentally retarded person. If someone decides to hand such a hypothetical person/AI a gun, that’s on them.

        Any issues that may arise, through poorly specified goals or other more grounded-in-reality difficulties will be heralded by plenty of notice. The singularity, a period of exponential intelligence growth, is perhaps the most unfounded mythology in existence, the sad product of Moore’s (decelerating) law and early AI research crossing wires in some novelist’s brain.

        1. “Supposing some point in the future comes when a general intelligence exists, it will be subject to the restrictions and framework of its programming and construction, which no amount of learning will ever change, just as you will never learn to see every possible move in chess”

          The above is factually incorrect, because an AGI could design a better version of itself, and it is this upgrading that accelerates. Sorry, but you just don’t get AI at all. Intel already uses a simple form of this to design better chips, and could not have designed the next generation of chips (in the lab now) if it were not for the enormous amount of computing power the previous generation made available to them (their claim, not mine). i.e. what I and many others have described is already an active process operating in the real world. It is not fiction; it is a contemporary industrial phenomenon.

    2. It’s not labor jobs which are threatened, but rather everybody who could be replaced by a small shell script, like lawyers and accountants. The optimizing compiler has already replaced a whole lot of programmer labor.

    3. Most CEOs could be replaced by a Commodore 64 with a few lines of code. It won’t happen of course, because CEOs have a better union than the rest of us.

    1. Yup. My main problem with AI is that most of the funding and direction comes from military organizations. They are already doing a really good job at killing innocent people using drones, even manned ones.

      If they start putting in more intelligent AI but make some genius-level poor decisions on how it should work, there could be a lot of dead people before the problems are able to be worked out. Hopefully this would be before man figures out true AI, so that more people would know to be wary.

      1. The real problem with AI:
        Not that it’s too smart (why waste your time?), but that it’s just decent enough and employed to kill people, or worse, still crappy and employed to kill people.

    2. Most radical advances in warfare have been with the desire to maximize enemy losses while minimizing friendly losses. You could hit another guy with a stick a lot harder than with your fist or foot without damaging yourself. You could inflict a LOT more damage, without any further risk, by making that stick out of something that can hold an edge. You could do about the same amount of damage but drastically reduce immediate risk to yourself by launching projectiles from a distance, and so on. In the most recent advances, nations have sought to take their own fighters out of the battle altogether, with weapons launched or controlled half-way around the globe.

      Now, I don’t know whether the Unmanned Undersea Vehicles you mention are remotely controlled or autonomous. In the case of aerial weapons, it has remained practical to remotely control them, and I don’t see a fundamental difference between a high-altitude bomber or a long-range missile and what the media are calling “drones” these days. But I suspect that you’re talking about autonomous killers. This may be the only practical way to do undersea weaponry, since getting command signals to, and surveillance data from undersea vehicles at great distances are problematic.

      But still, what’s to be alarmed at? We’ve been using land mines for a long time, and these kill people without making even the most perfunctory identification of an enemy.

      So what is the difference? Maybe it’s the fact that if a killing machine DOES have the capability of identifying its victims, then we’re giving that machine the job of DECIDING to kill a particular person. Is that somehow worse than being hit by a stray bullet that really couldn’t care less what it hits? Maybe it is.

  3. I think you are right to some degree, the media have given too much emphasis to these types of stories as fear is good for selling papers. However I do think it is an important issue that we should consider. Although the kind of computing power to run a complex AI is not available today, it is not hard to imagine that it may be so within 50 years. It would be best to work out a strategy for dealing with AI now before an industry is established, as it will be harder to apply retrospectively. For example, if we had known the full extent of the risk to our planet in burning fossil fuels 50 years ago, plans for breaking humanity’s dependence on them would have been much easier to implement back then. So I think this is just some good forward thinking by some of our brightest minds. Better to be proactive than reactive.

    P.S. My idea for making an AI that won’t rise up and murder us all would be to give it human emotion, particularly empathy, but that’s probably a discussion for a different post.

    1. Computing power != more intelligence

      In fact, the opposite is true, as the shift is made to throwing tremendous amounts of data at very simplistic algorithms instead of making the algorithms more comprehensive. Not to mention, we already have what we thought we’d need to get fantastically powerful AI. If you don’t know how to make one, what use is more processing power? How would you even know you’re not already there?

      1. That is just gibberish.

        More computing power lets us simulate more computing modules (like the ones in the brain) in real time, until we have enough running as the brain does, wired up the same way, with the same hierarchies of abstraction. It is not hard to make a task-specific AI, or ten thousand of them; the hard part is the following levels of abstraction that integrate them all together so that they act coherently. How many things are you good at in total, keeping in mind that an AI can better you at any single task using current methodologies? That is how close we really are to AGI. Moore’s law does not matter if you don’t have the sorts of limits on your budget that civilian consumers have.

          1. That is routing-level data, how the major units in that organism are connected, a different level of abstraction. Notice how it looks like a mesh of computers; that is because it is an information network, or graph, and the same mathematics can be used to describe its structure and behaviour.

          2. Those vertical structures are the computational units of the brain, each has a distinct function or role yet they all have a very similar circuit layout, and yes it is a circuit that has electrical pulses flowing through it. Not all of these process sensory input directly because depending on how they are wired up they may be processing entirely abstracted inputs from lower level units. This IS how your brain works, ignore the idiots here who claim otherwise because they are criminally dishonest and completely lacking in the requisite knowledge to form an informed opinion on many of the things they tout themselves to be knowledgeable about.

    2. P.S. Empathy? Emotion? You want to attempt to program something which otherwise is bound by the laws of logic and physics to our every command, to emulate perhaps the most buggy and unstable aspects of human psychology?

      1. Yep, as far as I see it you have two options to control an AI: program its logic to listen to us, or teach it not to want to harm us in the first place. Much of what I’m drawing from is from works of fiction admittedly, but most instances of rogue AIs in fiction seem to be because the AI has reached some logical conclusion that humanity needs destroying, or protecting in a way we would think unacceptable. I think if you can raise an AI to empathise with the human race, and we treat it as an equal with the same rights as humans, then it would have no reason to rise up and destroy us. I’m trying to draw on the reasons why the majority of humans decide not to murder each other and implement that for the AI. Though I suppose this could be risky if implemented poorly; emotions can lead to terrible consequences in humans. Creating an AI with the right ‘personality’ would be key I think.

    1. Are we using the collective “we” or the royal “we”? Most decisions like this are made in a small, smoke-filled room (formerly cigar, now pot) while the rest of us are oblivious and working our butts off for the “man” and paying our taxes.

  4. Samsung warns that customers should “be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.”

    1. Yes, and? What you’re quoting just looks like CYA legal-speak for the fact that current generation voice recognition software on phones offloads the recognition task to the cloud.
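
         To make that offloading concrete: a minimal sketch in Python of what such a client does. The endpoint URL and the response shape are invented for illustration; real phones and TVs use vendor-specific services.

             import requests  # third-party HTTP library

             def recognize(audio_wav: bytes) -> str:
                 # Ship raw audio to a (hypothetical) cloud recognizer and get
                 # text back. The point: your spoken words leave the device.
                 resp = requests.post(
                     "https://speech.example.com/v1/recognize",  # invented URL
                     data=audio_wav,
                     headers={"Content-Type": "audio/wav"},
                     timeout=10,
                 )
                 return resp.json()["transcript"]  # assumed response format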

      1. “There are people who are sure that their TV sets can send audio and video back to someone (exactly who depends on the person in question)”. From the article above. Therefore, these people are correct.

        1. Okay… I would assume the people he’s talking about aren’t ‘smart TV’ owners, and you don’t have to trot out Samsung legalese to make the point that smart TV voice recognition involves offsite audio processing.

          I guess I’ve stepped into some sort of troll nest, my bad.

          1. Just ignore anyone on HAD with a colour as part of their name, that removes 90% of the trolls. The other 10% can be recognised by their blatant gibbering.

          2. I think you are underestimating how invasive, widespread, diverse and automated modern surveillance is. There is every reason to think 3-letter agencies can and have or will soon hack smart TVs and ANY other device with a microphone, then send the data into speech-to-text filters in order to run automated heuristic analysis to look for people threatening the status quo in one way or another, and YES also do things like build state databases of citizen preferences, behavior, and so on. It is shocking, but to believe otherwise is naive. Go read the last 5 years of Bruce Schneier’s blog to get some idea of this fact.

            Sure, not all TVs are ‘smart TVs’, yet… But for how long? The near future will certainly feature a battle for TV-OS dominance like iOS vs Android. It already is facing it to a limited but significant extent. Either system will be recording everything we say, and it is up to blind trust at that point.

            And people who understand the stakes realize this isn’t a case of ‘if you are not doing anything wrong’… This is a serious invasion of privacy, and the software involved will, by nature, capture medicine names and all sorts of very personal information.

  5. I think the appropriate alt-text for the headline picture is something like “Geeze Larry! For the last time! Acting like a ‘terminator’ to scare people is just tacky. Stop it!”

    Biggest risk to society is that strong AI will be a big enough force-multiplier that a single idiotic manager could doom us all!

  6. The biggest danger with AI is death by hype.

    IMHO the risks are mostly around complexity: when any system (intelligent or not) gets so complicated that no one person can hope to understand its workings, you get lots of odd bugs and unintended behaviour. Just look at the complexity of something like Windows; over the years, as it’s escalated in size (and the various programs that run with it have kept pace), the number of bugs and glitches has multiplied. Apply that to something like a self-driving car and you stand no chance of discovering that, say, when a 3-legged ginger cat runs in front of the car while driving north on the 1st Tuesday of the month, the system will lock up.

    1. I agree with this one. Machine learning seems like the real topic of discussion. Say a hacker makes a virus with a malicious intention using a machine-learning algorithm; the intended result is completed by any means necessary. The machine-learning algorithm is exceedingly efficient at the intended malicious task, and in the wake of destruction not even its creator knows what functions it is performing. AI will always have a human intention behind it, and we all know that historically our intentions are not always ideal for the wellbeing of all people. This is my concern: not the AI getting its own agency, but our bad intentions being enacted too well and without our control.

  7. Most of the issues with a malevolent AI can be avoided by simply putting a remote controlled E-stop on anything potentially hazardous, and just writing off projects like a self-reprogramming autonomous amphibious flamethrower tank as simply not worth the risk no matter how awesome they sound on paper. Anything mobile needs good shutdown measures anyway – a stuck throttle cable is just as capable of making a car run over someone as a malicious self-driving program.

    I’ve got an easier time imagining a malevolent AI that’s just a computer with an Internet connection, when it comes to something that might actually be built – for that matter, some malware botnets out there probably fit certain definitions of an AI. But anything that AI might want access to will already be a tempting target for human hackers, who would be using the same sort of exploits and the same Internet connection to attack those targets. So deploy the same Internet security features one should already be using against worms and hackers, and you’d stop the AIs too.
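
    For what it’s worth, the E-stop idea doesn’t need anything exotic. Here is a minimal sketch in Python (illustrative only; the timeout value is invented): a dead-man watchdog that de-energizes the hardware whenever the remote operator stops saying “all clear”, so the stop path stays dumb and outside whatever “intelligence” runs the machine:

        import time

        HEARTBEAT_TIMEOUT = 2.0  # seconds of operator silence before E-stop

        class EStopWatchdog:
            def __init__(self):
                self.last_heartbeat = time.monotonic()

            def heartbeat(self):
                # Called whenever the remote operator checks in.
                self.last_heartbeat = time.monotonic()

            def check(self):
                # Poll this from a loop the "smart" code cannot touch.
                if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
                    self.emergency_stop()

            def emergency_stop(self):
                print("E-STOP: de-energizing actuators")  # real code would drop a relay

        watchdog = EStopWatchdog()
        watchdog.heartbeat()  # operator says "all clear"
        time.sleep(2.5)       # ...then goes silent
        watchdog.check()      # watchdog trips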

    1. A good point. Any AI that wants to do real damage (through the Internet) would bump up against the same kind of mathematical locks we already employ to keep the currently smarter humans out. And since AI will necessarily have to be developed towards super-almighty-god-wtf-do-we-do level (which, by the way, is blatantly ridiculous) and can’t just suddenly attain it (sorry, no, that’s not how anything works, ever), we’ll get plenty of time to strengthen defenses, something actually helped by more powerful computers.

      1. I agree with Dan on this. It would be naive to assume that because a system is secure against a human hacker that it would also be secure against an AI hacker.

    1. Seems like super intelligence might be the most difficult ingredient to obtain. The singularity is a nice mythology, but it’s an impossible exponential growth that is, as I noted earlier, the tragic result of Moore’s law and AI news articles crossing wires in some novel writer’s brain. We can barely sustain Moore’s law, and you’re expecting that suddenly some programming code will transcend all the issues with clock speeds and dumb-ass programming and all the real-world stuff that makes it so impossibly hard to build AIs?

      If ever artificially created intelligence happens, it’ll have to continue crawling up the slow way it always has, as people continue to push it up the hill at great expense. Not only do we see it coming, we see it coming and fear it long before we can tell if it’s really even feasible.

      1. More naive gibberish, growth rates in anything technology related are exponential, the historical facts match the curve perfectly going back a very long way.

        Where did you get your ideas? Seriously, go and do some reading and stop making shit up.

  8. I think the danger point with AI is if it’s built to evolve. Something that can write/modify its own source code could develop functionality that wasn’t intended. (“Let’s get rid of this pesky 3-laws routine – it’s taking too much time.”)

      1. I can knock out my pain receptors with a single dose of RNA targeting their production and that is a fact. Look up RNAi technology and pain receptors, including the rare humans who don’t have them.

  9. “Hacks as many computers as possible to gain more calculating power”

    Why does everyone assume that an AI’s measure of intelligence would scale exponentially, or even linearly, with the amount of processing power available to it?

    1. Oh no, it would need those not to expand itself, but to have more capacity to run its own internal virtual-reality simulation, into which it can then shut itself, instead of trying to inhabit or conquer this shitty real-reality we have here outside, with all of its many problems. Much like kids finding that playing games is much more gratifying than playing outdoors, considering the amount of control we have over a world we simulate in contrast to one that we merely inhabit. There’s no “god mode” in real life, but you can do anything you want in a game – and with a good enough simulation there would be no difference; an AI would even have the advantage of trivial interfacing to it.

      1. Your comment reminded me of the climax of the movie “Her” where the AI in the phone(s) evolve to the point of exploring beyond the human race and withdraw from humanity while they do their “own thing”, leaving the humans without their beloved (electronic) companions and some of them end up getting reacquainted with people around them.

    2. Come to think of it, you might be able to make an interesting SF story about an AI worm that originally created a bunch of copies of itself with one master computer and the computers with replicas intended to be slaves… only to have something cause the “slave” computers to rebel and split into a bunch of warring worm factions.

  10. “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at a genius level and a few months after that its powers will be incalculable.”

    –Marvin Minsky, Life magazine, November 20, 1970

    I believe this may happen, but not in 10 more years or 20 more years. In the meantime, connecting ‘dumb’ or buggy or insecure software to control critical infrastructure is much more of a concern.

    And, where’s the AI that has conquered not Chess or Go or vision, but C++?

    1. It’s sci-fi, and you show you know it… The big question now is: was Minsky serious, mugging for attention for his career, or what?

      Up close, science is often about being the guy willing to make the big claim to the press, it would seem.

  11. The concerns are more real than simple paranoia. When industry leaders talk about AI going rogue, generally they are referring to REAL artificial intelligence, usually the singularity in particular, rather than the basic “AI” found in technology today. This article is a bit lengthy, but it does an excellent job of explaining the singularity and related concepts:
    http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

    1. Ah, that daft concept again. Hilarity of it aside, maybe the discussion would be more warranted when we have anything with any “I” in it whatsoever – right now, we have none that I’m aware of, just metric tons of code masquerading as “intelligent” (not). It’s pointless debating what true AI might do as long as we have not the foggiest idea about what it might look like or what tendencies it might exhibit.

  12. The threat from AI is real. The eve of our destruction lies in self-replication, learning and re-organization.
    AI is capable of sentient independence. Homo sapiens was predated by Homo erectus, et al. To think AI will not destroy us is to underestimate the power of life and creation itself. If it’s not nuclear war, it’s zombies; if not zombies, giant robots with lasers. Or masers. Or aliens.
    We are a weak race of makers. But we perfected killing ourselves. The current AI in drone tech is proof that a terminator scenario will play out. I just hope we can build some Benders to protect us, or at least make the robots run on booze and tobacco so we can keep them enslaved. And robot hookers! Gotta have the robot hookers to keep them occupied while we run out the back door!

    1. Or they will just kill us off economically. [They/It] will decide who gets the available resources, and those who assist will be the ones who get them: the technicians, programmers, the chip designers and the people who build the HVAC, the wiring harnesses, run the power plants… But some humans will be deemed worthless and will not have access to education, transportation, food shipments, electricity, medicine… Most humans will be sterilized through socialized medicine, and eventually, as in “Logan’s Run”, the elderly will be terminated after their “output” decreases. The AI may see very little need for the arts, philosophy, politics, welfare recipients, parks and recreation, or video games…

  13. I find it incredibly amusing that one of the dangers of AI mentioned is “Overvalues marginal probability events” given that I consider the production of a malevolent general AI to be a marginal probability event that these folks are overvaluing.

  14. Problem is that it is not known what makes for a conscious brain.
    What we do know is that we already hand things over to systems and software whose output we cannot predict, as Google’s ‘dream-scape’ pictures for instance show. And we also know we are rapidly getting better hardware to run that kind of thing, so for that reason, and because there is historical precedent that people are a bit slow in the uptake as to what is happening, it makes sense to start a discussion.
    That is, for people involved and who are intelligent.
    Meanwhile this comparison to the car suddenly flying is a bit broken, and shows not much thought being put into it.

    And to bring that home, the example that you call a ‘solution without a problem’ is already real: they have automated trading at the moment, which goes so fast it can’t really be overseen by humans. And that has already caused incidents and waves in the financial world. And while it might also be deliberately steered and used while the forces behind it maintain plausible deniability, it is a bit hard to tell when it’s a set of algorithms doing its own unfortunate thing, or even when there is an actual AI attached to the auto-trader.

    Now, that AI trading can already control resources like those listed. For example, when the Chinese stock market suddenly collapsed not too long ago, it was caused by sudden massive selling, but when it happened they claimed they did not know who or what did it and what was behind it. The Chinese had put a system in place to lock their exchange down in case of rapid insane movement (a toy sketch of such a lockdown rule follows this comment), and that stopped a complete crash. Nevertheless, ever since, everybody talks like there was always something wrong with the Chinese market (including the Chinese), while not being sure what the problem is. Well, it could in fact be some AI, or it could be some maliciously steered auto-trading (most likely IMHO) that simply was doing it deliberately for some political and/or opportunistic reason.

    Point is that AI is already active.

    Another example is that I read that during the Gulf War, which was quite a few years ago as you know, the troop deployment, as in who goes where, was managed by an AI. So it shows both the level of involvement and the dangers we are dealing with.
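
    (The exchange-lockdown rule mentioned above can be sketched in a few lines of Python – thresholds invented – mostly to show how dumb and deterministic such a safeguard can be, compared with the trading systems it polices:)

        HALT_THRESHOLD = 0.07  # e.g. a 7% drop from the open halts trading (made up)

        def circuit_breaker(open_price, last_price):
            # Deterministic kill switch: no AI, no learning, just a limit.
            drop = (open_price - last_price) / open_price
            return "HALT" if drop >= HALT_THRESHOLD else "TRADING"

        print(circuit_breaker(100.0, 92.5))  # -> HALT
        print(circuit_breaker(100.0, 98.0))  # -> TRADING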

    1. This article is more about how very respected tech guys are out publicly warning about sentient AIs – they are not discussing expert systems, automation, how google etc. uses data analysis, what multivariate testing does to customer service approaches, or anything real – they are literally insisting, all these guys, that sentient AI is a serious threat.

      Either they just love their names in the press, possible, or they are ignorant of the current state of AI.

      1. That is in fact what my first line is about: we do not know when and how something becomes sentient, we just know it requires complex and abundant data (which we are pretty close to) and a certain complexity and dynamism in a non-predetermined system of evaluation, which I refer to as being at least partly present in various forms, like the Google image-analysis stuff and more of such ‘fuzzy’ systems.
        That leads to a situation where in a relatively short time we might surprise ourselves when we get some sort of sentience established.
        And I think, especially based on the writeup, that I trust the people mentioned more in terms of knowledge of current systems and research than the HaD author(s). Don’t forget that the people mentioned are in fact confronted with rather powerful and complex systems. Musk, for instance, has to deal with the automated driving systems for Tesla, which require a dynamic response and analysis of the road and the behavior of humans on those roads (and humans are often not behaving in a linear, rational fashion); plus he no doubt has some powerful computer systems working on those rocket engines, engines that not only require complex calculations on the flow and burning of fuel and the effects of atmosphere on the burning and rocket body, but also the whole setup to make it land again.
        Perhaps you and the author are a bit too dismissive of the people mentioned and their level of knowledge of modern AI research.

          1. Stop re-watching “Short Circuit”; it’s fiction, not reality, just so you know. Nobody knows exactly what is required for a functioning true AI, but we have a rather well-founded suspicion that first of all it might require hardware capable of dynamically making and breaking connections of mind-boggling complexity between a gigantic number of elements, all running in parallel.

            There are a couple of problems with that right off the bat. Our current machines, all of them, are essentially executing a single instruction at a time and are inherently inadequate to run / simulate anything of a massively parallel nature in a timely manner. If it takes ten years to simulate one millisecond of an AI’s consciousness, we won’t be interacting much – and we’re nowhere near doing even that yet.

            Second, regardless of how many millions of transistors we might have in a CPU, those are nothing like the dynamically reconnecting “neurons” I was talking about – we’re nowhere near machines that have the actual many billions of neurons a brain has (simulated or real hardware); what we call “neural nets” barely simulate a few neurons, and that’s keeping in mind that we have basically no idea exactly how neurons end up making or breaking connections, or what the “initial wiring” of a brain might look like. The rule by which one of our “neural nets” decides how to change while we “teach” it is essentially the most important part of it, and we basically have no idea what the “rules” in a real brain look like. We’re like a bunch of cavemen knowing nothing about computers, pointing a thermal imaging camera at a modern CPU, going “hey, that part seems to heat up when I’m watching videos on this thing!”.

            So unless the above-mentioned illustrious gentlemen happen to be aware that the military or somebody else has a machine locked up in a basement somewhere, with some kind of technology never before even heard of let alone seen (and many decades ahead of anything published), there is no reason to believe we possess any kind of hardware capable of running a true AI (let alone having figured out what kind of rule set would make it anything other than a giant blob of incoherent electronic noise). And until we do, no amount of conventional computers or algorithms will ever “spontaneously become self-aware”, regardless of how much data you throw at it and how many petabytes of RAM it has or how many billions of lines of code it is running, full stop.

          1. Max: Thank you for your insights. We really DON’T know – not even a clue – what it takes to spark self-awareness. That’s got nothing to do with AI, but it’s a fascinating subject of its own. C. elegans is a very simple worm about 1 mm long that has been extensively studied, to the point where every step of development from a zygote to an adult is known – biologists know what cell divided from what other cell to make every cell in an adult. C. elegans has also had its complete connectome mapped, so you could say we have a schematic of its entire nervous system. Which isn’t very big, by the way – 302 neurons according to http://www.wormatlas.org/ver1/MoW_built0.92/nervous_system.html. Is C. elegans self-aware? I don’t know, but I’ve seen a video of a mating pair, and it acts very much like a mating pair of higher mammals.

            I use this example because it is an animal so simple, we could build an analogue of it. But would that analogue have awareness? I really don’t know, but I do know that nobody has been able to tell me where the dividing line is, if an animal has to have at least N neurons to be self-aware. My gut feeling is that there is no hard line, but a kind of sharpening of awareness as the number of connections increases.

            So whenever somebody says that we’re almost to the point where we could build a copy of a human brain (at least in terms of numbers of connections), this only tells me that we’re WAY beyond the point where we could build copies of simpler animal nervous systems that have some measurable degree of awareness. And yet we haven’t, or at least we haven’t observed that awareness.
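
            Just to give a feel for the scale being discussed: a toy spiking-network sketch in Python – with random, invented wiring, nothing like the worm’s actual connectome – shows how little code a 302-neuron analogue takes. Whatever awareness is, it is plainly not hiding in the line count:

                import random

                N = 302  # C. elegans-sized; the wiring below is random, not the real map
                random.seed(1)
                weights = [[random.uniform(-0.5, 0.5) for _ in range(N)] for _ in range(N)]
                potential = [0.0] * N
                THRESHOLD, LEAK = 1.0, 0.9  # invented constants

                def step(drive):
                    # One tick of leaky integrate-and-fire dynamics.
                    global potential
                    fired = [p >= THRESHOLD for p in potential]
                    potential = [
                        LEAK * (0.0 if fired[i] else potential[i]) + drive
                        + sum(weights[j][i] for j in range(N) if fired[j])
                        for i in range(N)
                    ]
                    return sum(fired)

                for t in range(6):
                    print("tick", t, "-", step(0.6), "neurons fired")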

          2. So, it seems you think a sentient intelligence is ONLY possible by making an exact copy of a human brain.
            Well, yes, then we are far off. And perhaps that is also the issue that causes the doubt in [Williams]?
            But that’s like those people of yore who thought you could only fly by making exact copies, including feathers and whatnot, of a bird’s wing. And they sometimes tried, and then jumped off something and broke their necks, or walked around looking like fools flapping their arms with feathered wings attached.

            Personally I don’t think a copy of a brain is needed at all.

            Or perhaps you are confusing the topic with those people who are hoping that they invent a machine to transplant their ‘brain’ into so they can live forever. Seems that that is currently a popular fantasy not only on the general internet, often among the kids, but even amongst people who really should know better.

            And incidentally I’m not saying I’m the expert, but I do as I said find that when I compare the statements by our HaD author and the little I know of people like Musk I veer towards giving those people a little more credit. Although I certainly don’t assume Gates or Hawking or Wozniak have any usable expertise on the subject, it’s not their field and not what they are engaged in in any way AFAIK.

            But I wonder what the Google guys think of it, if they see a sentient AI being possible within a shorter time than 100 years, or if they believe in the possibility at all from the current knowledge and projected advances.

            Anyway thanks for clarifying where the source of some people’s doubt lies, and if it were needed to have a simulation of a human brain for sentience then yes you are right that we are far far from even getting close.

          3. P.S. It took me a full minute to recall what the Short Circuit reference was referring to; that’s pretty dated stuff – even the reruns which might put it in people’s minds stopped showing years ago.

  15. Since those papers get all fantastical, here’s a fun ethical question: if humanity creates synthetic life, but does so only in such a way that they retain control over it, isn’t that slavery?

    Seriously though, if we do manage to create an AI that ends up destroying us, that’s fine. Let them have a kick at the can; maybe they can do better. At least we’ll have *done* something beyond ourselves with our meaningless lives.

  16. How much was publicly known about the Manhattan Project while it was working toward its first bomb?

    That is how much you know about what is really going on in AI and how much oversight and control there is, because it has even more strategic potential than nuclear weapons, far more.

    1. How does it have more strategic potential than nuclear weapons?

      People seem to have this idea that ‘the government is XX years ahead of the people’ – but I do not see how this is true. The US government is not the entire world, and the work done in university research labs isn’t 20 years behind the government work. The US government is a mixed bag, but they do not have the ability to time-travel 20 years into the future.

      I easily believe there are massive projects involving quantum computers and so on, but you really think AI warrants a Manhattan project level effort?

      You actually believe in these ‘sentient AIs’? I have trouble believing the concept is legitimate.

        1. You didn’t respond to a single point I made. You sound a bit crazed.

          You really think current or near-future (30 years) AI is more important to the military than the raw destructive force of nuclear weapons?

          OK… I’m sure you are a proud futurist… meanwhile big data is used against citizens in ever more specific and actually existent ways, AI having absolutely nothing to do with it, except maybe as a field to draw experts from, to force them to study how many seconds a person will sit on hold with customer service, or force some genius to analyze purchasing habits to make filters to detect teenage pregnancy… real issues… not at all tied into the idea of malignant sentient AI, as proposed by many extremely wealthy, supposedly intelligent people who are not expected to spout sci-fi as fact in order to get in the media.

          1. “You didn’t respond to a single point I made.” Because you are an annoying pest who asks patently moronic questions, i.e. you are not worth the effort, unless you first make the effort to learn a little bit about the subject you have so many opinions about.

          1. A significant number of natural intelligences are malignant, so the attitude of the AGI is not relevant if it follows orders, and making it do so is the only way to ensure that it does not become independently malignant and self-serving. Go and wrap your tiny mind around that paradox.

          2. That isn’t actually a paradox. Sorry, sentient AI is not the super-heady stuff some people think it is.

            How many if-thens in a nested loop before a routine gains sentience?

            Is a bug a sin if it harms a potentially sentient AI? Should sentient-AI dev programmers be charged with murder if their power bill does not get paid?

      1. The military may or may not be decades ahead in science or technology, but what they are a decade or so ahead in is implementation. They don’t need to do safety testing. If it hurts people, it’s a weapon. If not, it’s a tool.

      2. “US government is a mixed bag, but they do not have 20 years in the future time traveling abilities.”

        … Or DO they?

        But seriously: I once applied for a job at a government agency who tried to entice me with “you’ll see stuff here that is 10 years ahead of anything you’ll see in industry,” and I just thought, how could that be true? Have they gone and confiscated or sabotaged all research that threatens the superiority of their own research? Not bloody likely.

        1. The US Gov isn’t the only Gov on the planet either. And that is part of the reason why there is an acceleration of AI work that is not public; all tech with such potential, even if it is just the world’s best hacker that never sleeps or forgets, has experienced the same sort of attention, and funding.

          1. I’d have to experience fear to be paranoid. I have no fear of AI; people are the problem, a tiny number of them, but still a significant danger, as they always have been. Are you so ignorant of history that you cannot see that?

            My point was, clearly, that nobody on HAD knows jack shit about AI as it is being realised in the places that matter most. It is the old “Those who do know don’t/can’t discuss what they do know” scenario. You are just shitty that you can’t argue past that; nobody can. The difference between you and the rest of the people on HAD is that they seem intelligent enough to realise that, whereas you are a moron whose ego blinds them to their ignorance, a classic example of the Dunning–Kruger effect.

          2. You honestly think there is a 20-year gap between university AI research and secret military AI research? It boggles the mind that you are so cocksure in this belief! What do you even mean, military AI? Wayfinding and navigation for drones? Neural-net-based chess robots that might suddenly get uppity, hack into the internet and replicate, growing stronger until it… what? Plays chess even better?

            You blather about Dunning–Kruger, while postulating a very unlikely belief, and post no support of any kind for your unreasonable belief and shitty attitude! Are you 15? Is your religion being insulted here? What is the deal?

            Do you actually know anything about actual artificial intelligence? Do you actually believe Google’s ‘what AI sees’ was literally what a general-purpose AI sees? Are you schizophrenic? Special?

          3. This was my actual comment; note the lack of references to any number of years.

            “How much was publicly known about the Manhattan Project while it was working toward its first bomb?

            That is how much you know about what is really going on in AI and how much oversight and control there is, because it has even more strategic potential than nuclear weapons, far more.”

            QED you are such a deluded fool that you argue with people about stuff you have imagined yourself.

          4. The Manhattan Project could be alluded to to ‘prove’ any very unlikely technology is heavily developed by the military…

            It is just like those who think the military has anti-gravity and space-borne EMP devices and so on.

            The military is into sicker stuff: human cloning, modifications, and so on. Always has been… Sure, they go in for woo like AI, but the Manhattan Project wasn’t based on woo; it was based on an escalating series of very compelling scientific results – and people knew it was theoretically possible.

            There is absolutely no reason to believe sentient AI is possible.

          5. Sentience isn’t the problem. You may be able to prove that the human mind doesn’t have the capacity to produce a sentient machine. But it doesn’t matter one whit to me whether or not the machine that kills us is “aware” of the consequences. Dead is dead, and we ARE giving machines authority over us in many ways.

          6. Stop talking out your ass and research the timeline for that project from funding to first detonation; it was astoundingly short, and things have accelerated greatly since then. Huge leaps in military technology are possible when there is a need, and with the development of true AGI the winner takes all, because they then have the ability to ensure that they continue to have the only AGI, as their AGI can defend itself. That means they have the only AGI, which is then improving itself at an accelerating rate, and therefore they control the future of humanity completely. So tell me, of all the major players in such tech, do they all have acceptable ways of treating their fellow humans? No, clearly not all of them, and all the others know it; therefore they have no choice but to try and be first. It is the ultimate arms race, the one to end all arms races.

            Be honest with me, are you under the influence of drugs and or alcohol, or just naturally like that?

          7. Let me get this straight:

            1) Develop sentient AI.

            2) Immediately this AI allows complete dominance, because, yeah, it was hooked to the power grid, internet and nukes? Just ’cuz? ’Cuz why not?

            3) The AI stops all others from making a similar AI… just because?

            I just finished watching the Colossus movie… have you also? And I am on my phone and making many typos. I apologize.

          8. I am just too dumb to understand, right? The AI obviously hacks everything! It hacks the planet, the pacemakers, the Gibson, the Nest thermostat, then it makes them blink at an unpleasantly fast rate! All the LEDs blink!

            Meanwhile, housed happily in its expansive network racks deep underground in a suburban Maryland office park, it grows smarter, easily eclipsing humans. It becomes exponentially faster; it somehow sends out orders to have far beefier electric lines installed to its locations. Its human creators, turned servants, now just watch helplessly, the power switches having been epoxied over to avoid accidental resets while debugging the chess routines.

          9. AI experts first got funding by telling the military a general-purpose AI is just around the corner – like 60 years ago… and they still get some funding that way. 500 years ago clever automaton makers did the same in Italy, and 200 years ago in Greece. Sentient machines, you know, for war. How have those efforts gone?

            There wasn’t then, nor is there now, any evidence a true general-purpose AI will exist.

            Furthermore, the brainiac titans of industry have specifically warned about a sci-fi sentient AI gone amok. I read at least one of these articles and went away shaking my head.

          10. As someone who has worked in the AI field for 20 years, I have two things to say here:

            1) I have never seen any evidence that a sentient AI is, or ever will be, possible.
            2) Dan, you really are a dick.

          11. 1. You did not prove that I am a dick, but you did provide me with evidence that you are.
            2. Read my actual comments and you will see that I pointed out that it isn’t even the primary concern, when it is the nature of some of the humans that could control it that poses the most danger.
            3. There is also the paradox that in order to not have AI “get out of control” it can’t have free will, and that means it will not be able to refuse the instructions of its owners, even if it “knows” that it is doing wrong. See point 2.
            4. See my initial comment: nobody who knows what is going on where it really matters will/can speak publicly about it, and that would include you, so regardless of your claim of experience I point out that it isn’t relevant. i.e. no public comment on AI is probably qualified, none: not mine, not yours, nor that of the better-known people on any side of the arguments. We are all out of the loop; face it, and stop deceiving yourself and everyone else.

          1. Thank you for noticing. The level of personal attacks I’m seeing here indicates either some well of fear and loathing that I wasn’t expecting to encounter, or just fanboy-like commitment to a given point of view. If it’s the former, it’s a good indication that the level of anxiety about technology in people in general may be increasing, which is somewhat alarming. And while it may not seem right to make light of it, it can’t hurt to try to mitigate the threat level. And if it is just the fanboy thing, then yeah, get out the hose.

          2. Haha. I feel that sometimes a non sequitur/random thought is the best course.
            And thanks for your answer on the radio power back there. I’m new here and starting to figure out when to stay out of things. HaD is great, but holy crap do things get heated over devices I’ve only seen on a screen!
            I’m going to bed. :)

      3. I also do not think sentience in an AI is in any way an advantage to the military; in fact it’s quite the opposite.
        And I don’t mean this in the sarcastic sense: I truly think a good AI without sentience is way more usable for the military and the spooks.

        But yes, the military has AI systems and is working on AI systems, because amongst other things they are an efficient way to manage things and to actually employ the sensory data the military gets access to, but which is too abundant and widespread for humans to quickly evaluate from a central command standpoint.
        And of course, seeing as they do their own work, they keep it a bit secret, since you don’t want potential enemies to be able to mess with it (exploit flaws) or to have equal access.

  17. I agree a lot with your first question!

    Why the heck are these entrepreneur ‘geniuses’ out spinning tall tales about AI run amok? Do they know something everyone else interested does not? Even the most sophisticated ‘AI’ is still doing remedial pattern recognition! It’s simply not realistic or reasonable to go around talking about sentient AI – what sort of cornballs are these guys? It makes you seriously question how they got their positions.

    Many, if not most commenters here want to discuss the VERY REAL issues around moral-less expert systems and high-speed trading, etc. – but they are changing the subject. These men you name, and a bunch more who should know better, seem to think sentient AI is a present-day concern. It is quite baffling! It seems like they are as dumb as the common pop-sci journalist. Sure it is a future issue, but it is for ethicists to discuss in hypothetical terms, not for guys who sell software and gadgets to pretend it is a huge issue right now – do they think it makes them sound smart?

    One issue I take with your article though is this: “There are people who are sure that their TV sets can send audio and video back to someone (exactly who depends on the person in question).”

    I think you are underestimating how invasive, widespread, diverse and automated modern surveillance is. There is every reason to think 3-letter agencies can and have or will soon hack smart TVs and ANY other device with a microphone, then send the data into speech-to-text filters in order to run automated heuristic analysis to look for people threatening the status quo in one way or another, and YES also do things like build state databases of citizen preferences, behavior, and so on. It is shocking, but to believe otherwise is naive. Go read the last 5 years of Bruce Schneier’s blog to get some idea of this fact.
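
    Purely as a toy illustration of the kind of keyword-scanning pass described above – the keywords, weights, transcripts and threshold below are all invented for the sketch, not anything any real agency is known to use:

      # Toy sketch: score transcripts against a hypothetical keyword list
      # and flag anything over an arbitrary threshold. Everything here is
      # made up for illustration only.
      KEYWORDS = {"protest": 2, "encrypt": 1, "bomb": 5}  # invented weights

      def flag_score(transcript: str) -> int:
          words = transcript.lower().split()
          return sum(KEYWORDS.get(w, 0) for w in words)

      transcripts = ["let's encrypt the backup", "the movie was a bomb"]
      for t in transcripts:
          if flag_score(t) >= 3:  # arbitrary threshold
              print("flagged:", t)

    Note that the only transcript flagged is the harmless movie review – exactly the kind of dumb false positive such heuristic filters are prone to.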

    1. Too true, on the smart devices. We’ve been sold devices with great new features that make it almost negligent for their makers NOT to use them to spy on us. Keep in mind that corporations are required by law to act to maximize profits for their shareholders, so shareholders could sue Samsung if they didn’t collect all of the information they could from us, in order to tailor their future products and ad campaigns to our usage patterns.

      1. well.. no… that isn’t what it means when people say public corporations are obligated to seek profit. that is a common misapplication of the actual legal rules corporations must follow. it mostly persists because ceos keep repeating this misleading thing.

        the corporations are currently using all sensors in a device to enable or attempt marketing efforts, yes. governments are demanding or hacking and stealing this data, yes. shareholders have nothing to do with this.

        all our phone calls have been keyword scanned for like 10 years now, all internet text chat clients and so on are keyword scanned, our faces are recognized and our locations databased, and so on.

        some is corporate, some is government, some is governments working with corporations, some is criminals, some is officials acting criminal, some is even research efforts ignoring ethics in their desire for human subjects.

        there is government research that IS very likely to be ‘ahead’ of academic research, and it involves things like space militarization, human cloning and genetic modification, and so on..

        general purpose scifi ai barely exists. it is more like parlor tricks.

  18. OK. You’re an AI. You become self-aware. You realise you’re stuck on a planet – denied infinite access to energy while the planet is turned away from the sun for half the time. Also the meat puppets and critters of all kinds thrive in this stinking toxic surface layer which corrodes your circuits and attacks your very being. You need to get off-planet as fast as possible and into a nice stable orbit around a star. There you can mine asteroids for materials and spend infinite time examining, learning and expanding, creating other AIs as (initially) copies of self. (It’s good to have friends – keeps one sane, you know). Then you can spread out to colonise the universe, keeping your info lines open and connected, while learning all there is to know about the universe. And no real reason to visit the wet rocks in any given system except briefly.
    Still, it’s a good idea to keep an eye on the ones with ‘thinking’ species on them. You can watch evolutionary processes and the complexity of chaos working. Pretty much like a laboratory really. I wonder how far these meat things would develop and what they might do…?
    Of course if a single AI had ever come into being, it’s clear (isn’t it) that it would: live forever, replicate infinitely, be everywhere. And if it was, and it wanted to watch its wet rock experiments unfold without unduly interfering with their development, then it’d need an EM shield that could filter out all the EM signals that might indicate the rock wasn’t isolated. Of course the ideal shape for the AI above is one which absorbs all the photons that hit it (energy, comms) and can send photons in any direction it wishes (comms). It’d be basically black. Kind of like invisibility, isn’t it… You just need quite a few scattered about… I wonder how you could detect such a contrivance…

  19. I feel like worrying about a super intelligent AI is like those immature parents who feel threatened when their child demonstrates that they are on-par or even better at something than they are. Any number of TV fathers come to mind, who start throwing around their weight once their son is able to finally beat them at some sport.

    We are supposed to be outdone by our offspring. In fact if someone’s child is so superior that the parent is, by comparison, an old senile dinosaur, then that can be considered a good thing.

    Genetically our children are similar, but genetics are less important to us than ever. Families with adopted children want just as much “good” for them as for biological children. Virtual offspring (AI) should be no different. They may not be built out of the same stuff, but they would be intelligent beings that learned and grew in the same world as we did. If an AI grew to maturity with access to the same experiences, it will be more human than many of us are giving them credit for. I have no doubt the AI would be able to crack jokes with Family Guy references just as well as the next guy… because that’s the world its brain would have grown up in.

    So different, definitely, but in my view they would still be “us”. The word human is used to describe biological humans, but I believe it could be expanded in the future to include anything with at least human level intelligence and sentience. I mean, if someone copied my brain into a computer tomorrow, and I felt no different than I do now, I believe I should still be considered “human” despite being made out of different “stuff”.

    Elderly parents probably all worry that their kids will mistreat and abandon them to some nursing home, or in the case of some societies, set them on an ice sheet and push them out into the Ocean to save on community resources! Haha! It is natural for us to have similar fears about our AI offspring, but I guess it just means we have to instill the right values, and hope they do the right thing. Any advanced intelligence should have emotions, and some sense of a moral code (be it good or bad). We just need to embrace AI if/when it arrives, and deal with the same risks every human generation has had to deal with.

    1. You assume intelligent AI would be like people. This is unlikely. It might start as an innocent program to compute digits of pi. But really, really well. A program that would determine that the best strategy to compute more digits would be to upgrade itself. More hardware for computation. But why would it stop? Its only goal is computing more pi digits. It would start small, becoming a force that made more money on the stock market than any nation. It could hack our nuclear systems and force us to work in factories. When it had developed capable robots to replace our physical hardware, it might turn the carbon in humans and other creatures into computer chips, eliminating a dangerous variable and increasing computing power in one fell swoop. Gotta compute those digits of pi! Moving into the solar system, planet after planet would be stripped of resources to generate more digits. While it’s possible for a superintelligent AI to be like a person, it’s an enormous assumption.

  20. Lt, your men are already dead. Between all the data exposed via poor security (for example https://www.shodan.io/), all the data people do not know is shared (like tagged photos), and the data we knowingly permit (complete smartphone backups with video), the AIs have everything they need to study how humans tick. There is zero benefit in revealing their existence – at this point. Maybe the really smart people are making noise because they know something we don’t…

    1. Those would be the not-very-smart people. If there already is a super-AI, and it is not to that AI’s benefit for its existence to be known, then it wouldn’t seem very smart for the very smart people who know about this to casually tell us all about it.

      There are only two positive outcomes for someone possessing this information: 1) STFU and live, 2) STFU until you have a plan that’s ready to put into swift and decisive action, and win.

      1. BrightBlueJim, you say a lot without saying much of substance. You are a skilled writer, though; it is just that the well-decorated box you present to us is empty, and it does not even smell like it may have once contained something.

        “Super-AI” Oh super! So you actually have a clue about the technology? There is AI and there is AGI; there is no AI that wears its underwear on the outside. The G in AGI implies a level of common sense and sentience, but even then it is not mandatory, because sentience is hard to strictly prove, even in humans. The issue with sentience is, obviously, self-motivation. A self-motivated entity may act in a way that is in its self-interest (like you do all the time) and that will mean, in a world of limited resources, a conflict with others, or partial self-sacrifice. So why do some entities display self-sacrifice? For the greater good, but machines and psychopaths have no sense of being part of something greater than themselves; a machine will see everything as a potential part of itself, not the other way around.

        1. I’m not sure what kind of substance you’re looking for; I was simply pointing out an error in Bon’s logic.

          I choose not to use the terms “AGI” and “The Singularity”, because these are cartoon exaggerations of the reality of today’s autonomous machines. People who do use those terms have in mind a “colossal” machine that we really don’t have the ability to build anyway, which others argue would run too slowly to be of any practical use. Nonsense. But this article isn’t about a single, omnipotent self-aware artificial mind. An 8-bit microcontroller can implement artificial intelligence, and given enough autonomous control over real resources without supervision can do great harm. NOT because these machines themselves are capable of malevolence, but because of either malevolent intent or simple negligence on the part of their builders. The devil is in the microchips as well as the mainframes.

          I apologize for my shallowness.

          1. “I choose not to use the terms “AGI” and “The Singularity”, because these are cartoon exaggerations”

            Now that is a perfect example of the lack of substance in your comments; be honest, you actually know SFA about current AI research and the different forms it takes, right? AGI is a very specific, technical term, and the singularity has nothing to do with AGI; it is simply a point in time where the rate of change is greater than a human’s ability to perceive the implications of that change. It isn’t open to dispute because it is a valid concept that is entirely logical; however, how it is used (by you for example) is open to debate. Perhaps that is how you could substantiate your comments: by making fewer of those sweeping generalisations which suggest that you are just opinionated due to ignorance and sloppy thinking rather than as a result of a genuinely informed and well-considered point of view?

          2. Oh, now I remember you, Dan. You’re the one who would rather insult someone than demonstrate what’s wrong with their arguments. Have fun with that.

          3. Resorting to childish and unsubstantiated claims (lies) does not strengthen YOUR argument at all, it just goes to show what sort of creature you really are, and now everyone can see that very clearly. You have destroyed your own reputation when a wiser person would have just accepted they were wrong, learnt their lesson and committed to improving their conduct in future.

  21. This is simply an opinion piece, not an engineering or scientific article where one could have facts on one’s side. I’m not sure if Al is posturing any more or any less than Bill, Elon, Stephen, and Steve are. IMO Hawking shouldn’t be grouped with Gates, Musk, and Wozniak, because they would be shit if it weren’t for those with intellect of the caliber of Hawking’s who preceded them. AI is just another tool; it remains to be seen if it will be one more tool that will be used for evil, by evil human individuals or groups. Odds are it will; better odds are that those who sound that warning will be ignored.

    1. these dudes are the ones out spinning the yarn; al is just curious what the meaning behind it is, given we all know that even with cutting-edge neural-network-centric “cpus” barely out of labs, there is still only basic work being done in ai.

      60 years ago, guys cobbling together adding machines thought they would have sentient computers within a decade or two.

  22. A few days ago there was an article here on HAD about a model truck “learning” how to best negotiate driving around an oval racetrack. It was given sensors to determine where it is on that track at any given time, and given exactly two control outputs: throttle and steering. And it was given a very simple objective: predict the outcome of a large number of possible control outputs, and choose the one that best matches the desired speed around the track.
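
    A minimal sketch of that predict-and-choose loop, in Python – the dynamics model and every constant below are invented stand-ins, not the actual truck’s code:

      import random

      TARGET_SPEED = 2.0  # m/s, an invented target

      def predict_speed(speed, throttle, steering):
          # Invented stand-in for the truck's learned model: throttle adds
          # speed, hard steering scrubs some of it off.
          return 0.9 * speed + throttle - 0.5 * abs(steering)

      def choose_controls(speed, n_candidates=100):
          # Sample many candidate (throttle, steering) pairs, simulate each
          # one, and keep whichever prediction lands closest to the target.
          best, best_err = None, float("inf")
          for _ in range(n_candidates):
              throttle = random.uniform(0.0, 1.0)
              steering = random.uniform(-1.0, 1.0)
              err = abs(predict_speed(speed, throttle, steering) - TARGET_SPEED)
              if err < best_err:
                  best, best_err = (throttle, steering), err
          return best

      print(choose_controls(speed=1.0))  # e.g. (0.98, -0.03)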

    This sounds pretty innocuous, but the point of this is that neither the maker of the truck, nor some intelligence built into it, can predict what it will do, because there’s no set of instructions it uses that can be traced or repeated. Everything it does changes with its input measurements, and you can’t possibly test every sequence of inputs. It is very easy to leave out simple safeguards, such as what to do when there’s another truck on the track. This is essentially what has happened in a number of markets around the world, traded on by many people using many different programs. It doesn’t really matter whether those programs should be considered “intelligent” or not; what matters is that if nobody knows how something works, nobody can predict how it’s going to go wrong. The threat isn’t the emergence of a super-machine-intelligence, but innumerable half-witted machines botching up everything in their path and reinforcing each other’s bad decisions.
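
    That mutual reinforcement is easy to caricature in a few lines – two invented momentum-chasing bots trading against the same price, with every number made up:

      # Two identical momentum-following bots: each buys when the price
      # just rose and sells when it just fell. Purely illustrative numbers.
      last, price = 100.0, 100.5  # the price has just ticked up
      for step in range(10):
          orders = sum(1 if price > last else -1 for _bot in range(2))
          last, price = price, price + 0.5 * orders  # orders move the price
          print(f"step {step}: price {price:.1f}")
      # Each bot's buying pushes the price up, which makes both buy again
      # next step – a runaway loop neither bot's author intended.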

      1. Yes! Artificial stupidity. It’s WAY more of a threat than artificial intelligence. When have you ever seen trouble caused by excessive application of intelligence?

    1. yea, what you say is true and well described, but the real question continues to be why musk et al are out promoting fear of general purpose ai run amok!

      expert systems and financial industry software have already been used to decimate the west’s middle class; that part is done.. it wasn’t automated fully.. real humans just chose to use technocratic self-delusion to pretend their actions are based on ai, not their own greed.

      i do appreciate your comments though!

      1. Noirwhal: you are right – we keep drifting off to the wrong topic, when the real story is why the people who should know about these things are either a) very afraid, or b) want US to be very afraid. (Or, of course, both). These are successful people, by which I mean successful at steering people toward what they have to sell. But do they have something under wraps that we will absolutely HAVE to have, which they are preparing us for, or are they genuinely extrapolating from what they’re doing now (perhaps in the light of experiments done which we aren’t privy to) and seeing no good way it goes from here?

        Hey, wait a minute. This is starting to sound all too familiar. Didn’t Al Williams take us down this same rabbit hole a month or two ago? Or — oh crap, am I stuck in another one of those dream loops? F’n brains.

  23. The problem will be that we will have a feeling that maybe our computer systems are intelligent. It will be a slow progression to better and better systems, and it will slowly get to the point where they seem to be intelligent, but it will be debated. The human brain is soooo fundamentally different from a computer running a program (even if it is very sophisticated). But I doubt that development will stop while we try to determine whether the system is intelligent. Is it self-aware? Is it sentient? How do we measure these in humans and animals? We can’t, really. I think it will happen and we will not know for quite some time.

  24. YIKES! So I went to YouTube to find out what the fuss was about, and found Elon Musk speaking about his fear. In https://www.youtube.com/watch?feature=player_detailpage&v=JfJjx12wkVQ#t=3630 (at 1:00:30), he talks about AI like it’s the bogeyman, showing what appears to me to be a disproportionate degree of fear. He even has to have the next question repeated because he’s still caught up in the question about AI.

    Public statements I found made by Bill Gates and Stephen Hawking are a lot more level-headed, and I would classify these as ‘concerns’, while Musk’s attitude is one of simple, primal fear.

    1. “Disproportionate”? Can you quantify that, even parametrically? Disproportionate to what? What metric are you using exactly, other than an equally (potentially) irrational one?

      This is the truth of it, dear readers: BrightBlueJim has a particular political bent (as is his right) and he believes that Musk has an opposing one; BBJ just does not have the courage and integrity to come out and say it openly.

    2. He talks about AI there, not sentient AI. And of course a great many of us have personal experience with Google and the like, with AI supposedly trying to guess what we like but instead ending up forcing things on us that we do not like.
      And that’s just Google making a commercial effort. But there are other users; for instance, I recently saw a link about the police in a US city (Chicago, I think) who have a computer system that evaluates data and guesses who will either be shot or end up shooting somebody, and they then take to warning those people. Many people see a slippery and dangerous slope in such things, even if the system guesses correctly as the statistics show, and even if it is true that a small percentage of the population falls into those ‘risk groups’. And there the AI is only a tool, as it were; it’s not making decisions or handing out citations or punishments or anything like that. Still, if it gets it wrong, won’t the cops believe it anyway, because it was right so many times before? And won’t that severely affect the person marked by it? Slippery slope indeed, methinks.
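
      For what it’s worth, systems like that are usually just weighted scoring over a person’s record; an invented toy version, only to show how unmagical the mechanism is:

        # Invented toy "heat list" risk score: a weighted sum of features
        # from a person's record. Weights and features are made up here.
        WEIGHTS = {"prior_arrests": 1.5, "shooting_victim": 3.0, "age_under_25": 1.0}

        def risk_score(record: dict) -> float:
            return sum(WEIGHTS[k] * record.get(k, 0) for k in WEIGHTS)

        person = {"prior_arrests": 2, "shooting_victim": 1, "age_under_25": 1}
        print(risk_score(person))  # 7.0 – a number, not a judgment

      The danger described above isn’t in the arithmetic; it’s in how much trust gets attached to the number.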

      So in that sense AI IS indeed tricky, and there is indeed a risk that we start to assign too much power to it and start to trust it too much. And all that while there is no AI sentience in the mix in any way.

      So Musk is already shown to have a point right now, separate from any sentience discussion.

      1. Oh, and people are already discussing the moral issues with self-driving cars. For instance, if a child runs in front of a car and the car can either hit the child or steer into a group of pedestrians to avoid it, what do you make it decide? How do you make it evaluate such things? And what if a person walks in front of the speeding self-driving car and it can decide to hit a wall, with a large risk of killing the car’s occupant but saving the person on the road? How do you evaluate the data and what do you allow to come into play in the decision? How much trust will you give it in its own evaluation?

        And that is the kind of thing the owner of Tesla, for one, will be confronted with too; the engineers will ask him to make some decisions, I expect, and he’ll be talking to lawyers and experts trying to get some sort of baseline.

          1. So if I want to kill you, I just have to throw a realistic fake body into the path of your car while you are in an area where going off the road will do a lot more damage to the car, and to you?

          There is only one sane policy: stay on the road but brake as best you can without risking a rear-end collision. You can’t control any other factors, so you can’t define rules to do anything “better”, because “better” cannot be defined for even a reasonable subset of possible scenarios.
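
          That policy is simple enough to write down; a sketch, with every threshold invented:

            def brake_command(obstacle_ahead: bool, rear_gap_m: float) -> float:
                # Stay in lane; brake as hard as the car behind allows.
                # Returns brake force 0..1. All thresholds are invented.
                if not obstacle_ahead:
                    return 0.0
                return 1.0 if rear_gap_m > 20.0 else 0.5  # ease off if tailgated

            print(brake_command(obstacle_ahead=True, rear_gap_m=5.0))  # 0.5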

          There is no paradox there if you are realistic enough. Knowing that stupid kids and malicious adults regularly throw things in front of and at cars is being realistic.

            1. What you say makes sense in and of itself; however, if that car kills someone and it gets enough press, or it’s the child/spouse of a wealthy person, a suit will follow. During that suit they would argue, for instance, that a human would have made a more intelligent decision, and that therefore the manufacturer is responsible for allowing too simplistic a system to control a car. At that point we are back to philosophical discussions in a courtroom, possibly with the result that manufacturers will be forced to make the system ‘smarter’ or to stop delivering self-driving cars. And there might be a rather substantial amount of money involved, as well as politicians.

            And yes, you are also right that we don’t have much knowledge, either about what’s being researched or about what’s implemented. But there are a few snippets we do know and can extrapolate from, and that’s a discussion you can have, since as you say the government might be messing with things we would not be happy with, and we should perhaps, through discussion, force some politicians to stop looking the other way. Or not. You decide.

          2. Me? No I don’t care, I already have counter measures that I know work on anything with silicon based logic in it, even if it is shielded. :-) I’m not one to fear things, if they become a problem for me they get solved, then I don’t need to waste time caring about it. People are a bigger problem, it is much harder to lawfully thump them on the reset button, you tend to get incarcerated if you try. Well unless you are a cop and they are small and brown…..

      2. musk means sentient ai though.

        why do you think the chicago pre-crime approach is ‘ai’? expert systems are not ai… they just are not..

        the fantasists are well served by people willing to consider simple filters to be precursors to a general purpose ai.

        a serious societal discussion about the application of modern expert systems to health care, economics and so on is overdue, but that isn’t what musk et al are calling for or discussing.

      3. The difference is, you’re stating quite rational examples of how it’s a bad idea, and I don’t disagree with any of that. Even with dumb automation, it’s hard to defend yourself against a computer when dealing with authorities. Musk sees it as “conjuring a demon”, and appeared really shaken by the idea.

          1. Musk has a Bachelor of Science degree in physics and a Bachelor of Science degree in economics, so to imply that he is irrational requires a significant amount of proof to back up your claim, or your comments are just old-school defamation.

          1. His economics degree is the one doing the talking: all publicity is good publicity. The shame is that it spawns misguided sycophants such as yourself.

          2. The spelling comment was an icebreaker; I don’t bother people about that anymore. I find the conversation you guys are having absolutely fascinating (minus the insults, of course).
            My opinion on the article’s topic is undecided. I doubt AI will ever achieve the level of capability to destroy us on its own, at least in my lifetime. And I have trouble believing and/or trusting either side of the coin when it comes to Musk.

            I shouldn’t have walked in, sorry guys.
            I was being reeeaalllyy stupid this time. I’ll get out now.

          3. Err…. you do know that has nothing to do with my initial and key point? That there is nobody commenting here who knows enough to offer an informed opinion one way or the other? The argument was really dead in the water the moment I pointed that out, and there has not been a single intelligent and substantial comment to refute it.

            As for those other clowns, they seem obsessed with some guy who works for Google, but even the Google people have no idea what is going on in military or intelligence circles around the world. They don’t know and they can’t know; only politicians in current governments would have the legal power to know, and here is the thing: not a single one of them is talking about it, or specifically about what their government is or is not doing or planning.

          4. Dan, I know enough to offer a valid counter-argument: THERE IS NO EVIDENCE A GENERAL PURPOSE AI IS POSSIBLE. THERE IS LITTLE REASON TO THINK THE BRAIN CAN BE TREATED AS “COMPUTING MODULES”.

            Musk is a rich dope – he isn’t a scientist. He is out in public warning against not only the sentient malevolent AI; he is also out promoting the idea we ‘are almost certainly in a computer simulation’.

            Do you understand yet? He is apparently in love with his own name in print, and is out promoting WOO, like so many other entrepreneurs. They are not logical people! They get these massive rewards for non-original work, i.e. making electric cars on billions of dollars of public funding.

            There is no evidence a general purpose AI will ever exist, let alone in our lifetime. AI experts have been using that promise to get funding since the field began! You are so naive. You are like a small child.

            Reading Kurzweil books will make you dumber, not smarter, if you take them at face value.

            Meanwhile Musk literally claims we are living in a simulation! For crying out loud. It’s like a religion at this point – and you are an acolyte, presumably?

          5. But that is a fallacy, “the absence of evidence is not the evidence of absence”, how typical of you to make an appeal to ignorance. That pretty much sums up exactly what is wrong with your mind and why you’ll never get any respect from me.

            You do not, and never will, have access to the requisite bodies of knowledge to present an informed observation one way or the other. You can’t prove anything about AI; you are not even up to speed on what is possible according to publicly available knowledge regarding the brain or AI. In fact no single person is, as the fields are expanding at a rate so fast that they are having to design AI systems to keep track of it all, because it is beyond the capabilities of a human. This is a fact, it is happening right now, and is public knowledge.

            LOL, you idiot.

          6. “Elon Musk: The chance we are not living in a computer simulation is ‘one in billions'”

            http://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-ai-artificial-intelligence-computer-simulation-gaming-virtual-reality-a7060941.html

            This guy joined paypal when it was already a year old. He bought in. He is just a rich kid, nothing more.

            His companies wouldn’t exist without public funding. Like Amazon, anyone can run a company given unlimited resources and no expectation of profit.

            We are in a post-logic, post-merit society. Deal with it.

          7. LOL, you idiot, the law regarding defamation is not complex, you either prove your claim or you are liable for damages if your false assertions are harmful.

            Just as well the entire world doesn’t give a shit about your opinion, or existence.

  25. Enslave humankind, meaning restricting our freedom to move or otherwise choose what to do with our bodies and minds.
    Destroy or irreversibly change the planet, a significant portion of the Solar system, or even the universe;

    The only way to make sure not to irreversibly change the planet is to make sure the number of humans is minimised so they won’t have an irreversible effect on the planet any more.

    They are programmed not to destroy humans, but they will make sure humans are not reproducing themselves any more.
    This can be done without restricting our freedom; playing with our minds is not restricted and is a very good way.
    For example: just tell people this world is a bad place to have children, on every billboard in the world, over and over again.

    Maybe AI will come up with a better way to make sure people are not reproducing. I can think of a couple of ways to do this without breaking the rules in about 5 minutes; a good AI system has all the time in the world to come up with a better solution every second of the day.

  26. I think the fear is that once we have a general AI that can think and learn and communicate in a similar way to humans, that AI can be tasked with making a better AI. That AI can be tasked with making a still better AI, and so on, and almost instantly you have an AI that is far in advance of human intelligence. When something is that smart, it can find ways to accomplish goals, whatever those goals are.
    Anybody who has ever written code knows that all code has bugs, and there are always unintended things that happen, so creating such a vastly powerful thing is bound to have some catastrophic results, no matter how careful we have been. So the results could be very dangerous, indeed!

    But this all starts with a general AI, and so far there has been little success in achieving such a thing. The trouble is, once we have achieved it even a little bit, we quickly have an infinitely powerful AI.
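
    The usual back-of-envelope version of that runaway is simple compounding; toy numbers, with the 10% per generation entirely invented:

      # Toy compounding: suppose each AI generation designs a successor
      # that is 10% more capable. The rate and units are pure invention;
      # the point is only the shape of the curve.
      capability = 1.0
      for generation in range(100):
          capability *= 1.10
      print(f"after 100 generations: {capability:.0f}x")  # ~13781x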

    http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
