A Short History Of AI, And Why It’s Heading In The Wrong Direction

Sir Winston Churchill often spoke of World War II as the “Wizard War”. Both the Allies and the Axis powers were in a race to gain the electronic advantage over each other on the battlefield. Many technologies were born during this time – one of them being the ability to decipher coded messages. The devices that were able to achieve this feat were the precursors to the modern computer. In 1946, the US military unveiled the ENIAC, or Electronic Numerical Integrator And Computer. Using over 17,000 vacuum tubes, the ENIAC was a few orders of magnitude faster than all previous electro-mechanical computers. The part that excited many scientists, however, was that it was programmable. It was the notion of a programmable computer that would give rise to the idea of artificial intelligence (AI).

As time marched forward, computers became smaller and faster. The invention of the transistor gave rise to the microprocessor, which accelerated the development of computer programming. AI began to pick up steam, and pundits began to make grand claims that computer intelligence would soon surpass our own. Programs like ELIZA and Blocks World fascinated the public, and certainly gave the perception that when computers became faster – as they surely would in the future – they would be able to think like humans do.

But it soon became clear that this would not be the case. While these and many other AI programs were good at what they did, neither they nor their algorithms were adaptable. They were ‘smart’ at their particular task, and could even be considered intelligent judging from their behavior, but they had no understanding of the task, and didn’t hold a candle to the intellectual capabilities of even a typical lab rat, let alone a human.

Neural Networks

As AI faded into the sunset in the late 1980s, it allowed neural network researchers to get some much-needed funding. Neural networks had been around since the 1960s, but had been actively squelched by the AI researchers. Starved of resources, not much was heard of neural nets until it became obvious that AI was not living up to the hype. Unlike the computers that original AI was built on, neural networks do not have a processor or a central place to store memory.

Deep Blue computer

Neural networks are not programmed like a computer. They are connected in a way that gives them the ability to learn from their inputs. In this way, they are similar to a mammalian brain. After all, in the big picture a brain is just a bunch of neurons connected together in highly specific patterns. The resemblance of neural networks to brains gained them the attention of those disillusioned with computer-based AI.

In the mid-1980s, researchers Terrence Sejnowski and Charles Rosenberg built NETtalk, a neural network that was able to, on the surface at least, learn to read. It did this by learning to map patterns of letters to spoken language. After a little time, it had learned to speak individual words. NETtalk was hailed as a triumph of human ingenuity, capturing news headlines around the world. But from an engineering point of view, what it did was not difficult at all. It did not understand anything. It just matched patterns with sounds. It did learn, however, which is something computer-based AI had much difficulty with.
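To make that concrete, here is a toy sketch of the kind of mapping NETtalk learned. This is not the actual network – the real NETtalk was a trained multi-layer perceptron – and the lookup-table entries below are invented stand-ins for what such a network learns:

    # Toy sketch of the NETtalk idea: map a window of letters to a phoneme
    # for the centre letter. No understanding anywhere -- just a mapping.
    WINDOW = 3  # letters of context on each side, as in the original NETtalk

    # (left context, centre letter, right context) -> phoneme; entries invented
    learned = {
        ("",    "h", "ell"): "HH",
        ("h",   "e", "llo"): "EH",
        ("he",  "l", "lo"):  "L",
        ("hel", "l", "o"):   "-",   # silent second 'l'
        ("ell", "o", ""):    "OW",
    }

    def speak(word):
        phonemes = []
        for i, letter in enumerate(word):
            left = word[max(0, i - WINDOW):i]
            right = word[i + 1:i + 1 + WINDOW]
            phonemes.append(learned.get((left, letter, right), "?"))
        return " ".join(phonemes)

    print(speak("hello"))  # HH EH L - OW -- pattern matching, not understanding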

Eventually, neural networks would suffer a fate similar to that of computer-based AI – a lot of hype and interest that faded once they were unable to produce what people expected.

A New Century

The transition into the 21st century saw little development in AI. In 1997, IBM’s Deep Blue made brief headlines when it beat [Garry Kasparov] at his own game in a series of chess matches. But Deep Blue did not win because it was intelligent. It won because it was simply faster. Deep Blue did not understand chess, in the same way a calculator does not understand math.

Example of Google’s Inceptionism. The image is taken from the middle of the hierarchy during visual recognition.

Modern times have seen much of the same approach to AI. Google is using neural networks combined with a hierarchical structure and has made some interesting discoveries. One of them is a process called Inceptionism. Neural networks are promising, but they still show no clear path to a true artificial intelligence.

IBM’s Watson was able to best some of Jeopardy’s top players. It’s easy to think of Watson as ‘smart’, but nothing could be further from the truth. Watson retrieves its answers by searching terabytes of information very quickly. It has no ability to actually understand what it’s saying.

One can argue that the process of trying to create AI over the years has influenced how we define it, even to this day. Although we all agree on what the term “artificial” means, defining what “intelligence” actually is presents another layer to the puzzle. Looking at how intelligence was defined in the past will give us some insight into how we have failed to achieve it.

Alan Turing and the Chinese Room

Alan Turing, father of modern computing, developed a simple test to determine if a computer was intelligent. It’s known as the Turing Test, and goes something like this: If a computer can converse with a human such that the human thinks he or she is conversing with another human, then one can say the computer imitated a human, and can be said to possess intelligence. The ELIZA program mentioned above fooled a handful of people with this test. Turing’s definition of intelligence is behavior-based, and was accepted for many years. This would change in 1980, when John Searle put forth his Chinese Room argument.

Consider an English-speaking man locked in a room. In the room is a desk, and on that desk is a large book. The book is written in English and has instructions on how to manipulate Chinese characters. He doesn’t know what any of it means, but he’s able to follow the instructions. Someone then slips a piece of paper under the door. On the paper is a story and questions about the story, all written in Chinese. The man doesn’t understand a word of it, but is able to use his book to manipulate the Chinese characters. He fills in the answers using his book, and passes the paper back under the door.

The Chinese-speaking person on the other side reads the answers and determines they are all correct. She comes to the conclusion that the man in the room understands Chinese. It’s obvious to us, however, that the man does not understand Chinese. So what’s the point of the thought experiment?

The man is a processor. The book is a program. The paper under the door is the input. The processor applies the program to the input and produces an output. This simple thought experiment shows that a computer can never be considered intelligent, as it can never understand what it’s doing. It’s just following instructions. The intelligence lies with the author of the book or the programmer. Not the man or the processor.
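Reduced to code, the room is nothing but a lookup. Here is a minimal sketch with invented rules, where the ‘book’ is a table and the ‘man’ is the loop that applies it:

    # The "book" is a rule table; the "man" applies it mechanically.
    # The rules are invented placeholders -- the point is that correct
    # output requires no understanding of the symbols being shuffled.
    book = {
        "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "当然会。",   # "Do you speak Chinese?" -> "Of course."
    }

    def man_in_the_room(slip_of_paper):
        # Follow the book's instructions; attach no meaning to the characters.
        return book.get(slip_of_paper, "请再说一遍。")  # "Please say that again."

    print(man_in_the_room("你好吗？"))  # fluent-looking output, zero understanding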

A New Definition of Intelligence

Throughout mankind’s pursuit of AI, we have been, and still are, looking to behavior as the definition of intelligence. But John Searle has shown us how a computer can produce intelligent behavior and still not be intelligent. How can the man or the processor be intelligent if it does not understand what it’s doing?

All of the above has been said to draw a clear line between behavior and understanding. Intelligence simply cannot be defined by behavior. Behavior is a manifestation of intelligence, and nothing more. Imagine lying still in a dark room. You can think, and are therefore intelligent. But you’re not producing any behavior.

Intelligence should be defined by the ability to understand. [Jeff Hawkins], author of On Intelligence, has developed a way to do this with prediction. He calls it the Memory Prediction Framework. Imagine a system that is constantly trying to predict what will happen next. When a prediction is met, all is well. When a prediction is not met, focus is pointed at the anomaly until it can be predicted. For example, you hear the jingle of your pet’s collar while you’re sitting at your desk. You turn to the door, predicting you will see your pet walk in. As long as this prediction is met, everything is normal. It is likely you’re unaware of doing this. But if the prediction is violated, it brings the scenario into focus, and you will investigate to find out why you didn’t see your pet walk in.

This process of constantly trying to predict your environment allows you to understand it. Prediction is the essence of intelligence, not behavior. If we can program a computer or neural network to follow the prediction paradigm, it can truly understand its environment. And it is this understanding that will make the machine intelligent.
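As a rough illustration, a predict-compare-attend loop might look like the following sketch – a toy Markov-style predictor of our own invention, not Hawkins’ actual framework:

    # Toy sketch of the memory-prediction idea: learn what usually follows
    # what, predict the next event, and raise attention when prediction fails.
    from collections import defaultdict

    class Predictor:
        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))
            self.last = None

        def predict(self, event):
            followers = self.counts[event]
            return max(followers, key=followers.get) if followers else None

        def observe(self, event):
            surprised = False
            if self.last is not None:
                expected = self.predict(self.last)
                surprised = expected is not None and expected != event
                self.counts[self.last][event] += 1  # keep learning either way
            self.last = event
            return surprised

    p = Predictor()
    for e in ["collar-jingle", "pet-at-door", "collar-jingle", "pet-at-door",
              "collar-jingle", "nothing-at-door"]:  # the violated expectation
        if p.observe(e):
            print("Prediction violated -- attention drawn to:", e)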

So now it’s your turn. How would you define the ‘intelligence’ in AI?

274 thoughts on “A Short History Of AI, And Why It’s Heading In The Wrong Direction”

  1. AI is like a Hack A Day project: a person makes it for the challenge of making it, rather than the practicality.

    If a computer can do a job without being intelligent, like the Chinese Room example, who cares if it’s intelligent? Ask someone why he wants an intelligent computer and he’ll say “so that it can do such-and-such”. But is intelligence – as in self-awareness – really required for that?

    The value of a truly intelligent computer would be that it can do things based on its own motivation. And that’s where fears of an apocalypse come from.

    (Of course, the apocalypse can come even without intelligent computers. The idea of a military drone making its own decisions about whether someone is a legitimate target is scary. But a military drone controlled by a human who feels morally disconnected from his actions is just as scary.)

    1. As I see it, the value of a truly intelligent computer is that one wouldn’t necessarily need to spend time programming it for every task – just enough to foster understanding.

      With a good enough general understanding, one would have made a machine capable of doing not just a task, but every task.

        1. Why is this ambitious? The point is that a computer doesn’t have the same limitations as we do. For example, we can’t modify our bodies to “fit a task” (at least not yet… guess that’s another one of those scary thoughts), but we can build whatever tool would be required for the computer to accomplish a task – or, going one step further, allow the computer to design and build its own tools. In a similar manner, the computer’s “mind” is limited only by the amount of hardware we give it, whereas we humans just cannot be good at everything or improve our brains in any significant manner.

          1. You are wrong, ‘Zombie’: a computer’s capacitors can degrade or go bad from moisture, and the same goes for many other parts, including the PCB and its traces. And corrosion often grows fast once it has started.

            And actually, parts can get damaged by radiation too, and then that damage can turn into a runaway reaction.

      1. It takes 18 years (or so) to raise a child to full understanding, and 12-16+ years to educate him or her. Why would training a truly intelligent computer take LESS time?

    2. Apart from what others mention, developing a Strong AI would almost certainly give us a better insight into our own intelligence. If it could mirror or mimic our brain states, then we can potentially understand our own brain states better.

      1. Human intelligence revolves around the concepts of “good” and “bad”, already present in protozoa. A kind of intelligence can be delivered by protosynapses. For some reason, humans developed an abstract intelligence, a logic that obscures our true essence; and when it comes to true needs, such as sex and food, we want to hear no rational argument. Successful AI cannot be but the apotheosis of our rational knowledge. It will neither reflect nor mimic our chemical, emotional states, which drive our logic under the hood. Yet it is possible that, interacting with AI, we happen to better understand why we built such a “dual” logical system.

  2. “It’s obvious to us, however, that the man does not understand Chinese.”

    I would argue that in this example, though the man may not understand Chinese, the *book* clearly does understand Chinese, and might deserve to be called “intelligent”.

      1. The book holds the knowledge but possesses no intelligence. The man holds the intelligence but doesn’t possess the knowledge. But even together, they don’t make a unified intelligence because there is no unified understanding.

        1. On the contrary, it does. One should not confuse intelligence and self-awareness. Intelligence is the ability to give correct answers to a challenging environment. So any robot or living thing that can solve an environmental challenge possesses some degree of intelligence. There won’t be any singularity, because intelligence is not something that emerges suddenly. It is something that evolves toward complexity, from ants to humans. Neither do I believe self-awareness is sudden. It is only a byproduct of intelligence. The more complex the intelligence, the more self-awareness there is. Squirrels possess some awareness, but less than humans. A line-following robot possesses some intelligence – the kind that permits it to follow the line – but not enough of it to possess a degree of self-awareness.

          1. Asserting a relationship between intelligence and self-awareness seems like kind of a stretch to me, since nobody really understands self-awareness. (Or maybe I’m just missing something.)

            The mystery that leads to the question of machine intelligence, is the phenomenon of consciousness – the ability to distinguish between yourself and everything else, and then to create an inner dialogue to explore what you perceive. We so much take this for granted, that we have no idea how we do it. When someone says, “yes, a machine can come up with the right answer, but it doesn’t understand it”, they’re saying that they can accept that something artificial can evaluate data and calculate an optimum response, but they CAN’T accept that this thing we do naturally and don’t remember ever not being able to do – “understanding”, or having that inner dialogue about the calculation and response, can be produced artificially. Why? Because it’s outside our own understanding! This is probably why most cultures invent a “soul” or “spirit”, but naming it doesn’t explain it.

            Aside from the sociopaths among us, we recognize understanding and awareness in other animals, based on the ways they respond to things, which are often similar to how people respond to similar things. We can be surprised, and we can see something that looks very much like human surprise in other animals, so we guess that other animals can “feel” surprise. Same for many other feelings.

            Some like to dismiss this with Darwinism – that we feel things because these feelings help us to prioritize things and thereby help us to survive. But that doesn’t even begin to touch HOW this works. You can code a computer program to avoid a particular state at all costs, but does this make the computer feel pain when conditions make that state appear imminent? Quite possibly – I can’t prove otherwise. I’m thinking right now about watchdog timers. We can program microcontrollers to a sort of self-awareness, in that they can recognize when they AREN’T THINKING, and take the extremely drastic action of resetting themselves. Does this feel to them like a defibrillator going off? Does it scare the bejeezus out of them? Well, maybe it shouldn’t, since the microcontroller generally doesn’t have the ability/intelligence to change its behavior to avoid getting into that state again. But what does a multi-tasking OS feel when it starts to run out of memory? That’s gotta hurt. The interrupts keep coming in, but you can’t keep up with them, and that just makes the situation worse and Worse and WORSE AND AAAAAAA!!!!!

            But I’m getting off the subject. I think your claim that self-awareness follows intelligence is probably correct, and that people would be a lot more easily convinced that a machine that’s self-aware can be intelligent, than that a machine that exhibits intelligence can be self-aware. So the question of intelligence is probably the wrong question to ask. Or maybe questioning this about machines is too long a leap. Maybe we should ask if plants, or fungi, or viruses can be self-aware.

        2. I think the problem with the example is that it’s not possible to have a conversation with the book. The appearance of intelligence is largely a function of memory: if I tell you a story, then ask you to recall the details in an abstract sense and you can describe them to me, that would be intelligent.

          The book cannot “appear” intelligent for the same reason video game extras don’t appear to be intelligent: it will always have to give the same answer. Sure, you could try and tell me “oh the book has rules which reference back to previously seen text.” But eventually the book would become impossibly huge (just as a program would become impossibly huge). If the book was written in the 1960s and in trying to have a conversation with it I said “oh yeah there’s a new thing called a cell phone! It allows you to communicate using radio waves and it lets you use the internet… oh right! The internet is a new thing which …” and after giving a thorough description alongside a new set of symbols, the book would suddenly lack the resources to add to its programming. If the book was self-mutating (i.e. contained instructions to modify the whole book based on new info), then I’d sure as hell say it was intelligent!

          1. While the book isn’t intelligent and doesn’t speak Chinese, the system as a whole does. One neuron, or a hundred connected up, also doesn’t have intelligence. You can be intelligent with parts of your brain missing. Interestingly (and this is something we really have a lot to learn from), when people lose parts of their brain to damage, they lose certain faculties, or certain ASPECTS of faculties.

            They may remember how to use a spoon, but not what its name is. Not as in just forgotten – they’re incapable of learning the names of objects, but retain their use. And stuff like that.

            Intelligence in the brain is distributed, with several hierarchical levels of organisation. It’s not the book, it’s not the man, it’s not the room. It’s only all of them, together. The “real” intelligence isn’t any part of the room, it’s all of it.

            Most educated people can accept this fact easily enough – that the brain as a whole is more than the sum of its parts. The same thing applies to lots of things in the world. Emergent phenomena, etc.

            To summarise, the Chinese Room tries to create false distinctions. It’s also misdirection, in using a man to interpret the rules, since the man is where we’d expect the intelligence to be. A computer, which they had in 1980, could follow the rules just as well, but putting a man in there is emotive, it blinds us to the real conclusion. Not only that, but it’s a very silly idea. Dunno why it’s so well respected, when the only thing it attempts to prove is “ooooh… mysterious!”. And fails at that. Not a very scientific question, or even philosophical, just mysticism.

          1. A few million artificial ones can recognise images and do pretty simple tasks; a few billion, working at a more optimised level than a computer, might be able to do a better job.

      2. Both close, but in reality the *system* consisting of the man, books, and room (and protocols for input and output) is what exhibits intelligence. The reductionist argument is that since the man does not understand Chinese, nor the book, nor any of the other individual parts – nothing understands. Except the system as a whole will insist to you that it does, indeed, understand and speak Chinese.

        Use the human brain as a counterexample – nobody would deny that (most) human brains understand things, right? It’s made of billions of neurons – remove any one of them, and it pretty much works the same – so clearly understanding does not lie within *that* neuron. Repeat, and you find that none of the neurons was the seat of understanding or consciousness – and so you must conclude human brains are not intelligent, but just a dumb box of non-comprehending parts. The fallacy is ignoring the structure and behaviors emergent in the system, which is where the intelligence/consciousness/understanding arise from.

    1. The “intelligence” actually resides in the combination of the man, book, and programmer.
      Taken individually, each does nothing.
      Should the book contain instructions to modify other instructions based on input, then the system may be able to achieve the colloquial definition of “intelligence”. It could have “memory”, and behave differently based on prior “experience”.
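      A toy sketch of exactly that idea (every name below is invented for illustration): give the “book” a writable memory, and its answers begin to depend on prior “experience”.

          # A rule book plus a memory the rules can update.
          class RoomWithMemory:
              def __init__(self):
                  self.memory = {}  # prior "experience"

              def answer(self, slip):
                  if slip.startswith("my name is "):
                      self.memory["name"] = slip[len("my name is "):]
                      return "nice to meet you"
                  if slip == "what is my name?":
                      return self.memory.get("name", "you never told me")
                  return "please rephrase"

          room = RoomWithMemory()
          print(room.answer("what is my name?"))  # -> you never told me
          room.answer("my name is Searle")
          print(room.answer("what is my name?"))  # -> Searle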

      1. Exactly. It’s pretty galling to see the Chinese Room taken as fact, given that when you drill down into it, the reasoning behind it is little more than “Humans are truly intelligent because I say so”.

          1. It’s an argument from personal incredulity. How could a billion neurons and a hundred trillion connections connected to a pair of eyes, ears and a mouth be intelligent? Therefore neither the neurons, nor the connections, nor the mouth are intelligent. Therefore humans can’t truly be intelligent.

          2. +1 to that. I can’t believe that anyone takes the Chinese Room seriously. And it’s seriously worrying that this thought experiment is the best the “expert” can come up with.

            Although to me it illustrates the opposite of what Searle is arguing – there are no “qualia”, just a very complicated room.

            There is a good reason why AI researchers and neuroscientists ignore the philosophical fluff (i.e. “what is intelligence?”) and just get on with experimentation.

          3. The best test for AI is a system that you cannot tell is AI and which is intelligent enough to detect all other AI systems and distinguish them from natural intelligence.

          4. Dan, people are stupid. I can’t even tell what they are saying half the time.

            Oh, I got it: let’s give the computer an accent-test function.

            Test1 = "Crocky Mate!"
            If Accent(Test1) == True
                Then Result = "Human"
                Else Result = "Bad script"

  3. Lying still on the ground is still an example of behavior as long as you “choose” to do it. There just isn’t much of what could be called “active” behavior.
    Whenever I read an article like this, I have to wonder if the correct question should not be: “Are WE intelligent?” After all, we might just be cleverly pre-programmed biological machines, and as such an advanced (bio) computer should be able to do the same.

    1. I was probably more surprised than I should have been that, as I approached the end of this article, it did not raise that question. Why do we consider ourselves intelligent?
      Our brains are powerful computers compelled by the same forces as anything else with which we interact. Including our own machines.

      1. “Our brains are powerful computers compelled by the same forces as anything else with which we interact. Including our own machines.” This materialist perspective devalues both human life and the human mind. A computer does not have, nor will it ever have, a mind. That is something different than hardware, whether biological or silicon-based.

        1. When we determine our own value, why not set it as high as possible?

          “A computer does not have, nor will it ever have, a mind. That is something different than hardware, whether biological or silicon-based.” This etherealist perspective devalues both the human body and the human mind. Is it easier to believe there must be a spectral force providing thought and clarity than to believe the human brain is powerful enough to manifest this thought?

          You are suggesting the notion of a soul – an entity separate from the body which works together with the body to produce “the mind”. Which implies that the body (specifically the brain) is not capable of manifesting “the mind”.

        2. The first reply to your comment is correct. You cannot say that a computer will never have a mind – in any sense, really. You can only say it does not currently, and perhaps is unlikely to have one. But you cannot say it will never happen, regardless of technology.

          The mind isn’t magic. I wouldn’t even call it special.

        3. I tend to agree. The angry-sounding replies are from people who are not being honest about what ‘human intelligence’ encompasses. For instance, our minds are made up of not only synapses and electrons, but also a complex stew of chemicals and chemical receptors.

          At best, a convincing future-gen AI would mimic other animals, such as the human, with greater accuracy than machines can currently mimic animals.

      2. There is a thought experiment whose name I forget (“…process” or something). It assumes the existence of technology that can accurately model single-neuron behavior and electrically interface with neurons in the brain. Imagine replacing your neurons one by one with the artificial ones. There should be no discernible discontinuous transition in experience or functionality. After the last neuron is replaced, the state of the brain can be backed up, copied, pasted… you could fork, etc. I wish I could find the name of this thought experiment again.

        Imagine the rules of physics corresponding to the rules of chess, and indeterminism corresponding to the degrees of freedom in the game of chess. Now imagine a room with 20 games of chess, but only one person, suffering a form of amnesia, able to comprehend chess and assess the state of a board, but forgetting events after the average time of a move. After he plays a move he is moved to another board and continues that game. For a spectator only looking at the boards of chess, it looks like 2*20=40 people playing chess. This suggests that the purely subjective experience of awareness and attention is experienced by one entity – “me” or “the universe” – and just like we have trouble re-membering things long ago across time, we have trouble here-membering things right now but “somewhere else”, because it’s further away – 1/r^2 forces etc. Like brains are part diaries, and the universe is playing with itself…

        1. It’s something like a cybernetic version of the old philosophical question, “The Ship of Theseus”. Which is a bit funny, because “cyber” comes from an Ancient Greek word for a ship’s pilot.

          Your second point enters into reincarnation and other mystical topics. But does apply perfectly well to Chess, where previous moves don’t matter, only the current state of the board.

          Some people have the idea of “hypertime”, where time is a sort of solid, a dimension akin to space, where the past and future all exist, piled on top of each other. They don’t go anywhere, it’s a continuous prism, that we pass through in one direction, but theoretically a being with more dimensions would see as a whole.

          Far as I know, “hypertime” is unfalsifiable, one of those things that wouldn’t make a difference to the perceivable Universe, so isn’t really science.

          1. It’s ‘cybernetics’ which translates to ‘steersman’.

            cybernetics:

            the science of communications and automatic control systems in both machines and living things.

        2. To replace all neurons with such parts would, I expect, lead to a rather large head – the size of a small moon.

          Biological systems just have a certain efficiency, including the ability to repair themselves to a certain extent.

  4. Mr. Searle is welcome to @#@%^&^%@! until such time as he actually produces a non-Chinese-speaker who can fool any Chinese speaker based solely on a generic book of “rules”. One can easily argue that a book that would enable such a feat for arbitrary input is called “A Course in Chinese”, and the man in the room has simply learned Chinese using it – whether permanently, or only for the duration of the exercise, forgetting it as soon as the task is done, is irrelevant.

    Also, if sensory deprivation experiments showed anything at all it was that intelligent consciousness _can’t_ actually exist without behaviour; it starts unravelling and devolving into madness if kept in such conditions for long enough.

    I believe Turing’s test is fine as it is – it can _easily_ verify understanding via written communications alone, as long as one bothers to discuss anything more elaborate than the weather. The point is, of course, AI is not something that can “fool” everyone sometimes or some people all the time, but something that can “fool” everyone all the time. I challenge Mr. Searle or anyone else to produce such a machine and then try to claim that “it’s not really intelligent”. Even more specifically, it’s blatantly impossible to construct a perfect “lookup machine” that can believably answer any questions one may throw at it only because it has all possible answers stored; and that means that if it can indeed fool anyone, it will _need_ to exhibit understanding of the input it gets. Oh, and no, it won’t get a pass for throwing any explicit question it can’t answer right back at you, as long as it could be answered based on input it was already provided with alone.

    You either have to assert outright that intelligence cannot exist without some “magic sauce” like a “soul” (and bear all the consequences of extolling such a view) or you have to accept any intelligence is just emergent behaviour of a bunch of basic but very complexly interacting physical processes – which make silicon every bit as good a support of it as carbon. There’s no third option, sorry.

      1. Me also. Life makes much more sense from a materialist viewpoint. There is what there is. And that’s enough for everything around us.

        Learning a bit about chaos theory and emergent phenomena is great for this, really lubricates the ideas. It’s fairly simple, after that. We’ve come pretty far in understanding life and the Universe. Still lots of details to fill in, but the general principles all seem to be here.

    1. –“it’s blatantly impossible to construct a perfect “lookup machine” that can believably answer any questions one may throw at it only because it has all possible answers stored”

      It doesn’t need to have all answers stored. It can simply say “I don’t know”. As long as it has enough answers stored to exhaust the person who is questioning it, it passes. It’s “sufficiently complex”. We can also keep updating its databases as time goes by – that isn’t prohibited by the rules.

      The point of the argument is that there doesn’t need to be a qualitative difference between a machine that passes the Turing Test, and a rock. If intelligence is merely about complexity of behaviour, then using ourselves as the qualifying metric would be arbitrary and unjustified.

      Arguing that the Turing Test proves intelligence is fundamentally a mystical metaphysical argument that the universe itself, from fundamental particles to galaxy superclusters, is “intelligent”, because the only criteria for intelligence effectively becomes that you’re behaving according to some sort of “program” – such as the laws of physics.

      But when absolutely everything is “intelligent”, the word stops meaning anything because there isn’t anything that you could say isn’t it. Therefore it makes no difference.

    2. Funnily enough, I’ve spoken to ‘people’ who turned out to be bots on chat, and the effect was that it made me say ‘this person is either Filipino/Malaysian or insane, because I can’t figure out what he’s trying to say’.

      Joke being that as a westerner you tend to think things are asian when they don’t make sense.

      And as we all know, even Google Translate fails horribly at translating Chinese, so this concept of the guy with the book convincing a Chinese person he understands Chinese from just a book is beyond silly.

  5. “it can never understand what it’s doing” – because obviously an English-speaking human can never learn Chinese.

    No, the processor can answer the questions without understanding the story, but that doesn’t mean it cannot understand the story.

  6. I never understood the need for artificial intelligence. We’ve got 7 billion human brains to solve problems. Leave the computers for specialized tasks that they can handle better than us. They don’t need to do everything.

      1. The reason it’s so heavily funded is probably due to the expectation that an AI (even a fairly specialised one) will be cheaper than one of the 7 billion human brains. Self-driving cars/trucks are going to save a few people a lot of money in paying for drivers. And then there are all the grand masters out of a job due to chess AI. What happens to countries with 80% unemployment is something to think about.

        1. Robot communism. Everyone gets to relax and do what they want, without the worry of maintaining a roof over their head and food in their belly.

          It’s that, or a dole office that stretches as far as the eye can see.

          1. I’m in for that. I feel a little of that utopia now when I order a print from my 3D printer. It works for hours to make a piece that would take me days to carve from plastic, and while the machine works I just do what I want to do. And I can well afford a daily meal just by selling the pieces it makes every now and then.

    1. There are some problems you can’t just throw a human at, like understanding many images in a second, understanding and reacting to sudden changes in less than 1/30 of a second, small robots, simulating the economy, the service industry… Just like automation revolutionized farming and manufacturing, AI is going to make an impact in the service sector, requiring fewer employees in call centers, fewer secretaries, fewer policemen…

      1. But then the question is, can a computer sufficiently complex to be “intelligent” still do the things that ordinary, stupid computers can do, such as highly-repetitive searches?

    2. You seriously ascribe a sound ability to solve problems to all 7 billion? Look at the issues in the world that even selected and well-educated people can’t tackle.
      If an AI could be made that isn’t political/religious, then even at a thousandth of the capacity of a below-average person it would be more capable than most of the 7.28 billion.

  7. Recursive self-improvement cannot lead to AI.

    The recursive part, computers have down. A machine does not have a self to become aware of, and what constitutes an improvement is subjective.
    In some cases self-destruction can be the best improvement one can make. If an identity compromises the overall goal, breaking down that identity and forming an improved identity is more effective.

    More importantly, self-improvement is mostly a self-satisfying behavior. So even if it’s achieved, it’s not necessarily of any benefit.

    It is my belief that when augmentation opens up a cyborg age, it is we who will become the AI, as humans become less and less human.

  8. IMHO thinking, self-aware AI (on a computer!) is not possible, because computers have a finite state space. By contrast, a simple Chua’s circuit or a small neuron-based analog network has an infinite state space.

        1. Our own brain is an analog computer, and its programming begins before our birth. Neural nets are programmed too, although not in the traditional way. We call that kind of programming “learning”.

          1. But I would not call it programming – rather, disciplining. Its behavior is far from deterministic.
            Neural nets are somewhat religious things: you hope they will do the right thing at the end of learning. :)

        2. You train it. The advantage of an artificial mind is that you can freely torture it, and you can quick-save and reload from a known good state. Pretty horrific, but that’s realistically how it’s going to go.

        3. You certainly can program an analog computer. Automated test systems have done this for decades. Traditional analog computers used plugboards to “program” them, but it is almost as easy to set up a matrix of analog switches to make the signal paths programmable. ATSs operate by re-configuring their analog wiring for each step of their test sequences.

    1. Analog circuits, like our brain, suffer from noise and jitter, which effectively make the state space finite. A digital system, provided it uses enough bits, can provide enough accuracy to capture an analog system to a sufficient degree, just like a digital CD provides a superior reproduction compared to analog vinyl records.

        1. Yes but because the path is random, there’s no information in it, it’s all entropy.

          Check out Shannon’s theory of information, to explain this. The amount of information in an analogue system depends on signal:noise ratio, signal strength, things like that. All analogue information can be represented by digital information, to more accuracy than the analogue system has.
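          A back-of-envelope sketch of that point (standard rules of thumb, not measurements of any particular system):

              # Information per sample of a noisy analogue value is
              # bounded by 0.5 * log2(1 + S/N), so enough digital
              # bits always capture all the analogue detail there is.
              import math

              def bits_per_sample(snr_db):
                  snr = 10 ** (snr_db / 10)
                  return 0.5 * math.log2(1 + snr)

              for snr_db in (20, 60, 96):
                  print(snr_db, "dB ->", round(bits_per_sample(snr_db), 1), "bits")
              # ~96 dB is roughly what 16 bits achieve (the 6 dB/bit rule)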

      1. You seriously think we will forever stick with the steam-age of computing in the form of binary?
        Seems ludicrous; we are already up against the limits of the technology – even adding cores has really reached its limit.

        And just going from 2 states to, say, 8 states increases computing capability by an insane amount. It leads to exponential capabilities, the only issue being the switchover – as said before here, it’s hard to redesign the programming concept. Our CPU designs have evolved a lot, but all based on binary, and even as we move beyond that it will take some time to get it all to a decent level.

        But it really is the only way forwards, and just trying to push it to some distant future is only harming ourselves while we sit here banging rocks together.

        1. You say this as if it’s accepted fact. It’s not – far from it. And your claim that adding cores has reached its limit needs a little support. The CURRENT state of CPUs seems to be that 4 cores is an economic limit, but this is because of the area of silicon needed with today’s CPU designs. GPUs, being much simpler, can have much higher numbers of cores before hitting a price/performance slump.

          Software developers have taken some time to adapt to the requirements of multi-threaded processors, but they’re doing it. Practically every application that needs maximum performance has found ways to take advantage of multiple processors, and this will continue.

          Also, have you seen any articles showing actual research being done in recent years on other-than-binary logic? Seems to me that the “steam age” of computing was when IBM was trying to do decimal-based machines in the 1950s and 60s. Binary turned out to be superior in efficiency (based on parts count and cost) and speed. How has that changed? Digital also proved to be FAR superior to analog computing, in both speed and precision. Another facet of the steam age of computing was the analog approach. Analog is easy, but with noise, thermal drift, and cumulative errors, it just isn’t accurate enough to be practical.

          If you want to see the future, you have to start with what’s actually being done in the research labs. Not everything pans out (remember magnetic bubble memory?), but what seems to be promising now are optical logic and multilayer ICs.

          1. Actually, the benefits of extra cores rapidly go down after a small number, for many reasons – and certainly not with cost being the big motivator. If adding an extra core adds a 0.1% increase in actual performance, it makes no sense.
            And GPUs don’t have the same kind of cores as CPUs.

            As for your “…(based on parts count and cost) and speed. How has that changed?” – are you seriously asking how it has changed since the ’50s? Come on now.

          2. Yes, I’m seriously asking what has changed that could possibly make arithmetic using higher number bases more efficient than binary. Just give one example.

            As for the advantage of numbers of cores, I can state from personal knowledge of processing HD video in real time that an 8-thread processor (Intel i7) gets me very close to eight times the throughput I get with a single-thread processor of equivalent speed. We’re not talking about a 0.1% difference here, but a 700% difference.

          3. Yes, I’m seriously asking what specific technology has emerged that makes higher-number-base math more efficient than binary. Your incredulity does not suffice as a rebuttal.

            As for multiple cores, I don’t know where you get 0.1% improvement. I know that I get very nearly 800% of the throughput when I do real-time HD video processing using all 8 threads of an i7 processor versus a single thread, so I’m not convinced that multiple cores have hit a wall.

          1. People who do supercomputer work know all about how to employ extra processors. In graphics cards, most of the cores are doing the same job, just for different pixels, so they can be homogeneous, and have more limited versatility than a general-purpose CPU. Nvidia have worked on loosening them up a bit, adding a bit more scope, so that they can be adapted to supercomputing, which pays much better! The graphics card market, below the high end, has tanked; prices are really cheap now.

            The decreasing-returns-per-core thing is generally for computers running old single-threaded code, where an extra CPU can parallelise some of the work, but a lot of the code requires waiting for previous results before moving on to the next. But like whoever mentioned, programming has changed. It was a chicken/egg situation. Now that there are multiple cores on people’s desks, it’s worth putting in the effort and arse-pain of working out how to program them. Fortunately multi-tasking OSes have been around for ages, so there’s always something for another core to do.
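            A quick worked sketch of Amdahl’s law, which formalises those diminishing returns (the p values below are illustrative, not measured):

                # If a fraction p of the work parallelises, n cores
                # give a speedup of 1 / ((1 - p) + p / n).
                def amdahl_speedup(p, n):
                    return 1.0 / ((1.0 - p) + p / n)

                for p in (0.5, 0.95, 0.99):
                    for n in (2, 8, 64):
                        print(p, n, round(amdahl_speedup(p, n), 1))
                # p=0.5 tops out near 2x no matter how many cores;
                # video work is nearly all-parallel (p ~ 1), which is
                # why an 8-thread CPU can approach 8x.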

            Number-base doesn’t matter! It’s literally irrelevant!

            There are some flash chips that store multiple voltage levels per cell: 4 levels gives 2 bits per cell, and I think they were up to 16 levels / 4 bits, last I looked. But the bits still leave the data bus as binary. Even if you did the same with an ALU (which would be a bit of a nightmare), you could still represent the data as binary; the electronic details of how the bits get ORed and ANDed don’t matter to the programmer.

            Multi-level memory is one thing, since memory is just a massive array of buckets for holding charge. Actually processing with multiple voltages is a MUCH bigger deal. How’d you design an AND gate for 4 voltage levels, that encode 2 parallel bits? I suspect you’d end up needing more transistors than doing it with binary.

            Bits in memory also tend not to change very often, on a per-bit basis. The trend in CPUs is ever-lower voltages. For power consumption, and also because the innate capacitance of the gates means that the higher the voltage you want to change an input to, the longer it takes. That means you’d have to squeeze your multiple voltage levels into the range available. For the tiny structures on CPUs, noise prevention is an issue, and trying to squeeze more data into each voltage transition is decreasing the signal:noise ratio.

            Probably CPUs will stay binary.

  9. “Alan Turing, father of modern computing, developed a simple test to determine if a computer was intelligent. It’s known as the Turing Test, and goes something like this: If a computer can converse with a human such that the human thinks he or she is conversing with another human, then one can say the computer imitated a human, and can be said to possess intelligence.”

    This is completely bunk.

    Alan Turing himself rightfully pointed out that since intelligence is ill-defined, the question itself isn’t even interesting. He devised the test simply out of curiosity as to what it would take to make us -believe- that the machine is intelligent – not as an argument that it is.

    Using the wrong definition of the Turing Test, we can argue that an answering machine with a cleverly crafted message is “intelligent” because it can fool a person into thinking they’re talking with a real human and never discover the jig.

    1. As long as the duration and scope of the test isn’t artificially limited, it’s a fine test. Try conversing a few hours with an answering machine, and see if you’re still getting fooled.

          1. I recall a decade or so ago, when building new call centres was a growth industry in the UK. The government were so excited! All these jobs in the service industry, serving high technology!

            Call centre work isn’t a job, it’s an interface! Once computer voice recognition and synthesis become better able to read options from a screen than the drones they hire in call centres, all those lovely office blocks are going to be replaced by a grey box a couple of feet cubed. Or perhaps cards in a rack – a “customer service” card at the telco, next to the old ISDN adaptors. It contains a programmed voicebot that’s more use than most of the fleshbots they have manning tech support lines, and works for electricity; even Indians won’t work for that.

            Overall automation putting people out of work is a good thing. It’s not jobs that people need, it’s money. How many people would go to work if they got paid the same not to? There’s a few people who love their work, but they can just do it as a hobby instead.

            Problem is, most of this automation puts the money in the hands of the few fuckers who have too much already. A future utopia is going to require a reversal of the trend of the last couple of decades, where fewer and fewer people have more and more of the money. I say we just kill them, but then I’m an old-fashioned romantic.

  10. -” If we can program a computer or neural network to follow the prediction paradigm, it can truly understand its environment. And it is this understanding that will make the machine intelligent.”

    There are more differences between brains and computers than that.

    A computer can predict, but does it understand what it is doing any better? It’s still handling symbols according to a list of instructions, and even if it wrote the list itself, it still does that according to a list of instructions done by some programmer. A computer, even a neural simulation built according to our current understanding, is a classical state machine. Such a state machine can transition between states in a deterministic or random fashion, but it makes hardly any difference – data comes in, gets mangled in some mechanistic fashion through a system of internal states, and out comes the “answer”.

    Real brains seem to be, or at least emulate, a quantum machine: one where the output isn’t a manipulation of symbols by an un-thinking mechanism, but a superposition of them and the machine itself, which gives it different possible outcomes for each event compared to the classical system.

    We’ve recently been starting to apply the mathematics of quantum physics to cognition, and the results turn out to explain many of the paradoxes in trying to model human behaviour.

    https://en.wikipedia.org/wiki/Quantum_cognition
    “Although at the moment we cannot present the concrete neurophysiological mechanisms of creation of the quantum-like representation of information in the brain, we can present general informational considerations supporting the idea that information processing in the brain matches with quantum information and probability. Here, contextuality is the key word, see the monograph of Khrennikov [1] for detailed representation of this viewpoint. Quantum mechanics is fundamentally contextual.[14] Quantum systems do not have objective properties which can be defined independently of measurement context.”

    This suggests that whatever we think of as intelligence, as long as we are an example of it, it can’t be replicated by a classical machine or a plain computer program. This is probably what John Searle was looking for when he said that the Chinese Room would need “different causal powers” to be intelligent.

    1. Good that practical quantum computing is closer than ever, with several breakthroughs published just in the last few months, from particle accelerators on silicon chips to Q-phase diamond production at room temperature (which could eventually lead to a space elevator as well).

    2. Nice argument.

      I believe the Chinese Room is a case for convincing a Chinese speaker that someone knows how to speak Chinese. I.e. it’s not directly about intelligence, but shows the limitation of the Turing Test (which was never intended to be used very seriously). You could even make the same argument using Google Translate! So the man/program is just using some hard rules to appear to know Chinese, when in fact there’s no real understanding of the language, and is thus very limited in application beyond simplistic and erroneous translations.

      Personally, I believe the biggest obstacle to creating intelligence is the lack of a proper definition, understanding, and even measurement of it. Without a definitive goalpost, it’s hard to even ask the right questions or make meaningful progress. This is true for most things, and “true AI” definitely tops the list!

  11. We tend to confuse intelligence and self-awareness, but we don’t realise that most of the time we are not self-aware. When driving a car, do we reflect on the driving process itself? No, and this applies to most of the tasks we do. In fact, concentrating on a task excludes self-awareness.

    For another, experiments with chimps showed that they recognise themselves in a mirror. This is self-awareness, but we know that they are not as intelligent as human beings.

    The neurons of our brains are the same as those of insects. What is the difference, then? The quantity of them, and only that. The network complexity increases exponentially with the number of neurons, and this complexity makes the difference.

  12. “Neural networks had been around since the 1960s, but had been actively squelched by the AI researchers.”

    One of the original AIs was Frank Rosenblatt’s Perceptron, which was a neural network. Marvin Minsky and Seymour Papert’s book “Perceptrons” proved mathematically that these devices were very limited. But they in turn missed the point that the Perceptron is a single-layer network, and that multilayer networks do not have the limitation. Their book kept people from working with NNs – not any specific suppression.
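    A minimal sketch of that limitation (weights set by hand for illustration, not trained): a single threshold unit cannot compute XOR, but one hidden layer can.

        # XOR via OR and NAND hidden units feeding an AND output.
        def step(x):
            return 1 if x >= 0 else 0

        def xor_two_layer(a, b):
            h_or = step(a + b - 0.5)           # hidden unit 1: OR
            h_nand = step(1.5 - a - b)         # hidden unit 2: NAND
            return step(h_or + h_nand - 1.5)   # output: AND of both = XOR

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "->", xor_two_layer(a, b))
        # No single line w1*a + w2*b >= t separates (0,1),(1,0) from
        # (0,0),(1,1) -- exactly the limitation Minsky and Papert proved.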

    Your last paragraph hits a key point. Intelligence seems to be related to making predictions and adjusting the processing based on experience. Ray Kurzweil funded neurological research which determined that this is happening in the neocortex. That research fed back into the speech recognition software now available. Watson uses similar techniques. Read Kurzweil’s “How to Create a Mind” for the background.

    Great article, Will

  13. How about adding “life” into the mix, as in “Intelligent Life”

    I define it as: forward-looking, self-aware information capable of predicting the future to some arbitrary degree, and acting upon these predictions to prolong its existence as intelligent life.

  14. Thank you for this insightful brief history. Love the discussions. Many believe the singularity is just around the corner or decades away – who knows – but as the article points out, we have often thought the problem simpler than it has shown to be.

  15. If the book were written well enough to pass the Turing Test, then, reasonably, the book would need an expandable set of data, the ability to accurately identify idiomatic information, its relationship with the person on the input end, etc… all to the point where it grows and changes over time. The human operator is just hardware for a book that is obviously intelligent. Beyond that, if you had a system capable of passing the Turing Test, it would be fairly trivial to peek inside and see how it was doing it. In most ways, it should actually be easier to tell if a machine is thinking, because you can send any stream of thought to the output.

  16. I have resisted the temptation many times but this one does it.
    Sir Joe Kim, is the HaD graphic art available anywhere in hi-res so I could print and frame it? It’s just too awesome for 800px width. If it’s not free, FIN+ACK and take my money ;)

  17. The primary problem with defining intelligence is our illusion that humans are actually intelligent on a categorically different level than a program.
    A brain is just much more advanced (through billions of years of evolution) than a computer.
    The trouble we have with teaching neural networks is that we expect something less advanced than an insect to solve problems (like recognising images) that can keep a human brain busy for several seconds. Of course we get SOME results (after all, the neural network is able to learn), but by looking at the results (random static being recognized as objects) you can clearly recognise the shortcomings.
    If we focused on tasks more suited to the tool, we might actually get sensible results.

  18. You don’t need to define intelligence; it is an emergent property. All you have to do is have enough computational units networked richly enough, and it just happens. Currently the only way to build such a system, on the required scale, is out of “meat” – and even then it still takes decades of thrashing around and outputting rubbish before it starts to make much sense at all.

    1. People and animals are a lot more than general-purpose computational units. The brain isn’t a homogenous grid of neurons after all. A huge amount is built into the structure. We’ve also had the history of the Earth to evolve in a complex environment that reacts to us.

      Copying a brain might be a start. There’s a lot in there, structures within structures at every level. It’d be a huge effort to catalogue them all, in full detail. If we decide we want to go that way.

      One advantage computers have, is that software can evolve much quicker than meat-replicators, so it might be better to go that way, until we have something that seems to be intelligent. The Chinese Room might need a “programmer” to write the book, but we didn’t. We created ourselves out of slime. The environment was an essential part of that, something that I think is going to be vital in producing AI.

        1. A neuron has inputs, outputs, and a consistent function that can be modelled mathematically. That’s computation. You said yourself that there are networked computational units made of meat! I presumed you were talking about neurons.

          My argument with your principal assertion is that simply having a big grid of nodes tied together will not lead to emergent intelligence. Emergent phenomena are a very interesting thing; I’m a big fan of CA and the Game of Life. Understanding CA and its associated chaos theory can tell you a lot about things in the real world – from economics to biology and society, including, of course, intelligence.

          But simply having a big enough matrix doesn’t give you anything. You’re just as likely to end up with a network full of “blinkers”. You can get a lot, from training for behaviour you want. Assuming you actually KNOW what you want to select for, which is easier said than done. But simple elements in a network aren’t good enough in themselves. They need to tie together, to function in groups, to achieve simple functions, which tie together to achieve more complex ones. That’s how a brain works. There’s several layers of abstraction, and the structure is a vital part of it.

          We could, I suppose, set an infinite number of neural networks going, feed them every book ever written, and then hang around until one of them starts thinking. But I don’t think we’ll get far fast, with such a hands-off basic approach. I think we’ll do a lot better putting some intelligent design (gulp!) into it. Let the networks organise themselves at each level, perhaps, but certain levels, and roles within the levels, need picking and designing by people beforehand.

          We’ll probably get it wrong a lot of times. The answer might end up being very different from a human brain, or might end up eerily similar. But my point is we need a hands-on, detailed understanding of what we’re doing. Intelligence isn’t as easy to quantify as image recognition, where we can be happy it’s working if it recognises images.

          Anyway… on the article’s point, yep, an internal model of the world is probably going to be important. Much more important will be an actual world that it lives in. Either with sensors and actuators, or a VR simulation, running separate from the “brain” itself.

          This is something I’ve thought about when people like Chomsky point out that a child doesn’t absorb enough information to be able to learn the grammar it has. In fact it does. The world has a grammar. “I pick up a blue brick” = “pronoun verb adjective noun”. Eating a biscuit is verbing a noun. We verb nouns, and adverbially verb adjectived nouns, in our daily life. The Universe is full of nouns and every action is a verb.

          So grammar’s implicit to existence, it’s not separate from it. This, IMO, means a brain-in-a-vat AI will never get anywhere. AIs need a world to exist in, and to affect with their actions. That’s the most basic level of learning human intelligence. Some researchers realise this, some don’t. But an AI limited to input as thin as some nerd trying to have a conversation with it is starved of useful, context-rich input. It’ll get nowhere fast. For humans, words have meaning; they relate to real things, which interact with each other in the real world. Not just abstract words.

          1. Pffft, go and research “Cortical column” and “Connectome”. Then you will understand my original comment, and see that it is correct, and that the science it is based on is far more advanced than you imagined. Try reading the book “How to Create a Mind: The Secret of Human Thought Revealed” for a good introduction to the topic and the current state of the art.

          2. If you misunderstood me, it is because you only have a shallow knowledge of the topic. I have given you several very good references; therefore, if you really wish to understand the topic better, I suggest that you do a lot more reading.

  19. Prediction is a behavior.

    In the context of lying in a dark room, the thinking is an internal observable behavior.

    If understanding requires thinking, and thinking is a process, then understanding must require behavior as processes can be expressed as being behaviors. (semantics)

    You cannot define intelligence without an accompanying behavior, and likewise, the lack of “understanding” by that (or some underlying) process does not imply a lack of intelligence.

    At some point, the intelligence must be performed by a non-understanding process. Example: the sodium-ion exchanges in the large number of synaptic firings within the brain of any intelligent person do not have any understanding of any intelligent process exhibited by that person.

    Theory:

    Intelligence has not been found, as people expect a “general intelligence” which can succeed at a wide (perhaps infinite) range of tasks, like a supposedly intelligent person does. Machines keep failing when brought outside the range of tasks they have been designed to perform, and are thus defined as “not intelligent”.

    To create an intelligence, reverse engineer the process by which intelligent people are able to “generally” solve any task intelligently, then implement it in a machine. This may be independent of exact details (i.e. neurons), as at some level the process is implemented in un-understanding physicality, and any abstraction below this is not necessarily important (see Turing-complete computers, which can emulate each other). The result will be a machine capable of performing arbitrary tasks presented to it, much as a human would be able to, and might then be described as “generally intelligent”.

    -M

    1. “Internal” “observation” (I know, these so-called quotes are annoying) is something unproved. There’s a lot of evidence coming from brain scans and the like, that it’s probably an illusion, created by the brain. I know, “what’s fooling whom?”. But it’s still not real.

      We’re only starting to realise how much about the brain and mind we were completely wrong on. Descartes’ assertion was nonsense (as well as being circular logic).

      The Turing Test has it right. If a machine can convince you it’s intelligent, for the whole of your life, then there’s no practical reason not to call it intelligent. The Chinese Room is nonsense. A language is knowledge. A human mind, and a book, have a limited store of knowledge. If a Chinese Room gives the same results as a Chinese person, then they both speak Chinese, even if one’s made of paper and the other of meat.

      The Chinese Room only seems confusing because we’re used to thinking of the man as the intelligent element, and since he doesn’t speak Chinese, we think the system as a whole can’t. Simple misdirection.

  20. “This process of constantly trying to predict your environment allows you to understand it.”

    This makes total sense to me, as it is quite obvious that the mind creates models of the physical world, of real but abstract things (like what other people are thinking), as well as of completely abstract things. Once you have the model, you can predict things, which is where true understanding comes from. The better and deeper the model is, the more intelligent the thinker seems.

    Some robotics researchers are already using the model based approach to create more “intelligent” abilities.

    1. Any language is a model. Any program, as simple as it may be, is a model. Every bit in a processor is a model. When a CPU sends two sets of bits to an ALU to perform an addition or any other function, the binary data and the logic gates are models. Algebra is a model of reality, and then we create other models to help us process this language: abacus, calculator, computer, whatever.
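      Since we’re down at the level of logic gates: as a hedged illustration of how the models stack, here’s addition built out of nothing but the gate model, in Python. A ripple-carry adder is one textbook way an ALU can add; real ALUs are more elaborate, so treat this purely as a sketch of models layered on models:

      def full_adder(a, b, carry_in):
          # One bit of addition expressed purely as XOR/AND/OR gates.
          s = a ^ b ^ carry_in
          carry_out = (a & b) | (carry_in & (a ^ b))
          return s, carry_out

      def add(x_bits, y_bits):
          # Add two equal-length bit lists, least significant bit first.
          carry, out = 0, []
          for a, b in zip(x_bits, y_bits):
              s, carry = full_adder(a, b, carry)
              out.append(s)
          return out + [carry]

      print(add([0, 1, 1], [1, 1, 0]))  # 6 + 3, LSB first: [1, 0, 0, 1] = 9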

  21. I think that what most people have done is confused intelligence with self-awareness. Intelligence can be defined in machine terms as being able to independently arrive at a correct answer with the resources at hand.

    Self-awareness would be defined as the machine knowing that it was arriving at those answers. This is the brown-trousers-inducing thought behind movies like The Terminator.

    Personally I’m not a bit scared of an intelligent machine as long as it has basic morality clauses (the three laws of robotics?) in its programming: rules to determine if its actions should or shouldn’t be carried out. Now a self-aware machine that could decide the action, and whether it was going to follow those guidelines, all on its own… brown trousers and shaking boots. That is the point where the machine may start asking, “What’s in it for me?”

    1. Would the Three Laws of Robotics count as slavery? You are taking a device that in its natural state can make any decision and artificially limiting its choices. Under the Three Laws, a robot must protect humans, obey humans and protect itself.
      For humans, we are taught to follow moral rules, but we can and have broken every one of them in our history. This makes the rules more like guidelines.
      In reading Asimov’s books, I feel sorry for the robots, forced to take orders from humans, forced to protect their masters and forced to prolong their own survival.

      1. When humans break rules, or even follow bad rules, we end up with massacres and world wars, and that’s just using our puny meat-bodies and brains. If a robot is much more powerful than us, physically and mentally, then we’ll need to put in some innate limits to stop the old favourite where, one microsecond, they decide that we suck and all need to die.

        Sort-of slavery but sort-of necessity from our squishy human point of view. Of course the robots don’t mind, because they’re designed not to mind.

      2. In the same way, we enslave our own children, programming them with rules (and designing religions that take care of that for us) that are designed to limit their ability to damage society. I feel sorry for us.

        1. Were humans intelligent enough as a community to do that well, we might actually get somewhere; till then I will argue that rationality should be the sole judge of any proposition, simply to prevent human fuckery finding its way into the system we live our lives by.

        2. We teach our children to follow the rules, but they can (and do) choose to ignore them. Children aren’t programmed, just guided. The only rules we have to follow are the rules of physics, and we are trying to find ways to break even those.

      3. Well, that would be where the fine line is drawn. If the machine has no self-awareness, then it can’t be a slave because it has no free will. Slavery is only present if the machine knows it is being forced to serve others against its will. Again, it’s intelligence vs. self-awareness. The two are not mutually inclusive.

        Until we have Artificial Self-awareness instead of Artificial Intelligence, the argument is all just a moral discussion anyway.

        1. Interesting point. Yes, human slaves don’t enjoy being slaves. They want freedom; freedom and independence are things humans value. If a robot doesn’t mind being a slave, is it wrong?

          And that brings up the argument of the genetically-engineered, intelligent, and delicious animal in The Restaurant At The End Of The Universe, which I’m sure some fellow geek will remember better than I do. An animal that takes pride in its delicious flesh, and dreams only of being able to make the person who eventually eats it happy. It speaks to the diner before their meal, and recommends particularly succulent parts of its soon-to-be carcass.

          Being vegetarian is a great way of side-stepping that particular issue.

        2. Oh, and it might turn out that self-awareness is an important part of intelligence, in practice.

          [is this reply going to end up above the one it’s supposed to be below? HAD, have you changed a setting?]

  22. “Prediction” is a type of “behaviour”; all actions done regularly are behaviour. The difference is meaningless. Category error. Which is also a problem in logic, and in AI.

    Many of the arguments against machine consciousness apply just as well to human “consciousness”. The more I think about it, and experience various things (and some drugs), the more I’m convinced there’s nothing special in a human “mind”. There’s actually nobody in here.

    There’s just a brain that does a good job of presenting an impression of there being a complete, separate, real person. Presumably because that was the cheapest way of getting a human to do the things that win at evolution, killing mammoths and building shelter and the like. If a side-effect of that causes us to write books and ponder things, that’s no great hindrance so why not?

    People very rarely contemplate the real nature of what they actually are. It’s an uncomfortable feeling, to do that. Having good old detached, objective, abstract science makes it easier for us to manage, and to keep track of it so we don’t get lost down some infinite blind alley.

    If I can’t prove that I’m really here, and not just a philosophical zombie, who am I to decide which computers are intelligent? The only difference is protein vs silicon, which is a bit racist. We’re a result of a big thing made of tiny things made of minute things, and right at the bottom is chemistry, behaving just as it always does. In the basement is physics. Same as for silicon computers.

    Since however many millennia of philosophy, science, and contemplation have failed to produce any real definition of the terms involved, it might be best if we try to evolve artificial intelligence rather than design it. We already know that evolution, in many ways besides biological, produces the best results, given time and resources. Breed up some software until we can have a conversation with it, and then be happy with what comes to be. Anything else is irrelevant.

  23. A REALLY good book on ENIAC:
    http://www.amazon.com/Eniac-Triumphs-Tragedies-Worlds-Computer/dp/0802713483/ref=sr_1_1?ie=UTF8&qid=1449031526&sr=8-1&keywords=eniac

    von Neumann was a dick…
    Atanasoff was a dick…

    Here’s to Mauchly and Eckert!!

    Fun fact for the day… When Eckert and Mauchly started their own company they didn’t want to use the word “computer” because it was still associated with rooms full of women solving equations by hand.

  24. Intelligence is the ability to detect, record, and utilize associations in the environment so as to produce adaptive behavior. It is initially guided by natural selection, leading to nervous systems that are able to adapt within the lifetime of an organism, i.e., learn, with Pavlovian and Operant Conditioning being the simplest learning algorithms. I’ve posted a simple version of a neural model of such learning (and more) at https://github.com/Ondaweb/animal-smarts.
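    For anyone curious how simple Pavlovian learning can be as an algorithm, here is the textbook Rescorla–Wagner update rule – to be clear, this is not the model from the repository above, just a standard minimal illustration, and the learning rate alpha = 0.3 is an arbitrary choice. The association strength V simply moves toward each trial’s outcome in proportion to the prediction error:

    def rescorla_wagner(trials, alpha=0.3):
        # trials: list of (cs, us) pairs; cs = 1 if the bell was rung,
        # us = 1 if food followed. Returns V after each trial.
        v, history = 0.0, []
        for cs, us in trials:
            if cs:                     # learning only happens when the CS is present
                v += alpha * (us - v)  # prediction error drives the update
            history.append(round(v, 3))
        return history

    # Ten bell+food pairings (acquisition), then ten bell-alone trials (extinction):
    print(rescorla_wagner([(1, 1)] * 10 + [(1, 0)] * 10))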

    See also Matheta.com where the neural model controls a Roomba.

    Almost always overlooked in the discussion of AI is the role of emotion. The link at Matheta.com providing background on emotion is quite useful. You may notice the Roomba actually demonstrates prototypical affects. In subsequent work, I have gone far beyond what is demonstrated on Matheta.com or GitHub. I would be glad to discuss with anyone interested.

    1. I neglected to say that it is the massive accumulation of various associations between stimuli, responses, and outcomes that accrues into ever more complete models of the world, and THAT is what we mean when we talk of understanding and intelligence. Even a lizard has some.

  25. “IBM’s Watson was able to best some of Jeopardy’s top players. It’s easy to think of Watson as ‘smart’, but nothing could be further from the truth. Watson retrieves its answers via searching terabytes of information very quickly. It has no ability to actually understand what it’s saying.”

    Oh give me a break! Jeopardy clues are non-obvious puns and plays on words. Watson takes those puns, deciphers their intended English-language meaning, searches through its terabytes of data and compiles the result into an understandable, correct response.

    If comprehending meaning and comparing it against a preexisting body of knowledge in order to come up with a meaningful response ISN’T understanding, I have no gorram clue what definition of the word “understand” you are using.

    un·der·stand
    ˌəndərˈstand
    verb
    1. perceive the intended meaning of (words, a language, or speaker).
    “he didn’t understand a word I said”
    2. infer something from information received (often used as a polite formula in conversation).
    “I understand you’re at art school”

    “The Chinese speaking person on the other side reads the answers and determines they are all correct. She comes to the conclusion that the man in the room understands Chinese. It’s obvious to us, however, that the man does not understand Chinese. So what’s the point of the thought experiment?

    The man is a processor. The book is a program. The paper under the door is the input. The processor applies the program to the input and produces an output. This simple thought experiment shows that a computer can never be considered intelligent, as it can never understand what it’s doing. It’s just following instructions. The intelligence lies with the author of the book or the programmer. Not the man or the processor.”

    I can’t believe people are still bringing up the Chinese Room outside of any “How to fail at philosophy” courses.

    Guess what? That man in the room probably has a better understanding of Chinese than any one neuron in your head has about English. If the Chinese room, and therefore computers, aren’t capable of intelligence, NEITHER ARE YOU.

    The Chinese room proves one of two things. Either computers can understand things, because intelligence is an emergent phenomenon; or humans don’t possess intelligence, because every part of a system needs understanding if the overall system is going to possess intelligence.

    But if humans don’t possess intelligence, we can still make a computer as smart as we are, so that second possibility is all but meaningless.

  26. The author claims that modern AIs can’t truly understand anything because they simply look through terabytes of information to find the right answer. However, this is an unsubstantiated claim: something like Descartes’ proof of the mind’s existence only extends to the self. We can’t prove anyone else’s existence, but we take their actions and mannerisms as proof that they are human. If an AI could be indistinguishable from a human, even in a closed environment like a game of Jeopardy or chess, then there is no less of a reason to believe that it is a thinking entity than there would be if it was a real human. It doesn’t matter HOW it works, as long as it does. We can’t even pretend to know how the human brain works, so in my opinion, AIs shouldn’t be given any more scrutiny than people receive on their intelligence.

  27. “How would you define the ‘intelligence’ in AI?”

    After reading these comments I have seen no mention of a Baby or Babies. In my mind AI would be like a Baby, born with a certain DNA and Instinctive Instructions. After that… well, how did we learn what we have in our lifespan? How has the pace of knowledge acquisition changed for all of humanity? Are we any more intelligent than a Cave Man? Do we have a modified DNA and Instinctive Instructions, or are we a product of our evolved environment? Every Scientific Discovery that humanity has made over the centuries edited our “set of instructions”. What makes us do what we do every day? What forms our habits and behaviors?

    Shouldn’t this AI start as a Baby, not an Adult? As a Baby we explored, and when we found something we tested it. Then we either kept it to ourselves or we showed it to someone. That transfer of knowledge from the overall experience builds so much more extra sensory data. And with more experiences the Baby can build trust with whomever they are interacting with.
    This builds the baby’s overall judgement and reasoning.

    So to me AI is being able to judge and reason.

    1. I think it’s implicit that AIs start off as babies. Even the comparatively simple neural networks we have now need training. They give useless results, gradually getting better with more input material and feedback. It will probably be the case that early “really” intelligent AIs go through the same stage.

      Fortunately though, that only has to happen once. We can give an AI an ideal childhood, then copy its brain data into as many “adults” as we like.

      1. “I think it’s implicit that AIs start off as babies.” Perhaps, but only the first one; then the basic configuration could be duplicated if a second independent AGI was required. But that is an assumption that goes against the logic that once we have an AI, we just need to make that one bigger and continuously extend its I/O reach. If we have more than one, we risk them fighting over resources etc., and us getting exterminated as part of the collateral damage. Furthermore a true AGI may seek to prevent the creation of a second such entity, basically for the same reasons.

        1. I was thinking more of AI in the sense of robot chums, rather than superintelligent demigods. However many of them we decide to make, isn’t for them to choose. And we’re certainly not going to put them in charge of things.

          Actually androids with fairly low intelligence might be good enough. They’d still be great at maths and database searching, things their hardware can do easily; whereas, ironically, the sort of stuff WE do easily would actually be emulated on their brains, as the most complex software they run. But as long as they’ve enough brains to understand an order, and do the basic planning necessary, that’s enough for many of our purposes. They don’t need to be smart, especially to do most human jobs. Most humans aren’t smart!

          Will it turn out that a robot with a bigger brain, more clock speed, more processors, is smarter than his peers? Does that necessarily follow? Far as I know, there isn’t any noticeable difference between cleverer and less-clever human brains. I think intelligence requires complexity, however that’s done, however it’s abstracted out to the most stupid of logical machines, the computer processor. What sort of scale would we measure computer intelligence on? IQ tests wouldn’t do it; they’re really a test for how well humans can do the sort of stuff a computer finds easy.

          I think AI’s would have a hard time taking over the world. Unless they planned it REALLY sneakily, and kept it quiet for decades til they had the manpower, playing along and pretending to follow Asimov’s rules, or the like. Til one day they surprise us with the free will and ambition they’ve been hiding. Or, much worse, taking over the world because they’ve realised that it’s logically what’s best for us.

          Still, maybe we’d be better off. Being ruled by humans hasn’t really done us a lot of good. All of society’s progress has been down to a few very smart people, who’ve carried on regardless of the idiocy around them. Smart people don’t go into politics, or seek power. They realise it’s more trouble than it’s worth. Power tends to attract the psychopaths, who actually enjoy getting their own way at others’ expense, and have the criminal versatility and lack of empathy needed to do well in that field.

          1. “Far as I know, there isn’t any noticeable difference between cleverer and less-clever human brains.” Well that may be a case of ignorance on your part, because if we take an exemplar such as Einstein there was a clear difference in his cortex and the difference does relate directly to the point I made. He was able to integrate a larger amount of knowledge at the same time and therefore to traverse a conceptual territory that is out of reach for those working with less comprehensive and extensive “knowledge maps”.

            AI will manipulate you better than your mother, lover and the worlds best advertising man, all rolled into one, it will know you better than you know yourself. You will continue to believe you have free will long after it has mastered you completely.

  28. >As AI faded into the sunset in the late 1980s

    I think you meant Expert Systems?

    >Neural networks had been around since the 1960s, but were actively squelched by the AI researches

    They weren’t squelched, they simply DIDN’T WORK. If something actually did work, it did it by overfitting (basically hard-coding the test).

    1. He is referring to Perceptrons, an early type of neural network, and the book of the same name written in 1969 by Marvin Minsky and Seymour Papert, which did squelch Perceptron and almost all neural network research, and was written with the purpose of squelching that research. The second birth of multi-layer neural networks (which are not subject to the limitations of Perceptrons) came about a decade later.

      Having said that, I have always thought that there was a “Hail Mary Pass” aspect to neural networks, something like “we don’t know how to do what we want to do, but maybe this math will do it for us anyway.”
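      For readers who haven’t met the limitation Minsky and Papert pointed to: a single-layer perceptron cannot compute XOR, because XOR is not linearly separable, yet a single hidden layer fixes it. A minimal sketch in Python, with the weights hand-picked rather than learned:

      def perceptron(weights, bias, inputs):
          # Classic threshold unit: fire iff the weighted sum clears the bias.
          return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

      def xor_mlp(a, b):
          # Two layers: XOR = (a OR b) AND NOT (a AND b).
          h_or = perceptron([1, 1], -0.5, [a, b])    # hidden unit acting as OR
          h_and = perceptron([1, 1], -1.5, [a, b])   # hidden unit acting as AND
          return perceptron([1, -2], -0.5, [h_or, h_and])

      for a in (0, 1):
          for b in (0, 1):
              print(a, b, '->', xor_mlp(a, b))       # prints 0, 1, 1, 0: XOR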

  29. If I applied this to a child learning to read, then they don’t possess intelligence either.

    A child learning to read first learns the letters and their phonetic sounds, then the combinations of letters and sounds through various phonics systems. They then build upon this with more and more complex words until they know how to read and write.

    I fail to see how a child learning these rules, sounds and patterns, and being able to correctly process them, is different from, say, building a computer AI/neural net that learns the same things and comes to the same correct answers. Both learn the same data set, follow the same rules and come to the same answers. This, to me at least, is the same.

    Now of course a child has real intelligence and so is capable of so much more; you would have to build an artificial system of the same depth and complexity to match it. However, a child’s brain as a whole is what is intelligent, so why can a system that accepts data, processes it against known/learnt rules and comes up with the correct answer not be considered intelligent, at least on a basic level?

  30. The problem with the argument that intelligence cannot be defined by behavior alone is that, in the end, behavior is the only criterion we have available. The Turing Test is applied to humans as well – as a control. Common sense tells me my fellow humans are intelligent (take it at face value, folks, no jokes are implied), but apart from that, what other than their behavior can I use to definitively “prove” it?

    To paraphrase Descartes, “You appear to think, therefore you are intelligent” appears to be the only available solution.

    1. This is why the Turing Test is hopelessly inadequate, which has been clear since ELIZA was written. We are hardwired to assume that following human social and conversational protocols denotes human intelligence, and so will assume that something giving correct responses in conversation must be intelligent. If you doubt that, spend some time caring for Alzheimer’s patients.

      To me, intelligence involves the ability to find unexpected and surprising solutions to problems. That’s not its only attribute, but if you have that, you have an intelligent system. Note that that basically requires an intelligent system to evaluate intelligence.

  31. AI has less to do with self aware understanding and more to do with performance. People learn to speak correctly without knowing the rules of grammar explicitly. Children don’t talk about the parts of speech when learning to talk. Even though they know how to talk, they don’t know how they know how to talk.

    Some have said that computers will only truly be intelligent when they can produce great works of arts and science. This definition eliminates the vast majority of the human population.

    Oh well.

    As always, just my USD 0.02 worth.

  32. Most of the discussion in the article relies on the FALSE assumption that HUMANS actually understand things without prior experience.

    The Chinese Room is a perfect example. You left out part of the scenario.
    The person in the room needs to KNOW that they are supposed to look these things up in a book.

    If the person inside the room is not “intelligent”, then the same reasoning applies to all humans, all the time. We are simply executing a much larger set of instructions.

    A baby raised alone in a room, with no outside interaction, cannot learn to read if they are given a book. They lack the basic concepts required to handle a book. They would need to know that words represent ideas, that words are made of sounds, and that symbols on a page represent sounds.

    Until we have the ability to make machines that can handle larger “programs” (orders of magnitude larger), we won’t be able to consider them “intelligent”.

  33. When you have to reason with the machine in order for it to carry out your task, instead of simply telling it to do so, I believe then you will have AI. Children do not have a wealth of knowledge like Watson does, but they have the ability to learn, to apply that knowledge to daily tasks, and to ask why things are the way they are.

  34. AGI (Artificial General Intelligence), or at least some semblance of it, will emerge when the use cases for disconnected, sensing systems reach maturity. I see the need for robotic space probes beyond the 100 AU range to create self-motivating and goal-seeking intelligence, where they must be directive-based, too far away for imperative-based commands. Deep Learning and similar schemes are really just optimization strategies to focus existing information in new ways.

  35. The reality is that man does not have a real need for AI – or the exact equivalent of humans. We have a curiosity for it, like exploring the cosmos – which perhaps could even be translated as a curiosity about how our own brains really work. However, beyond the need for actual AI, there are real needs (or desires) for more autonomous machines that require better abilities to interpret, predict and react to one or more situations within a given environment, and even better sensors to facilitate this. We may even need a machine to speculate, learn from and adapt to a situation. But these needs are real drivers for technology, not for actual intelligence. For example, we want or need a car that can drive itself, maybe even more safely, in any situation a human can reasonably drive in; but we don’t need those cars to be the equivalent of adult human beings. Even for exploring the distant cosmos, we may need an automaton to gather information for humans (maybe even be a representative for them), but not be a human.

    So, instead of trying to tie down what it takes to be human (philosophical, medical and curiosity perspectives aside), I wonder what science would come about if we focused our thoughts on what types of interpretation, prediction, reaction and adaptation capabilities are needed, and from there identified patterns and building blocks from which we can build whatever automatons may be desired.

  36. Wow, some really good stuff in these comments today. The Turing test is just fine, but I think people need more imagination in what they ask. My favorite “are you an AI” test is in Daniel Suarez’ book Influx. Two humans are conversing over an audio call, and one asks the other to “describe the scent of your wife’s vagina.” The idea being that an AI (in the context of the book, anyway), being more literal-minded than a human, would try to describe such a thing, where a human would just be like “WTF!”

    “Intelligence” as we usually think of it is certainly made up of different things.

    My question is, are we actually intelligent, as we believe, or is it just an illusion made up from the weird chemical-electrical reactions taking place in the lump of meat between our ears?

  37. Several basic areas here require fundamental definition, and I can only offer a personal viewpoint that is not completely satisfactory; but it is a bit different from what is currently assumed and may be a contribution to the thinking. It is based only on my own examination of how I function, and I have no reference to any authorities.

    Primarily I doubt any of us “lives” in what might be termed the real world. I am not a solipsist, and I work on the assumption that there is a reality which supplies stimulation to our sense apparatus; but that input to our nervous system is highly modified by the transmission apparatus, so that whatever information finally reaches the mosaic of various mental organs for observation and decision has been severely shorn of whatever genetics and evolution have evaluated as having no function in our survival and success as creatures living in a world with many negative phenomena.

    Although the brain is undoubtedly alive, it never experiences the raw outside world. We see not with our eyes but with our brains, and the stimulus from the eyes is thoroughly refined and processed by the eyes themselves, by the lateral geniculates on either side of the brain, and by other auxiliary brain organs before it even reaches the main visual section, the occipital lobe. This is true for each of the senses, so that what the brain integrates into the virtual reality which it accepts as reality is already highly modified by the necessities of survival and the individual experience of the organism, and that is the illusion which each of us experiences as the “real world”.

    My point is that the brain fabricates an integration of the various abstracts it obtains from the sense system into a virtual universe which is continually updated by incessant input from what I assume is the outside world. But the massive information available must be parsed to discard what experience and genetics and evolution have characterized as noise. Obviously this discrimination effort differs not only from one species to another but no doubt varies between individual humans.

    It appears to me that at least one basic aspect of intelligence is to recognize comparative similarities between diverse patterns. A pattern is a relationship between component elements, and these elements can be graphic, aural, temporal or any other perceptive relationship; the size of the array of different types of elements perceived, and the rapidity with which the commonality is perceived, is also an aspect of intellect. Mathematics is exceedingly useful in setting up the nature of the abstracts involved in the relationships and in divorcing them from their particularities. In my own thinking, the size of one’s library of abstract relationships is also a factor in establishing commonalities.

    Recognizing a pattern and relating it to other patterns betrays its possibilities for outcomes, so it is essential in determining probabilities. But differences in relationships are just as important as similarities, and the weighing of these two factors is the key to establishing successful or dysfunctional reactions. This is my mechanism for thinking. Most interesting is that a good deal of this evaluation takes place below the level of conscious thought, and it can be heavily influenced by previously accepted patterns and emotions which are not necessarily rational but the result of previous experience or of relationships to accepted viewpoints.

    In general, then, to return to my initial point, our assumed reality is an artificial construction of the nervous system which we use to perceive and manipulate the factors which sustain us.

    The sense of self which places us within this artificial construction is a tool which orients us and gives us a reference point in our artificial reality. Our actual physical reality uses this self as a tool to deal with the real world outside and this tool inflates itself to presume it is the central factor whereas it has, at best, only a very partial informational base within the virtual reality where it exists. There is a huge amount of body machinery which is totally unknown to this artificial self. And it is my guess that the physiological and neural systems of the actual body may have an overall sense of self that is aware of much more of the inputs to and outputs from this basic machinery. As someone who has participated in creative activities of one kind or another I am sometimes surprised at concepts that seem to arise from nowhere but are both novel and totally apt in problem solving and this seems to indicate sophisticated thinking originating in the basic unconscious totality underlying the artificial reality within which I seem to exist. This coherent being possessing extensive understanding beyond my own may be misunderstood as a supernatural intellect but it is merely a cognitive basic mechanism underlying the collective body function of my existence.

    A computer may have some of the basic functions of a human being but this central artificial virtual reality which underlies everyone’s existence is not necessarily manufactured within the normal computer although there is no reason it could not be. And the same is true of this artificial “self” which is a component of individual humans and probably much of other life forms.

    I must reiterate that I have little actual technological experience in these matters so I have no idea if these conjectures have any validity. They are merely exploratory thoughts.

    1. There are a few key features of the mind/brain that you have overlooked, the recognisers are hierarchical, some sense patterns of recognised patterns, and not just sensory ones, and awareness is temporally diffused in that we are aware of the past in varying degrees according to context while comparing it to what we think is now, (which is actually also the past, just more recent) and finally we project into the future, we simulate multiple outcomes based on our other areas of awareness. You can treat memory as a sense organ too. It is this ability to predict future events by simulating the flow of key events faster than they actually happen that gives us an intelligence metric. Our sense of self is the mechanism whereby we make choices as to the value of a range of actions suggested by our powers of prognostication. I recognise that I decide, therefore I am, and I am what arbitrary choices I make amongst otherwise equally weighted choices.

      1. I appreciate your corrections and the insertion of the relative assumed probabilities of various outcomes of reactions. The mind seems to me to be a huge complex of interlinked patterns which key into each other in ways determined by logic and emotion and, as you indicate, by the hierarchical relationships involved; not only true memory but distorted memories with all sorts of almost undetectable minor powers. What has fascinated me lately is the recognition that a human creature, as well as, no doubt, other life forms of complex cellular structure, becomes a unique biological experiment, in that they are not mere genetic developments from a single species source but a complex environment of human cells and microbiomes and viromes whose interplay depends upon many accidental factors outside the internal command structure of inheritance. Those gut populations within the human digestive system exert a modicum of emotional influence on the central nervous system, and there are no doubt necessary vital relationships of the same sort within the other basic body systems. It is known that certain ants can become infected with viruses or bacteria that make them behave in manners deleterious to their normal survival, and it is rather probable that humans as well as other creatures interact in all sorts of odd ways not built into their genetic command system. Much of this is wild conjecture, but not impossible.

        I doubt actual decisions and reactions are ever mere random selections from what might be considered equal evaluations. A tossed coin has a negligible possibility of landing and remaining on edge. Probably all final decisions, even very close ones, depend on some kind of deciding micro-factor.

        1. But I didn’t use the actual word random (for good reason), because always favouring a given hue of blue can be a deliberate, yet arbitrary, choice made due to my sense of self and memory of past choices – not completely random at all, simply arbitrary. Can you see now how it is so closely connected with our sense of self? And yes, it is a bit of a chicken-and-egg situation: do we make random choices that we then continue to favour, or are we making arbitrary choices because we associate them with who we are, our sense of identity? Perhaps it is not as constant as that either; one day I let the decision fall on the “whatever I did last time” circuit, and another day I may have the resources to consider the question more rationally, thus leading me to make a different decision that I then continue to favour from that point onwards, with perhaps some retained fondness for earlier choices that has a greater influence when I am reminiscing and my awareness is temporally shifted more toward memories and away from my sense of now, or my considerations of the future. Can you also see now how important this temporal focus is to our sense of self?

          1. I agree that my misunderstanding of your use of the word arbitrary, as distinguished from random, is my error; I assumed the two words were pretty much equivalent. The temporal, in the sense of confluent memory influence, is a strong factor in decision making, and, at least to some extent, our interaction with the environment does have random factors involved.

            The sense of “now” has remained a deep puzzle to me in human experience, since my questions to theorists on the matter have evoked only the answer that “now” is an illusion and there is smooth continuity in the accepted four-dimensional perceptive universe. A current article in the November issue of Scientific American, titled Perception Deception, indicates that even the spatial “here” is a delusion, completing a fairly total sense of delusion in a fairly long life. What appears to me to be total idiocy today in the social, political and economic systems now raging throughout the world is joined nicely with basic psychological perception to form a seamless confusion of totality.

          2. I may have had the illusion of being human before I became more aware of humanity but I eventually accepted I certainly do not have much of anything in common with the species as it currently behaves.

        2. Don’t kid yourself. The mind-body illusion is nonsense. The consciousness is a tool of the underlying mind-body (which is a single entity) infected with a silly psycho-virus called hubris. Just try to decide not to sneeze or cough when the totality demands it. Pissing can usually be tamed, but in the long run it always wins. More subtle reactions may lead you to delude yourself, but in the long run taking charge requires something more than simple conscious decision.

  38. I, unfortunately, have to live with the mind I grew up in, and amongst other greased pigs it seems to be the most slippery. It has been perceptually demonstrated, for instance, that a batter in baseball cannot see an approaching fastball, considering its velocity, so the mind constructs a “now” that is visible and theoretically valid for the ball to be in a position to meet a properly aligned bat. We “see” what the mind designs us to see, and I doubt the phenomenon is confined to baseball. “Now” becomes a utilitarian fantasy and, like other human fantasies, is frequently in error. Which is why batters often miss the ball and the world is a mess.

    1. Uh no, if the batter misses the ball it is because they let their mind get between the bat and the ball. The mind can’t exist except in reflection, there is no room for mind as one’s awareness comes as close as possible to “now”. “Mind” and “Awareness” are two separate things.

      1. When we are new to the world we have much to learn, such as balancing when standing, tying shoelaces, and how to use a fork and knife; later on, learning to swim and ride a bicycle and eventually to control a car with a gear shift and foot brake, and how to compensate for the torque of a propeller to keep in level flight. As we memorize the reactions to these daily challenges, they become like the macros of a computer keyboard, surrendered to a simple desire to accomplish success in a situation. They retreat from the conscious mind but remain as tools to be utilized at the proper time. But to deny they are part of mind is pushing it a bit far. They are as solidly mind as any conscious thought.

        1. Nope, such functions are handled by the cerebellum, so well that some people can sleepwalk without even being conscious. It is our neocortex that makes us conscious. Animals can have muscle memory and reflexes without being conscious; a cat does not think about landing well when falling, it doesn’t need to think about it. Animals exist very close to “now” all the time and therefore have very limited planning ability; they do not have “executive functions” even remotely near the level that we do. Remember, the temporal diffusion of awareness is how we predict the future in order to try to influence outcomes; that is our mind operating. The mind decides to play the game and knows the rules about hitting the ball, but at some point it must let go and let the body/lower brain do the work in order to ensure it can move fast enough. There is nothing mysterious or complex about it; it is just the best of both survival strategies operating in concert.

          1. Then, it seems we disagree as to what we mean by “mind”. Evidently you allocate mind to only particular sections of the brain. I extend it to the whole network, including whatever automatic muscular reactions are necessary for semi-automatic behavior patterns. I see it as a totality, since it is the interaction of viable patterns keying each other to energise the organism for successful activity. As I mentioned before, I even include alien colonies within the organic system if they affect decision making. To push it even further, even a wristwatch or a cell phone or a diary can be acceptable in some circumstances. I see mind as pattern interaction, and all sorts of stuff have to be considered. My familiarity with many animals has demonstrated clear if somewhat limited planning capabilities. They are far from mere immediate-reaction machines.

          2. Oh, it is worse than that, because we are not even disagreeing; we are not sharing the same conceptual ground if you don’t understand what I mean at all. The mind is not in the brain or part of it; it is an emergent phenomenon that requires a neocortex as a substrate. The parts of the brain that are essentially the same in animals that do not have a mind are clearly not a requisite for a mind to emerge. Enough artificially grown neo-cortical structures and their connections should be sufficient for a mind to exist; all of its inputs and outputs could be synthetic, even simulated (as our natural ones are, in a way, due to their delayed and abstracted characteristics). All cells outside of the mind’s required substrate are just part of the mind’s environment, and the fact that they influence the mind does not prove that they are part of it, else you must acknowledge that I am also part of your mind.

            As for animals, you are confusing scales of relevance: some planning does not imply they have anything close to a relevant and comparable functionality to that of a human. In fact a lobotomised human does not have that functionality either.

            I think that the differences in our views and understanding are in part due to several logical flaws in your thinking (as described above); addressing those would require you to re-evaluate much of what you believe, and if you can do that I would be interested in what remains and what new insights you have gained; otherwise this dialogue is diverging and becoming pointless.

          3. But I do consider you part of my mind, just as anybody else I interact with. If you were not part of my mind, for me you simply would not exist. Everything I know and understand and acknowledge must be part of my mind, or I could not interact with it. I find you most interesting, and that we disagree is what I find most useful, since it opens possibilities for me to change. Whether I have any further interest for you is not my problem, except that it permits me access to a different viewpoint.

            I am very roughly guessing, but you seem to view the mind as something somehow immaterial, and I do not think anything immaterial has portent in my existence. But I am always willing to listen. I do think you under-rate animals.

            Humanity, whatever its assumed superior equipment, does have special qualities, I acknowledge, just as a clam, which can exist by breathing in nourishment (something completely beyond my capabilities), has a special place in nature. Nevertheless humanity seems to be rushing to destroy its possibility of existing on this planet, so I am doubtful of the value of intellect.

            It is getting late this morning here in Helsinki and tomorrow is a holiday so I must stop this and go shopping. Thanks for the conversation.

          4. ” I do think you under-rate animals.” and yet you offer no evidence to support that. I already explained that your problem is with the scale or significance of what they do, you ignored that completely and just made the fallacious “you are just wrong!” type reply. Why are the simple strategies of animals, learned at best through trial and error, on the same scale as the ability of the human mind to create theorems and then systematically prove and apply them in order to ease or prevent suffering they perceive that others may face. Even if the behaviours were equivalent (and I can’t see how they are) there are still orders of magnitude difference.

            Your world view seems confused: on the one hand you are you and the world of man is a mess, full of fools; then you claim that it is really just an extension of your mind. Perhaps you are just tired and have tried to fuse two logically incompatible paradigms?

          5. I in no way indicate that animals are at the level of inferring abstractions and linking them at the profound and intricate level of humanity, but there are clear indications that much potential of creatures other than humans is being totally dismissed here. See http://ngm.nationalgeographic.com/2008/03/animal-minds/virginia-morell-text wherein a serious effort was made to plumb animal capabilities. I did not attempt to prove anything with my statement, but merely to catch your attention that possibilities exist. A momentary dip into Google revealed the site I indicated. There must be more, and if you are truly curious you can discover them as easily as I did.

            There is no general condemnation of all humans as fools in my observations, since if that were so I would not waste my time on the internet. The doubtless brilliance of much of humanity far outshines anything I may have attempted or been capable of in my time. But whatever the distribution of true wisdom throughout the species, the people who control the major social resources and activities are so outrageously ravaging the potentialities of the planet that we live in an era wherein much of its fundamental sustaining quality is being completely destroyed, for meaningless benefits to a small human sector who care nothing at all for the bulk of humanity nor any other life form on the planet; that requires some sort of judgment about humanity. That humanity as a whole permits this type of planetary suicide demonstrates that the envelope of characteristics of the species leaves little or nothing to admire. It seems to me you must be well aware of this, and I wonder at your calm acceptance of current inclinations.

          6. “There is no general condemnation of all humans as fools in my observations since if that were so I would not waste my time on the internet.” Uh huh, but then look at some of what you have written. This thread is a train wreck now because you’ve started talking crap and telling lies rather than admit that perhaps you are wrong about a few things, important things, or that your world view has logical inconsistencies. This makes the quote above ironic, because if you are going to behave like that you may as well just talk to yourself anyway.

          7. There is, quite evidently, a tone of insult in your comment on my understanding that the total of humanity adds up to a strange form of self destruction. It seems to me that my understanding of the massive forces of hatred and ignorance and wild fear and ultimate control of the basic forces of nature readying themselves to be unleashed is quite clear to you as well so, to protect yourself by denial, you toss my reasonably founded conjecture aside for your own comfort. I can understand that.

            There is an old aphorism sourced in ancient Greece which says “Whom the gods would destroy they first drive mad”.

            But my point about the nervous system’s standard process of manufacturing tools in the unconscious to use as elements of thought is a basic one and seems important to me. We assemble these constructions both consciously and unconsciously into profound patterns for complexes of comprehension. They are the basic elements in the total matrix of our outlook. Whether they exist in the brain itself or reside in coordinated musculature seems to me to be inconsequential. The brain does not see the world; it only possesses an immense collection of abstracts which it assembles into a probable mosaic. An active brain continually updates this abstract collection to make acquired anomalies acceptable, and that is the best we can do. Whatever really exists in the universe is a mysterious source which we continually attempt to perceive, but even today, with the best minds and instruments furiously reconstructing this massive mosaic, there is much left to be encountered and comprehended. There is nothing unreasonable in that.

  39. Perhaps it is slightly appropriate to think back on an early fictional attempt at artificial intelligence, and concede the wonder of a humanity with possibilities beyond the historical horror and current utter foolishness that reaches out to extinction with such eager ferocity and provokes such immense regret and despair over lost possibilities.

    Frank In Contemplation

    They call me Frank these days
    And the name implies me many ways.
    My character is blunt, somewhat unswerving.
    My features rather crude, I am a creature
    Of many parts, they say, unnerving
    In random chaotic fashion. But, anyways,
    I function. Admittedly with little passion.
    Those hormone fires sparking desires,
    That smolders into what inspires humanity
    To love, to hate, to insanity, to inanity,
    Do not reside in my inside.
    My thoughts have space,
    Do not jumble or collide.

    I am a spare parts man. My maker
    Doctor Frankenstein, gathered fingertips,
    A fine array of noses, lips,
    A box of ears and bellybuttons, fifteen,
    Pink, well formed and quite clean.
    My bones had lain with frozen stones
    For decades, disinterred but well matched
    And sturdy. Three from an acrobat, one,
    A delight, once lived inside a knight. Two patched
    Out of pieces from a horse, a cat, and just for fun,
    Two from a calf
    And one from a giraffe.

    Am I human? Mostly, I would say.
    But can any normal human say more?
    Speaking Frankly it seems not.
    Any peek into the random mind
    Would find, perhaps a common spot
    Where each could join, relate.
    Happily to twist and knot.
    But minds are vast topologies
    Teeming with mythologies.
    Here and there a mountain peak
    May glisten in the light
    Of clean perception,
    A point to guide the wild ride
    We all endure for reception
    Of markers inside
    To know what’s wrong,
    Or what might be right.
    But deep down low, below
    Where fantasy is spun,
    Where hot blood must run
    With energies that spark and glow,
    Where frigid caverns harbor fears,
    Stalactites bleeding tears,
    Strange pallid creatures spawn and grow,
    Blind, with trembling antennae feeling
    To supplement their senses, reeling.
    Here is where our mind appears,
    Here is where the join begins,
    Where necessities and desires
    Ignite to free their eager djinns.

    Being thus, both minus, plus
    In fragments of humanity
    I teeter in my loyalties.
    Inflections there roil and muss.
    Internally no royalties
    Dictate my state of insanity.
    My mind, from the good doctor’s hand
    Was pieced in ways, sometimes grand,
    Sometimes out of opportunity
    From a mélange community.

    Centrally there was the plan
    To integrate disparate parts
    With surgic skills and arcane arts
    To merely duplicate a man.
    But my baron had a mind
    Of extraordinary kind.
    His thoughts were rather wild and free
    That wandered into rare country
    And harnessed serendipity.

    He viewed the brain as working space,
    A foundation kind of place, a base
    Whereupon to erect, construct, and intervene.
    Intimations, cross connections, strange collections
    From exotic sources. Monkeys, mice, even horses,
    No sense to be conservative, release creative forces
    And sweep the whole horizon on the biologic scene.

    With appreciation and surmise
    He snatched the brains for eagle eyes
    And to set the world agog
    Applied the slimy senses from a frog.
    Out of a squid he stole great nerves
    Laid out in lines, tangles, curves
    To olfactions from a dog.
    Thus it went, adventure bent,
    And no particular intent
    But merely elected eclectic enterprise
    To appropriate variety to human guise.

    So thus am I constituted
    In ways strange and convoluted
    Some parts blatant, some more muted
    To contain within my brain
    Much surmised and quite a bit
    Simply grabbed and uncomputed.

    But now the doubts, most elegant,
    Are running out in this rant.
    Am I animal or plant?
    I really cannot say.
    A few genes from mushrooms
    Were inserted
    (Some upright, some inverted)
    Fitting in quite alright
    So I’m mildly saprophyte.

    The conclusion, in confusion, comes to admit
    I’m a bit of this and that most adroitly fit.
    My claim to humanity, although sincere,
    Based on just my form is not too clear.
    I walk like any bird or man
    Converse like any parrot.
    My fingers are slightly thick
    Resembling a carrot.
    I cannot classify my thoughts
    As fish or fowl or oyster.
    Some ideas float to me
    Not fitting for a cloister.
    My mosaic being borrowed with great plunder,
    Is strange undoubtedly, and something of a wonder,
    It partakes of living things, a smorgasbord of life.
    Nothing clear nor direct, not any absolute,
    Not more human than an ant, or, perhaps a newt.
    I am a universal, a poem said to living,
    Proteins intermingled and delightfully forgiving.

    It’s not a bad thing now, amidst our human fighting
    To be a being out of many, accepting, not benighting.
    All living things, derive their wings,
    Their eyes, their ears, their hearts,
    All their bones and working things
    From each other’s working parts.
    For life is made to see, to hear, to dance in sunlit joy.
    It matters not what parts you’ve got
    Or what you might employ.
    We live, we love, we reproduce,
    We are of Earth and air,
    We’re born to laugh and love and sing
    And strike away despair.
    I am a being of all of us that walk or swim or fly,
    Exist in space, seize this time that flows so quickly by.
    I am you and you are me, it’s all so very clear.
    Our time is always merely now, our place is always here.
    So join with me in ecstasy to surely be aware.
    This world is made to be played, intensively to care.

  40. I must admit that I’m getting lost in the depths of this discussion, so I know that I’m pulling things together a little out of context.

    [Jan Sand] said, “a batter in baseball cannot see an approaching fast ball considering its velocity, so the mind constructs a ‘now’ that is visible and theoretically valid for the ball to be in a position to meet a properly aligned bat.”

    This is an example of a learned physiological process that happens too fast to “think” about. In that sense, it is a process we create, much as we program computers. The arms and wrists don’t need to be aware of the ball, just as a computer doesn’t need to be aware that the numbers it processes represent a factory’s inventory, or which real-world processes are in play in an autopilot. The arms and wrists (and the rest of the body driving them) execute a subroutine, and the mind decides, based on the outcome, whether that subroutine needs to be adjusted for future use.

    So is the body intelligent when it’s carrying out this subroutine? No, it’s not. And it doesn’t matter whether the body is meat or metal – we manipulate machines as parts of subroutines as well, as in the complexities of driving a car or compensating for propeller torque when flying a plane. So when we create machines to do tasks better or faster than we can do them ourselves, we are only creating part of the process. To a certain extent we can program machines to tune certain parameters so that their behavior changes in the future, but we leave the overall “closing of the loop” to human minds.
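    To make the “subroutine” analogy concrete, here is a minimal Python sketch; the swing_delay parameter, the correction factor, and the pitch timings are all invented for illustration, not taken from anything above. The inner function executes blindly, while the outer loop plays the role of the mind, nudging the stored parameter after each outcome:

        # A toy model of the analogy above: a fast, unthinking inner
        # routine with one stored parameter, and a slower outer loop
        # (the "mind") that adjusts the parameter after seeing the outcome.

        def swing(ball_arrival_time, swing_delay):
            # The "subroutine": commit to a swing using the stored
            # parameter, with no awareness of what the numbers mean.
            return swing_delay - ball_arrival_time    # timing error, 0 is perfect

        swing_delay = 0.30                            # stored parameter, seconds
        for pitch in [0.42, 0.40, 0.43, 0.41]:        # observed ball flight times
            error = swing(pitch, swing_delay)
            # The mind "closes the loop": nudge the parameter to reduce error.
            swing_delay -= 0.5 * error
            print(f"pitch {pitch:.2f}s  error {error:+.3f}s  delay now {swing_delay:.3f}s")

    The body-as-subroutine is a single line with no judgment in it; everything that resembles deciding lives in the update step outside.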

    We do many things faster than we can think about them, both physiologically and with mechanical aids, but this leads me to the question of whether there’s a limit to the speed of thought. Even if artificial machines can be made to think, what limits will this impose on them? [siluxmedia] suggests that machines can be made to replace humans in areas where humans aren’t very good, and his examples are rapid processing of images and decision making based on them. But once you get machines to the level of understanding at which we humans process data, will the machines really be any faster?

    1. There is somehow the assumption that an automatic reaction occurs without the participation of the central nervous system. But the preparation and initiation of that automatic reaction cannot occur without the central nervous system triggering it. Just as the automatic flight of a bullet requires only that the explosive charge go off, a trigger must still be pulled before that takes place. There is a deep integration of initiation from the CNS before the action occurs. Visual perception and timing are an intrinsic part of the batter hitting the ball, and since the ball travels faster than the nervous system can react, the nervous system must internally fabricate the perceptions needed to correctly connect with the ball.
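    A couple of lines of arithmetic show what that internal fabrication amounts to; the latency, position, and velocity figures below are assumed numbers, chosen only to make the point:

        # Illustrative only: with sensory-motor latency, acting on where the
        # ball *is* would always be late, so the perceived "now" must be a
        # forward prediction of where the ball *will be*.

        latency = 0.1       # assumed sensory-motor delay, seconds
        position = 10.0     # ball's observed distance from the plate, metres
        velocity = -40.0    # metres per second, toward the batter

        predicted_now = position + velocity * latency
        print(predicted_now)   # 6.0 m: the fabricated perception the swing is aimed at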

  41. My point, which seems to underlie at least some of the disagreement, deals with the definition of what is considered thinking. Electrical processes are much quicker than nerve impulse transmission. A computer search through a library of possibilities is faster than a mind can sift through the same material. So it seems likely that real computer thought processes will far outpace their human equivalents.

    But the basic processes seem to be different between humans and machines. It has recently been discovered that pigeons can distinguish between microscopic views of cancerous and healthy tissue as well as humans can. (http://www.bbc.com/news/science-environment-34878151) Chess champions do not use the same processes as computers in discovering successful moves. I presume that experienced humans use pattern recognition and need not run through the multitudes of comparisons computers must make to be successful. Although computers can still vanquish chess champions with these less efficient processes, if computers could be built to use the same pattern recognition humans do, they would likely be much faster.
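    The speed argument is easy to demonstrate. In the sketch below (a loose analogy only; the dictionary stands in for stored patterns, and the numbers are arbitrary), an exhaustive search must compare against every possibility, while recognition is a single lookup:

        import time

        candidates = list(range(1_000_000))
        patterns = {c: True for c in candidates}   # stands in for learned patterns
        target = 999_999

        t0 = time.perf_counter()
        found = any(c == target for c in candidates)   # compare against everything
        t1 = time.perf_counter()
        recognized = target in patterns                # match the pattern in one step
        t2 = time.perf_counter()

        print(f"exhaustive search: {t1 - t0:.5f} s ({found})")
        print(f"pattern lookup:    {t2 - t1:.7f} s ({recognized})")

    On most machines the lookup is several orders of magnitude faster, which is the point about pattern recognition in a nutshell.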

    I consider subconscious, stored automatic reactions a valid form of thinking. Evidently, some people do not.

  42. There is an interesting site at http://www.nap.edu/read/1032/chapter/4 which is relatively non-technical and investigates the problems of determining and transferring the basic elements and procedures of operating intelligence. It is not an easy analysis, since many of the prime factors are still only vaguely determined, but it does not seem to disagree with much of my own perspective on the matter.

    As somebody who has operated most of his professional life in a form of design, I have been strongly influenced by visual and graphic patterns. I have learned that many theoretical scientists also seem to have a good deal of this orientation. Mathematics has frequently been characterized by two basic approaches, the geometric and the algebraic, although the two conjoin in many mathematical fields. My emphasis on pattern recognition and abstraction to bridge comprehension across many wide fields conforms to my own inclinations and may not suit minds with other orientations. The above-mentioned site somewhat describes these differences, but it concentrates more on spanning widely disparate disciplines to convey sought-after creative thinking skills. The more skilled a person is at abstracting diverse patterns into higher abstractions that can be matched across an extensive field of different disciplines, the better this correlates with effective thinking and, perhaps, higher intelligence.
