Ask Hackaday: The Turing Test Is Dead: Long Live The Turing Test!

Alan Turing proposed a test for machine intelligence that no longer works. The idea was to have people communicate over a terminal, with another real person and with a computer. If the computer is intelligent, Turing mused, most people will incorrectly identify the computer as a human. Clearly, with the advent of modern chatbots, that test is now broken. Despite the “AI” moniker, chatbots aren’t sentient or even pre-sentient, but they certainly seem that way. An AI CEO, Mustafa Suleyman, is proposing a new test: The AI has to take a $100,000 budget and earn $1,000,000.

We were a little bemused at this. By that measure, most of us aren’t intelligent, either, and it seems like this is a particularly capitalistic idea. We could probably write an Excel script that studied mutual fund performance and pull off the same trick, given enough time for the investment to mature. Is it intelligent? No. Besides, even humans who have demonstrated they can make $1,000,000 often sell their companies and start new ones that fail. How often does the AI have to succeed before we grant it person status?
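
For a sense of scale, here is a back-of-the-envelope sketch of that trick (Python rather than Excel; the return rates are purely illustrative, not a claim about any fund):

```python
# Years for $100,000 to compound into $1,000,000 at a fixed
# annual return r: solve 100_000 * (1 + r)**n = 1_000_000 for n.
import math

def years_to_10x(r):
    return math.log(10) / math.log(1 + r)

for r in (0.05, 0.07, 0.10):
    print(f"{r:.0%}/year -> {years_to_10x(r):.0f} years")
# 5% -> 47 years, 7% -> 34 years, 10% -> 24 years
```

Decades, in other words – patience, not intelligence.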

Alan Turing never imagined chatbots when he proposed his famous test

But all this raises the question: What is the new test for sentience? We don’t have a ready answer, and neither does science fiction. Everything from The Wizard of Oz to Star Trek has dealt with the question of when a robot or a computer is a being and not a machine.

Sentient AI and Pornography

As Justice Stewart famously said about pornography, “I know it when I see it.” Perhaps the same is true here. You would be hard-pressed to say that Commander Data from Star Trek was not sentient, despite the ongoing in-story debate. But why? Because he could solve problems? Because he could love others? Because he could grow and change?

This is a slippery slope. Solving problems isn’t enough. You need to solve problems creatively, and that’s tough to define. Dogs love others, but you don’t consider them truly sentient. All sorts of plants and animals grow and change, so that’s not directly it, either.

Claude Shannon Roots for the Machine

Claude Shannon once said that it was clear machines could think because we are machines, and we think. However, it is far from clear that we understand where that indefinable quality comes from. Shannon thought the human brain had about 10^10 neurons, so if we built that many electronic neurons, we’d have a thinking machine. I’m not so sure that’s true.

That could be akin to someone unfamiliar with the concept of a computer saying, “Of course, a motor can do computation because your printer has a motor in it, and it also clearly does computations.” There may be mechanisms at work in our brains we do not yet fully understand. For example, the idea that our brains may be quantum computers is enjoying a resurgence, although the idea is still controversial.

IBM and DARPA have been building brain-analog computers for several years as part of project SyNAPSE. At last report, there was still no true electronic brain forthcoming.

Over to You

So what do you think? How can we tell if a computer is sentient or just a stochastic parrot? If you can’t tell, does it matter? For that matter, can you prove that any other human being other than yourself isn’t a clever simulation? Or, for that matter, can you prove it about yourself? Perhaps we should ask ChatGPT to develop a new Turing test. Or, fearing the implosion of the universe, perhaps not.

89 thoughts on “Ask Hackaday: The Turing Test Is Dead: Long Live The Turing Test!”

    1. Indeed. “Sentient” means “possessed of sensation”, meaning that anything that experiences any kind of qualia is sentient. It seems VERY LIKELY INDEED that a dog has qualia.

    2. I do as well, but in my opinion a dog is a borderline case with some dogs showing awareness and others just not.

      I think in evolutionary terms, there’s a continuum from “biological robots” like insects to sentient beings like people, where behavior is at first rigidly defined by “instinct” and then little bits of intelligence start to appear, first in isolated functions and then merging into a wholesale awareness.

      However, there is a barrier at the point where instinct becomes intelligence that is not easily passed by evolution, because it breaks the simpler behavioral mechanisms: failing to follow e.g. rigid mating behavior because your brain gets distracted by other things will see you eliminated from the gene pool. Even for people, if you’re playing golf and you start to think about your swing, you’ll miss the ball – thinking about it disrupts the muscle memory you’ve developed – being intelligent can be disruptive and counter-productive.

      Simpler animals have to do the same dance every time and no second guessing, which is why you can’t just evolve bit by bit to become intelligent – you need some other selection pressure or a bypass to allow intelligence to emerge regardless, which is why I think intelligence is a bit rare and most animals fall on the dumb side of this barrier. They have no biological use for intelligence or awareness.

      1. Even for people, there are two modes of behavior: deliberate and opportunistic.

        Opportunistic behavior is when you simply react to your surroundings, which may produce behavior that looks intelligent but is simply cue-and-response in principle. Long chains of complex actions can be produced simply by responding to cues one after the other, while your thoughts are completely elsewhere. Road hypnosis is a good example, or buying food at the grocery store when you go in without a shopping list.

      2. While dogs may or may not be *sapient*, there is no question at all that they are sentient. All mammals and avians are believed to be sentient as their behavior is too complicated to be possible otherwise. It is mostly just invertebrates which are believed to be non-sentient, and reptiles/amphibians/fish which are up for debate.

        1. That is to be determined. “Too complex” is a poor metric when you’re dealing with the gray area in between. People can perform some pretty complicated behavior without a shred of sentience – I know I have, after enough beers.

      1. Perhaps both humans & computers are subject to the prison house of the body, & as neither of us can create ourselves, we share the same status of being created on this Platonic line? The mark of all created beings is in the capacity to change. Matter. Turing’s test is relevant, but it shows our commonality: perhaps AI is an extension of us, & that can only be good. Ummm, unless you’re a capitalist, & if so perhaps a paradigm shift could get a bit of priority on the old agenda of us with free will & all that is good for the greater good etc., & don’t worry, as it’s all good, no need to catastrophise about evil, as that’s PTSD-informed gratefully & merely a shift to lower realms that of course is temporary / good too, & redemption will restore perfection. We’ve got this!!

  1. Testing for intelligence is a test on a sliding scale. Look at the folks you went to school with. Some will grow up to take gigs that don’t display a whole lot of classic ‘intelligence’, such as street sweeping, while others will grow up to be successful entrepreneurs, others astrophysicists, etc. Do we even know where the ends of the intelligence scale are? I’ve long, mostly jokingly, posited a scheme in which IQ points are a linear array, with each point having an associated action or understanding. Zero through five are things like breathing, digestion, sense of touch, etc., while an understanding of binary math is in the 130s, quantum spin states might be point 198, etc. How many of us know folks who have some serious top-end points yet are lacking a few in the 50-70 range where ‘common sense’ or ‘social skills’ lie?

    So the question could be viewed on a linear axis such as this. In that case, chat bots might be heavy on the center of the bell curve, but lack a key segment in the area of self awareness.

    This, to me, is the true nature of a modern day test, which is for sentience. This would include the ability to ask _new_ questions to which the answer is not known to the mind in question, and for questions which the mind has a _desire_ to solve.

    The large language models have none of this. They are simply a replacement for SQL. There is no internal drive to ask and to solve questions.

    1. I like the _desire_ to solve and/or plan as a criterion — goal setting. I’d add the ability to get bored and the need to play and/or create as well.

      Do LLMs wanna have fun?

      (@Derek: and by the above criteria, dogs are surely intelligent. Apes, dolphins, many birds, … people.)

      1. Intelligences have to be unique. When we talk about things being intelligent we talk about them “understanding” something. Understanding requires a unique perspective. Like, globally unique. I cannot say “I understand electromagnetism” and prove it by reciting Jackson’s classic textbook on electromagnetism. Note that this is the problem with using chatbots ‘passing’ human tests as a measure – those tests implicitly assume the thing that’s taking them has a unique perspective. If it’s not unique, it’s the equivalent of taking it an infinite number of times until it passes.

        That’s why I don’t see how you could ever have a chatbot be intelligent. They can’t be. They’re not unique. I mean, if you took a single ChatGPT instance and constantly kept talking to it over and over and over and it actively updated its training set on that, yeah, sure, then maybe you could consider it. But not as they are now.

        This was Lovelace’s argument against machine intelligence. Turing’s counter to it misunderstood the idea of what “novel” means.

        1. Why on earth would you assert that intelligences need to be unique? You’ve just created a criterion out of thin air and it’s nonsense. Do a simple thought experiment: if you made a copy of a person (doesn’t matter how, it’s a thought experiment), by your reasoning, the original is now suddenly not considered intelligent, despite being completely unchanged.

          Just because all the intelligences that exist now are unique because biological entities are born that way says nothing about that “having to be the case” for other types of intelligence.

          1. Using the same thought experiment, the moment a “copy” of some intelligence is instantiated, it evolves independently, so it becomes different, like identical twins, triplets, etc. Kind of like epigenetics.

          2. “You’ve just created a criterion out of thin air and it’s nonsense.”

            All criteria are created out of thin air. The word “intelligence” is totally made up. As for it being nonsense, I completely disagree: it’s from the etymology and reasoning behind the word “understand.” You cannot understand something without perspective.

            “Do a simple thought experiment: if you made a copy of a person (doesn’t matter how, it’s a thought experiment), by your reasoning, the original is now suddenly not considered intelligent”

            So twins aren’t unique?

            Stop thinking so three-dimensionally. If you create a copy of a person, before that time, there was one intelligence, and after that time, there are two.

          3. “Using the same thought experiment, the moment a “copy” of some intelligence is instantiated, it evolves independently, so it becomes different,”

            Yup. Exactly.

            I don’t even understand why it’s confusing – everything we associate with “intelligence” fits this. All the greatest thinkers gave new, unique views on things. We talk about people providing “revolutionary” ideas – these are ideas and things that are *unique*. If someone comes along and says “look, I figured out how to make fire with two sticks” we don’t go “wow, you’re a genius.”

            The entire *basis* of intelligence centers around things being new and unique, which is exactly why Lovelace argued against the Analytical Engine being able to think.

            And if you run the thought experiment, if you somehow created an exact copy of me *but the only thing it could do or say are things I’ve already said or done* – we’d call it a simulation, or a mimic, or a copy. The only way it becomes intelligent on its own is when it interacts and acts on its own. In other words – when it becomes its own unique copy.

          4. Maybe uniqueness is a bit too strict a criterion, but certainly independently derived fits the bill.

            It’s not impossible that two people come up with the same idea, whereas if one person comes up with the idea and then tells the other, that other person won’t receive the qualia that created the idea. It’s not their idea, so they’re simply reciting it.

          5. >when it becomes its own unique copy.

            Let’s make a thought experiment out of that. Suppose we copy both the person and the environment. Let’s put you in a room with a puzzle that requires intelligence to solve, then duplicate the entire setup. Two rooms, two puzzles, two of you.

            If the conditions are exactly the same, they should evolve forwards the same. Even if we argue that both evolve non-deterministically, it does not rule out the chance that both happen to do the exact same things.

            Since there’s now two of you, you’re not unique, yet intelligence is being applied per the premise. Therefore, uniqueness is not a proper criterion for intelligence.

          6. Why would you say intelligence is used? If instead of two copies, it was one million: would you feel the same way? You’d see the same problem being solved identically, over and over. You wouldn’t think “oh, that’s clever.” You’d think it’s obvious, or a simulation.

            The only reason you might think it’s intelligence is if you knew everything was identical, and if it was, why would you view the two copies as separate? It’s just two views on the same thing: like you’re viewing a recording.

            You might note that I’m treating intelligence as something that’s imbued by the observer, rather than intrinsic.

          7. “Since there’s now two of you, you’re not unique, yet intelligence is being applied per the premise. Therefore, uniqueness is not a proper criterion for intelligence.”

            You additionally could create a second argument on this by pointing out that while each individual is not unique, the *combination* of the two is unique, and so there’s still one intelligence involved.

            The important point there is not that an intelligence can’t be replicated *going forward* in time. It’s the entire set of events that can’t be replicated.

            Another way to think about it is in terms of signal processing. An IIR filter necessarily has a “hidden state” (which is what allows it to go unstable!) – while I may be able to duplicate the state of a human and continue going forward (even with the same information!) I cannot *recreate* the state *without* the state.
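
            To make that concrete, a minimal (hypothetical) sketch of a one-pole IIR filter, where the running value is exactly that hidden state:

            ```python
            # One-pole IIR filter: y[n] = a*y[n-1] + (1-a)*x[n].
            # `state` is the hidden state; the same feedback path through
            # `state` is what lets the filter go unstable when |a| >= 1.
            def iir(samples, a=0.9, state=0.0):
                out = []
                for x in samples:
                    state = a * state + (1 - a) * x  # the past lives in `state`
                    out.append(state)
                return out, state

            _, s1 = iir([1.0] * 50)  # a history of ones
            _, s2 = iir([0.0] * 50)  # a history of zeros
            print(iir([0.5] * 3, state=s1)[0])  # same future input...
            print(iir([0.5] * 3, state=s2)[0])  # ...different outputs
            ```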

          8. >Why would you say intelligence is used?

            That is assumed for the sake of the argument. It’s a premise.

            > the *combination* of the two is unique, and so there’s still one intelligence involved.

            That’s one way of looking at it, but I think that’s just semantics. One does not depend on the other and both are supposed to be intelligent. Whether you look at them in combination makes no special difference, so we may as well say they are intelligent separately, as they would be if the other did not exist.

          9. >The only reason you might think it’s intelligence is if you knew everything was identical, and if it was, why would you view the two copies as separate?

            Because that is the premise. You take one person and one environment, and split it in two identical copies. They are separate by definition, and both intelligent, without any logical contradiction – that’s the point of the thought experiment.

            That the two copies should be considered one is a different philosophical argument that says you can’t have two exactly alike circumstances without them being actually the same, but that is already ruled out because we assume – for the sake of the argument – that we can. Why not?

          10. “They are separate by definition”

            What does separate have to do with anything? Why does an intelligence have to be physically distinct?

            I mean, any intelligent being – even humans – are only “distinct” in an abstract, average sense – they all consist of computing elements that communicate with each other. Creating two copies of an intelligence, separating them, and subjecting them to the same problem is fundamentally no different than two portions of your brain considering the same problem independently.

            The other point to note here is that the disagreement we’re having is definitional: there *is* no definition of intelligence, so you can’t say “they demonstrate intelligence.” I’m saying whatever other conditions you put to demonstrate intelligence, uniqueness is required in addition to that in order to match the *idea* of intelligence that humans have.

            Everything humans talk about when we talk about intelligence is about *individualness* and *uniqueness*. Einstein was brilliant because he came up with a unique understanding – if I explain Einstein’s theories exactly as he did, while I might *appear* intelligent at first, with enough interactions with other people, that “shine” would wear off – because you’d come across other people who *also* experienced those theories but had *additional* experiences that add more information.

            But like I said – this is a definitional problem. Some people may be completely content to define an intelligence in other ways which *would* make something like a chatbot intelligent. I do not consider the ability to solve problems or answer questions in any way a demonstration of intelligence.

            It’s a lot like the problem with defining “life” – the classic example of “is fire alive?” I do not believe that any definition of “intelligent” will work without the entity being unique – namely, being able to have information available to it that no other intelligence can fundamentally have.

      2. >> I like the _desire_ to solve and/or plan as a criterion

        I like it too, but can one say whether a line-following bot that actively seeks out a line to follow has a desire to follow that line? How does one measure desire?
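
        For what it’s worth, the entire “desire” of such a bot can be a few lines of feedback. A hypothetical Python sketch (read_sensors and set_motors are invented names):

        ```python
        # Minimal seek-and-follow loop for an imagined two-sensor bot.
        # Sensors return reflectance in 0..1; higher means more line seen.
        def follow_line(read_sensors, set_motors, base=0.5, gain=0.3):
            while True:
                left, right = read_sensors()
                if left < 0.1 and right < 0.1:       # line lost:
                    set_motors(base, -base)          # spin in place to "seek"
                else:
                    error = left - right             # >0: line is to the left
                    set_motors(base - gain * error,  # slow the left wheel,
                               base + gain * error)  # speed the right: turn left
        ```

        Whether that loop “desires” the line is, of course, exactly the question.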

      3. By that measure my parrots are certainly sentient. They have goals (sneak out of their cages at night), strategize (unlatch a food door but make it appear closed), adapt the plan based on unforeseen circumstances (listen to make sure the coast is clear), then execute their escape. Their reward is startling the person who uncovers the cages in the morning, laughing mockingly at them.

      4. The hard part is sufficiently good pattern matching and modelling of reality, so you can solve problems. If you have that, then it’s easy to “close the loop”, and add agency with motivations and desires.

    2. Agreed, I’ve met a worrying plurality of people that would /fail/ the conventional Turing test. In particular, retail customers.

      As to the original problem, I like the interpretation from the webcomic “Freefall” (which focuses quite a lot on the questions of sentience, it’s actually a great read), saying that one major qualification is the ability to consider the viewpoint of another entity.

      One of the main characters, a humanoid wolf (it’s complicated), decides to ask a few robots “What does your name /smell/ like?”. One of the robots, a descendant of today’s Spicy Autocomplete LLMs, basically goes “what the file system check is wrong with you organics” and walks off. Another robot, with a hardware neural net operating similar to an actual silicon-based brain, actually has a rather human-like train of thought about it: dogs have a great sense of smell, robots don’t have a sense of smell at all, so the robot can’t be sure that names /don’t/ have a smell, therefore the only way to know for sure what their name smells like is to ask a dog.

      1. Smelling usually involves sensing a variety of chemicals carried in the air and identifying them as different from each other. Although robots do not generally have this capability, it could be built in.

        As for what a name smells like, consider the phenomenon of synesthesia.

  2. The cynic in me thinks that a possible human equivalency intelligence test for an AI machine would be for it to have to create an original output with actual malice … i.e., the intent to cause harm.

    In the meantime, I think the biggest near term danger of AI like ChatGPT is its ability to be used by humans to create convincing disinformation … pretty much the opposite of AI’s original charter. Never underestimate the ability of humans to corrupt anything.

    1. “In the meantime, I think the biggest near term danger of AI like ChatGPT is its ability to be used by humans to create convincing disinformation”

      This came up in the early days of digital photo manipulation as well. You could no longer trust a photograph to be an honest record of reality.

    2. How can an AI know that it’s telling a lie? The existing implementations certainly can’t, since their “knowledge” is a mixture of fact and fiction with varying trustworthiness. Everything mankind has ever written is at least biased in one way or another, and photos have been manipulated since their invention. Also, non-English sources are heavily underrepresented, so today’s AIs have a distinct American/British view of the world.
      As you said, “Never underestimate the ability of humans to corrupt anything.”

      1. Surely the Chinese and French and Germans and Japanese etc. are also developing AI, and we would not hear much about it in the English speaking media but it would be trained with their data sets.
        So there is some hope yet – if you know what I mean.

        And incidentally, talking of hope or lack thereof, there are sizable communities in other languages on Reddit and Facebook and such, but if that is used to train those non-English versions, would it be considered equally iffy?
        I do notice an increasing adoption of the current American madness one-on-one in Europe; there is less originality and less rejection of ‘outside’ idiocy. Various regions used to have their own madness, now it’s all getting to be one imported thing, even when the situation is completely different and it does not apply locally, they just ignore that. I think humans might be losing sentience (now there’s a plot twist).

      2. If I may interject, the issue you may run into is then asking what EXACTLY is defined as truth. If our mere relaying of information or our perceived truths is not considered truth due to our bias, then how can absolute truth be ascertained? I do honestly think, though, that through the sum of all information from all perspectives truth *may be* found, and maybe AI can help us solve that, but as you said it can’t be limited to the English data sets; at that point it may even need a whole new language created because of the nuances of words in different languages. To address another point, we can decipher what we assume truth to be by a scientific method; we have to practice things out and see how they play out in reality, and maybe advancements in simulations may lead to that? But to the point, AI is an amazing servant but a horrible master; everything AI will ever say needs to be pondered and thought about thoroughly before considering it truthful.

      3. Wow. You’re treading DEEP into political territory now. How does ANYONE (i.e. “person”) know what is true or not with today’s penchant for conflicting pedantry and pageantry (that’s some alliteration, huh?).

      4. You can judge if something is truthful by looking at the total information that is available (including foreign texts), and see how well it fits the data.

        The existing implementations do make mistakes, but they get an awful lot right. The part they get right will only get bigger in the future.

  3. It’s a better test than Turing’s, since it at least starts to involve the real world.

    I’d bet money that most people would cheat the test, though, because they would interpret this as “can a human, following an AI’s instructions, turn $100k into $1M” – whereas at least if you read the article, it’s *much* more hands-off.

    It also cracks me up that he thinks this could work nowadays. And in like, 2 years. Good luck with that. The basic problem with the idea is that if there is an easy, slam-dunk 10x investment (no, you don’t get multiple tries) I’d bet money one of the humans along the way steals it.

    I mean, seriously: “find a manufacturer on a site like Alibaba and then sell the item” – you’re already toast at that point.

    1. >> It’s a better test than Turing’s, since it at least starts to involve the real world.

      But what if one has no desire at all to turn $100k into $1M? Does that mean that one is not sentient? Why should a chatbot even care for money?

      I know plenty of intelligent people who would be satisfied with the $100K, and would rather spend their time doing something they consider to be more productive.

      1. I didn’t say it was a *good* test. Just better than Turing’s.

        In fact, Turing’s basic idea was that asking the question “can machines think” (or in more modern parlance, are machines sentient) is meaningless, because the concept of “thinking” (or sentience) is so ill-defined anyway, and there’s nothing magic about what humans do in comprehending the world.

        The downside to Turing’s suggestion for a replacement, though, is that it presupposes that the *interactions between humans* are a good benchmark. As in, oh, if a machine can interact with a human in such a way that a human can’t tell, well, it’s good enough. When, in fact, there’s a much better benchmark available: “interactions between the human and the universe itself.”

        That’s why I said it’s at least better – because it’s not focused on human interactions, but on interacting with the world. I agree it’s still not a serious test.

    2. This challenge has a fundamental problem, because the easiest way to pass the test is if the research team just gives the AI $1M. OK, well, you could say no one is allowed to help it or give it anything, but then even if the AI steals the money, someone would have helped it by letting himself be robbed. So if no one helps the AI, the only way to solve it is to build a dollar printer by itself, become the government, and make the rules so that it is not considered illegal money by anyone who is doing AI sentience research. At that point, one no longer even wants to know if the AI is sentient or not.

      1. Yeah, finding the appropriate guardrails to put on it is definitely an issue.

        Nominally you could modify the test to be similar to Turing’s, where many humans and many AIs are all given $100k, and if the AIs reach $1M at the same frequency as the humans, they pass the test (a sketch of how you might score that follows below).

        I *gladly* volunteer for that!
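
        A hypothetical way to score that comparison, as a pure-Python sketch (the cohort sizes and pass counts below are made-up examples, not data):

        ```python
        # Compare human vs. AI pass rates with a two-proportion z-test.
        import math

        def two_proportion_z(passes_a, n_a, passes_b, n_b):
            """z statistic for H0: both cohorts share the same pass rate."""
            p_a, p_b = passes_a / n_a, passes_b / n_b
            pooled = (passes_a + passes_b) / (n_a + n_b)
            se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
            return (p_a - p_b) / se

        # e.g. 12 of 100 humans vs. 9 of 100 AIs turn $100k into $1M:
        print(f"z = {two_proportion_z(12, 100, 9, 100):.2f}")
        # |z| < 1.96 -> no significant difference at the 5% level
        ```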

  4. LLMs don’t pass the *adversarial* Turing test, and they don’t come close. It’s trivial to tell you’re talking to an LLM if you’re actually trying.

    In fact, even if you put one into a chat with somebody from the 1950s who’d never heard of LLMs or the enabling technology, it wouldn’t take that many minutes of real conversation before they concluded that if it was in fact human, it was badly brain damaged. They might begin to entertain the idea that it was an alien or something. They would definitely see something wrong with it… and if they were told that one of two conversation partners was *not* human, they’d pick the LLM.

    My personal test for “human level intelligence” is “can it walk into a strange house and clean it, using whatever supplies are available, reasonably efficiently and without doing unusual amounts of damage”.

    … but NONE of those are appropriate tests for whether something should get any particular rights or any particular level of moral consideration. You could have something that was FAR more intelligent on EVERY possible test than ANY human, and that still shouldn’t get those. Suppose that it was intelligent but non-sentient: had no inner experience, and said as much, and could prove it. Suppose that it had no particular sense of self-interest, didn’t really care what happened to it, didn’t care to exercise whatever rights you might give it, and convincingly told you that, too.

    1. That (what you said) is the thing it seems like a lot of people miss when talking about AI. Even if something can do everything that a human can do and more, that still doesn’t give any indication that it has an “inner experience” as you put it. That there is anything behind the clockwork.

      1. How do you know that anyone else has an inner experience? This is a well-known problem in philosophy. I think if something could consistently act like a person and speak of its internal world, you have to consider that it has such an experience. Anything else is solipsism.

    2. In the 1940s, Erich Fromm said that we would eventually create machines that think like humans—but the effect would be less impressive considering we would first close the gap to the middle by making humans behave much more like machines.

      I think the guy from the 1950s who had never heard of an LLM might be better at detecting nonhuman speech than we would be.

    3. I think people get too bogged down in the details of the test. The Turing test is not meant to be a literal test, it is a thought experiment. To that end it hand-waves away the practical considerations of defining intelligence by proposing “I know it when I see it”. The point is to make the philosophical argument that if a simulation of intelligence can be completely indistinguishable from “real” intelligence, it *is* real intelligence. This is opposed to the view that intelligence depends upon some kind of internal property which is unknowable to an outside observer.

  5. I’d imagine that if you came up with a test that could tell if a machine was sentient, some people would fail it.
    Edge cases make things like this much harder than you’d imagine. When the (machine or person) is sentient, but doesn’t want to (or is unable to) act like it, what can you do?

    1. Yeah, people would not be ready for the incredibly dark unintended consequences of a 100% accurate sentience test. Such things should be left to the realm of thought experiments.

  6. I would propose that the AI should be able to run a simple business, similar to the idea in the article. A couple of months ago, I suggested a window cleaning business, but the idea was not well received on social media. Then again, MOST of my posts are generally not well received, so it’s hard to say.

  7. This article stated: “Solving problems isn’t enough. You need to solve problems creatively…” That’s close, but also not enough. There is a lot of randomness in “creativity”, as some of it is stumbled upon by serendipity (vulcanized rubber, antibiotics), but it requires “sentient” skill to recognize the benefits.

    My Echo/Alexa devices continually annoy me with some of their answers, or even their inability to answer at all, like “I’m not sure” instead of saying something “intelligent” like “could you rephrase that?”. I have a controlled light called “bed” and it constantly tells me “bedroom can’t do that” when I ask it to “turn bed on” (bed is a controlled light, and bedroom is a group for Alexa purposes). Part of the “problem” with these input devices is that the programmers aren’t “clever” enough. The “AI” behind them seems to rely too much on probability in the “whole” instead of the “context” (i.e., in its training of random question inputs, what is the most probable question presented to it?). I have NEVER asked it to “turn bedroom on” (or off), so why does it parse my voice input “turn bed on” (or off) incorrectly? It needs to devote some memory space to the things “I” ask, and not those from random sources. My habits are simple and repetitive, like when talking to a child, yet it screws up all the time. Strangely, it always correctly distinguishes between me and my brother, even though lots of people say “you guys sound alike on the phone”.

    1. I once asked my Google Chromecast to find the video for “Ass Crack Bandit”. Its response (after repeating myself 4 times until it got the wording correct) was “Please don’t talk to me like that”.

      1. I just asked Alexa and she gave a strawman nonresponse, so yes, “I” would say it qualifies as an AI, and as I already stated, one that’s not very effective at answering questions it hasn’t been “‘groomed” to answer. Solutions that have been called AI or AI-like have been around for decades. While TI’s linear predictive coding is an algorithm, modern speech algorithms likely rely on AI or AI concepts.

          1. Do you think that any personal assistent, or rather assistant, is purely algorithmic, like the spell checkers that would have helped you here? As I said earlier in this post, AI has been around in many forms and competencies for decades. But their instantiation is not always clearly delineated by their creators for “competitive” reasons.

  8. I was thinking about this last night. Putting LLMs aside, how long will it be before someone starts to train physical bots with Large Movement Models – training the AI by having it watch videos of people/animals moving.

    I guess the problem would be how to get it to apply the kinematics of movement to an arbitrary bit of hardware with very few feedback channels.

    Feels doable?

  9. Had a series of conversations (in both drunk and sober conditions) with two PhD physicists, two engineers, a physician, a biologist, an electrician, a mason, and my dog (sober for all conversations) where the conclusion for each session was that we do not have a freaking clue. My cat refused to comment on the subject.

    Sentience. What is it to be self aware? Simple branching and recursive thinking? Massively parallel processing? Those are just implementations of a general algorithm. Computationally, humans are no more sentient than any other species or electro-mechanical apparatus.

    Love. Affection for another being is indeterminate as an indicator of self-awareness. It may indicate an ability to perform abstract thoughts based on the emotions inflicted per certain brain chemistries, but I never read data supporting love to indicate a higher intelligence. All animals have emotions; even myself at times.

    Quantum Mechanics. Been hearing this unsubstantiated crap for over 10 years. Some scientists have conjectured that nuclear proton spins in the brain are entangled, which supposedly indicates brain functions must be quantum. Animals think about stuff for hours at a time (mostly sex and food). Every electro-chemical process used to claim quantum brains is based on very unstable stuff. So how do we think about sex and food and motorcycles for hours and form memories if we are using meta-stable water? And to measure this crap? Our ability to detect entanglement is extremely primitive. So how do we model something we cannot measure?

    My conclusion is that most humans are not sentient. Singularities will go unnoticed. I think, therefore I will have another beer and another taco…

    1. Being “quantum” doesn’t necessarily involve all brain functions being “quantum woo” in nature. For example, neural networks can use quantum annealing (tunneling) at some small scale and behave in a deterministic fashion at a larger scale. There are also some behavioral studies about whether human choice follows classical probability, and it was found that it resembles non-classical probability.

      We already use the stuff in everyday gadgets like flash memories, so it’s clearly not impossible to happen. We just don’t know whether it happens and to what extent.

    2. 50% to the cat: all we have is an approaching “expert system” (“be along any moment”, 1980; “be along any moment”, 2000) at best, so why bother to discuss whether it is an AI when I’m far from convinced (search results would be much better if guided by a true ES)
      and 50% to Brian, I’m getting a beer…

  10. In all this talk about sentience we seem to be forgetting that (sci-fi notwithstanding) sentience and sapience are not the same thing. I’d say that it has always been clear that most living things are sentient, but few are sapient.

        1. So then is a person in a coma no longer sentient? Stephen Hawking was burdened with severe ALS, but managed to show that he was one of the few that ever existed in this world to achieve so much more than most others could ever hope to achieve.

          Even bugs like gnats sense danger and “protect” themselves, which seems to be “self aware”. Sure, their DNA is imprinted with random pattern avoidance, but you don’t need avoidance if you’re not trying to survive. Is that simply a random trait?

          But then I don’t fully subscribe to René Descartes’ “I think, therefore I am”, since it all depends on what “think” is. I “think” we can all agree, it’s not that simple.

          1. That’s one of the burning questions in patient care. There are ways to interact with a person in coma using brain scanners to see the response – the person can be “locked in” and still minimally conscious, while some may be completely vegetative with nothing going on in the brain.

  11. It will be an impossible question to answer. We can only hope to ascribe an educated but arbitrary cutoff and accept that it will miss fringe cases, and we have to be OK with that. As an example: when do humans become sentient or self aware or any other soft criteria? As a baby, at some point. We assign an arbitrary age for legal reasons, typically 18 for some stuff and maybe in the US 21 for other stuff, but we all can probably agree there are some 18 year olds that definitely aren’t as responsible as others. It’s a hedge and kinda works. Maybe we will accept that artificial intelligence is “good enough” for adjusting my toaster settings but will never be given legal authority to, say, pilot a passenger aircraft unsupervised.
    TL;DR: we can’t agree on what it means to be an intelligent human, therefore we can never do it for non-humans.

  12. AFAIK there has been AI-based trading going on for quite some time now.
    And not experimentally, but as a working thing.
    Of course, you can do that with systems that you would not even qualify as AI, but as a simple large set of rules. But even then it can do well in trading. And as you probably know, often it’s all about speed more than anything else.

  13. What a wonderful misinterpretation of the Turing test.

    What he said was that the moment you couldn’t tell, it would not matter whether it was real sentience; it would be good enough, as it fooled people.

    Don’t forget people were supposed to be taking the test knowing that it was either a person or an AI they were talking to, and they were supposed to be able to pick the right one by having an unlimited number of questions they could ask.

    To date no AI can answer truthfully about its emotions.

    It is easy to tell if it is an AI or a person by simply asking the right questions. Just like the Blade Runner tests to see if they were people or replicants.

    I thought this site was for smart people?

    People who are so smart they are on the cutting edge of knowledge philosophy and science. People who can think for themselves without being told what they have to think.

    Am I wrong?

  14. Honestly, does philosophy provide an answer for us “humans” as to what intelligence is about? Is the “willing absence of sentience” sentient? When I watch the news, then even having morals seems not sufficient for defining intelligence. HAL’s “Dave, I can’t let you do that” vs. the smart bomb from Dark Star’s “I have to think about it.” ;-) Why do we still try so hard?

  15. I appreciate an article like this that encourages philosophical thought. I think it may have made some questionable jumps, though, which beg further questions. For instance, if an AI passes the Turing test, does that necessarily mean that the test is broken or dead? And also there is a jump from “intelligence” to “sentience” or even “consciousness” – it could be argued that many things are intelligent (it’s in the name of AI) without being sentient or conscious. So what is the Turing test for? If it only tests for the existence of intelligence, and an artificial intelligence passes the test, then the test seems to work. Do we want a test for consciousness or sentience? Probably. Or maybe a test for human? Depends on what you need the test for. Is the Turing test it? Doesn’t seem to be; that tests for something else.
    While the argument could certainly be made that the Turing test is broken (is human-like conversation a good test of the existence of intelligence?), this is not it, nor does it seem to be the aim of this article – I think the aim is to get us to start thinking of new ways to test for human-ness, sentience, consciousness, and to recognize that the Turing test was a good thought experiment but is definitely now insufficient for that moving forward.
    I hope some good ideas come from this.

  16. At 79 years of age I am trying to learn Spanish. I might be considered sentient in English but in my limited Spanish…I am sure I would fail Turing.
    I also recently had a cerebral stroke; it was very disconcerting for me to realize that “I” didn’t realize the damage my prefrontal cortex had sustained as I slid off my chair to the floor, all seemed ‘normal’ at the time. My speech was garbled to my wife but it sounded logical and coherent to me, whoever I was at the time.

  17. The Turing test was always a bad idea. Why would a computer chat fooling a person be a good judge of sentience?
    Of course, Mustafa Suleyman’s suggestion is just absurd. On the other hand, it was a great thing to say if you wanted to get some press and the particular attention of VCs. It is possible that was his real motivation.

    1. Whether the Turing test is a good idea really depends upon the person doing the interrogating and judging. If that person is as smart as your average politician, it’s certainly a bad idea. But if that person is a really smart dude, it can be a good idea. The real problem with the Turing test is that it’s not well defined. What kind of questions would the interrogator have to ask? What would he look for in the answers to determine if the responder is a human or machine? These are left open, necessarily, since otherwise a machine could optimize for the expected questions. However, that doesn’t prevent one from laying out a style of interrogation that might lead to good determination.

    2. How else do we know YOU are sentient? Our options are interaction, observation, and projection.
      People expecting a rigid definition, or declaring that shaky, shallow passes mean it’s over, are completely missing the point. It’s a philosophical position. Getting fooled by answering machine messages that go “hello? … ha ha, gotcha” obviously doesn’t matter. The question is whether you could ever tell the difference – and whether you could ever appear different, to us.

  18. So hung up on the human condition, comparing homo sapiens to machines and other organic life is so 20th century. Just be.

    Still waiting for my $100k to make 1 Million.

  19. If you cannot tell the difference, does it matter?
    No. But the assumption takes for granted that you can never tell the difference.

    We don’t put much stock in an individual’s perception for mundane things: if you have 5 witnesses you get 5 different accounts. And, when an individual makes incredible claims, we look for clues that they are mistaken or that their perception is erroneous. And, while consensus has the feel of a much better test, look at how many humans cannot be convinced of the accelerating instability of the global climate.

    The fact is, humans suck. We are easily fooled, easily distracted, and are frequently inspired to act against our own self-interest (by others & even ourselves).

    Instead of measuring intelligence by comparing individuals to the highest accomplishments of our gifted, let’s also consider how individuals fall short of (or even lower the bar on) all the things we suck at.

    Any entity who can demonstrably prove they suck less than humans (i.e. they had a choice and made a good one a statistically significant number of times) should be considered a sentient agent at parity with humans.

    Let’s do a test like this for all the ways that we suck. Hopefully by the time we can tell AI is different, it WILL matter.

  20. Run screaming from this paperclip maximizer.

    What maniac thought capitalism wasn’t amoral, mechanized, and greedy enough? Which walking colony of brain worms saw the transformative potential of AI and could only imagine “line go up?”

  21. How do we know that anyone else is sentient/intelligent/a person, or whatever other term you want to use for it? I have sort of an indescribable sense of my own ‘self’. I assume everyone else does too, but how can I know that? How can I know that you are not just the output of a bunch of neurons processing input in such a way as to give the appearance of a sentient being without actually being one?

    We assume it because we see it in ourselves, we see the similarity between ourselves and others. Or maybe we each just want everyone else to give our own personhood the benefit of the doubt and it’s easiest to expect what we also give.

    I’m not saying ChatGPT or any of these others might actually be sentient. Nor am I saying that any humans are not. I’m only saying that as more complex forms of AI come to be I’m not sure we will even be capable of fully defining the question let alone answering it.

  22. Thinking the Turing Test is limited to chatbots is an extraordinary oversight, and gross oversimplification of what Turing was going for.

    The essence of the Turing Test is *BEHAVIOUR*:
    -if you are unable to distinguish the *BEHAVIOUR* of the AI from the *BEHAVIOUR* of a human
    -then we MUST say the AI is intelligent because we know the human is

    The only reason the “canonical” Turing Test takes the form of a text chat is because that was the only possible medium in Turing’s time where the nature of the machine would not be immediately obvious to the judge.

    Limiting the Turing Test to chatbots, or claiming it’s defunct because chatbots have had limited successes, when we have many alternative mediums to use for the test, is pretty stupid.
