“Once, men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” — so said [Frank Herbert] in his magnum opus, Dune, or rather in the OC Bible that made up part of the book’s rich worldbuilding. A recent study demonstrating “cognitive surrender” in large language model (LLM) users, as reported in Ars Technica, is going to add more fuel to that Butlerian fire.
Cognitive surrender is, in short, exactly what [Herbert] was warning of: giving over your thinking to machines. In the study, people were asked a series of questions and — except for the necessary “brain-only” control group — given access to a rigged LLM to help them answer. It was rigged in that it gave wrong answers 50% of the time, which, while a higher error rate than most LLMs, is a difference in degree, not in kind. Hallucination is unavoidable; here it was just made controllably frequent for the sake of the study.
The hallucinations in the study were errors that the participants should have been able to see through, if they’d thought about the answers. Eighty percent of the time, they did not. That is to say: presented with an obviously wrong answer from the machine, only in 20% of cases did the participants bother to question it. The remainder were experiencing what the researchers dubbed “cognitive surrender”: they turned their thinking over to the machines. There’s a lot more meat to this than we can summarize here, of course, but the whole paper is available free for your perusal.
Giving over thinking to machines is nothing new, of course; it’s probably been a couple decades since the first person drove into a lake on faulty GPS directions, for example. One might even argue that since LLMs are correct much more than 50% of the time, it is statistically wise to listen to them. In that case, however, one might be encouraged to read Dune.
Thanks to [Monika] for the tip!

When I was in school we were not allowed to use calculators, because we needed to understand how to do the work to get to the correct answer. When you don’t know how to do the work yourself, you are enslaved to the machine. Also, if you don’t know how to get to the correct answer on your own, you can’t fact-check any answer the machine is giving you!
As an undiagnosed “neurodiverse” child in the early 90s, this was such a stupid decision, and it only caused me plenty of unnecessary suffering (both physical and mental) at the hands of my clueless parents, who would scream at me and then beat me until I could only cry and shiver.
Why? It was all because I was not interested in spending a beautiful, warm, sunny Saturday afternoon mindlessly grinding through 5 or 10 pages of similar (and totally boring) “2094 + 93505 = ?” type mandatory homework exercises from my math textbook, when a $5 electronic calculator could do it all in minutes and there was a world outside to explore (or just friends to play with).
This of course led to poor math grades in primary school.
My mind would pick up those concepts easily, but only a few years later, once I had grown up a bit. When my parents beat me for not doing homework (math and otherwise), they screamed visions of how I would end up sweeping the gutters if I didn’t spend all my free time studying, which at the time mostly meant rote learning.
Yet I still didn’t give a damn and only did the bare minimum to pass through this stupid system.
Despite their secret wish for a working-class pleb who would just do as he was told and never question anything, not only did I not end up sweeping the streets, I was actually the first person in my family line to finish university (with a master’s degree) and even make a career in control engineering.
And for my family (or rather ex-family, as I have cut all ties to them), the prospect of going to university and then finding a really well-paying job on pure merit was something unimaginable.
For generations, their only concepts of earning money were minimum-wage labor in local companies, obtaining a promotion through nepotism, or going to Germany, France, or the Netherlands to pick potatoes and other vegetables.
And to be honest, as someone who ended up going the minimum-wage route after getting top grades in school, it’s still way better than school ever was. Public schooling is just prison-lite, and it’s an atrocity that we force children to waste their childhood there. You spend twelve years (minimum!) getting tormented by the dregs of society that you’re forced to “socialize” with (while unintentionally copying their bad mannerisms), and end up learning far less than you would with an equivalent amount of time spent in front of Wikipedia. Total waste of time.
The education system is made by neurotypicals, for neurotypicals. They may be the majority of the population, but the system doesn’t work for all of us.
Don’t worry, I’m an accomplished electrical engineer now. Glad to hear you made it, despite your parents.
Not in my school! Most of the teachers were nutters (too).
Really can’t agree. Yes, some degree of this homework-type crap was entirely pointless, but as HOME is in the name, it’s not like they can actually stop you from using the tool if you want to, and for something as simple and tedious as adding two numbers together there is no working-out to show either!
But by banning the calculator, which even in the ’90s would do it all for you (spitting out neat, cancelled-down fractions, and often able to solve quadratics, simple ones anyway), you make sure the person actually understands the rather more complex algebra. That does have working to show, it is valuable to really understanding how mathematics works, and it gives you the toolkit to approach the problems that aren’t trivial for the calculator to solve.
It does sound like you had a pretty poor school experience, and rote learning isn’t something I get on with either, but whether the calculator is allowed has nothing to do with rote learning. Demonstrating and practising the thinking and logic required to solve a problem, while still potentially repetitive, is far more engaging and generally works better without the calculator and other similar assistive tools that might just let you skip steps and never develop that understanding.
Calculators are really a poor analogy here because they always work. There is just no point in questioning or checking your calculator’s answer with pencil and paper; if you get a different answer it is 100% of the time going to be because you made a human error. There’s no subtle nuance in the digit sequence it instantly spits out that a human could have somehow expressed better. Nor any pleasure to be taken in manually applying the same rote algorithm to one problem after another when the only thing you can do differently is maybe slip up and get the wrong answer. Sure, it’s nice to know how to do manual calculations for the times when there’s no calculator available. But if you did have a calculator on hand there would be absolutely no reason not to use it instead.
Idiots (which it should now be obvious make up at least 80% of the human population) see ‘AI’ as an Everything Calculator that instantly spits out the correct C# function or the correct limerick or the correct cover letter or the correct animated scene of an anthropomorphic cat accidentally baking her daughter in the oven, and if it looks wrong you’re probably just not prompting it right. They believe that if there is an LLM available then, like the calculator, there’s no reason not to trust it over your brain. And in their own particular cases, they may be right. Because their brains are not very good.
You’ve never graded a math class, I take it.
If you’re looking for, say, the mass of a bowling ball in a word problem and your answer is 6 million, you don’t need to double check by hand. You just need to think about the problem.
Not thinking about the problem got a lot worse when we started letting kids use calculators.
Same, and I agree, although you have to admit the admonishment that “you won’t always have a calculator in your pocket” turned out pretty funny in hindsight.
I normally do not nitpick articles but it is worth doing so here. LLMs are not correct more than 50% of the time (in all topic areas). A lot of LLMs are lucky to maintain 50% accuracy on subjects they are heavily trained on. Especially over time.
I have stumbled on areas where 5% correctness is being generous. Of course there are areas, settings, and model versions where this may be significantly higher, but that is part of the issue. Similarly, there are cases where LLMs have a 0% chance of giving a factual answer. It is a slot/slop machine, because you don’t know unless you already knew. There are some serious issues in how these tools are evaluated, and the evaluations are often gamed.
People say this is the same as talking with people. It is not. Evolution has tuned us over countless generations to spot deception, blowhards, and whatever other personality traits may exist. People have reputations in subject-matter areas; LLMs just claim to know everything, so type your inquiry right in.
Experts can forgive and fix small errors and go “oh wow, it only missed this minor thing!”, while someone who is not an expert could need a year’s worth of study to really understand the issue, or why it truly mattered, even if it was pointed out to them.
When I first started testing LLMs for their risk profile people would say things like, “did you try this random seed? What about this prompt?”. Now it’s “you didn’t use this model! Duh you dummy. This one is AGI are you a decel? Lawl”. The lack of rationality around how these models work at all is shocking.
So I do not give up my thinking to a machine. I will give up some boilerplate work, or boring work in a subject matter I know well. Old reference books have become so valuable to me over the past two years.
The Butlerian Jihad is coming.
We will need spice to get through it.
You’re willing to sacrifice yourself if it doesn’t work out?
Quote from Stephen Hawking, one of the most intelligent humans in the last 100 years:
“… The development of full artificial intelligence could spell the end of the human race ..”
( https://www.bbc.com/news/technology-30290540)
Ironically, it seems that the threat is developing through humans getting dumber rather than LLMs getting smarter (!)
We seem to be heading towards the film “Idiocracy”…
^ this !! ^
Ever see a typical interview of a spring breaker? (Ignoring the issue of “student loan forgiveness”.)
Typical exchanges:
Q. Who fought in the American Civil War?
A. Uhhh, Japan and Mexico??
Q. Who is the Vice President?
A. Kamala?
Q. How many states are there in the United States of America?
A. Uhhhh… ????
We are doomed!!
Heading? We have been in it for 2 years by now.
That kind of thinking was present in the League of Worlds long before the OC Bible was established, which was several generations into Imperial rule, and even prior to the Butlerian Jihad. We’re talking a few thousand years, pretty much since the beginning of the cymek Titans’ conquest (about 3000 BBJ, in turn roughly 4000 AD/CE). It was only after this period, after the Jihad, with the extremist machine-pogrom zealots, that it was codified by a weak emperor to save his own rule. And they were extreme: even kitchen appliances didn’t escape.
Back to contemporary things, for a few decades now, the Internet has been like an auxiliary module for our brains. We recall facts, perform calculations, communicate – all through the Internet. All this new stuff just amplifies that.
In Gen X lingo: Well, Duh.
Everyone: maybe don’t rely on the lying machine that flatters you
HAD editors circa a year ago: but I love it!
HAD editors now: omg it’s awful you guys, nobody could have foreseen this
Pathetic.
I’m all for criticizing HaD, but my experience of this website’s articles on AI has been that they have not consistently weighed in on either “side” of this issue. They portray the ambiguity or ambivalence within themselves, and then from one day to the next they explore different pluses and minuses.
Yes, it’s definitely good to be ambivalent about the lying-machine.
Are you kidding me? They literally wouldn’t shut up about vibe coding, reference things like HueForge, and got caught using AI-generated images in their articles.
Knock it off.
Maybe there was a gap in my subscription, but I’ve felt HaD editors have been fairly consistently rational, skeptical, and even slightly piss-takey about LLMs.
There’s no official HaD editorial stance on this–or much of anything, really–so it’s just however any given author is feeling when they pick up a keyboard. We’re an eclectic bunch so it’s no shocker you’ll see differing opinions.
Humans have been trained for centuries to surrender thinking to authorities, often false ones. The internet’s free flow of information could be a great weapon of democracy but we’d need to apply critical reasoning, choose whom to trust and be able to recognise at least obvious manipulation and lies. And, I suppose, an important lesson we need to teach kids is not to believe without proof. The AI could be the thing that will make this more obvious because from the get go they have been exposed as something not to be trusted. There will be a lot of money poured into reversing this image (and the people behind AI have tons of it) but we need to fight back… And there’s more of us.
I would hope not! Democracy is just surrendering to the authority of the masses.
Isn’t that what AI is too?
Still better, though, than surrendering to journalists, who set themselves up as authorities on many subjects, including morality, a role they are completely and utterly unsuited to fill.
And yet we let them get away with it all too often, and pretend they have answers and authority.
The free flow of information is a poisoned well at this point. Picking the molecules of truth out of the flow of lies, bullshit, and advertising is not an easy thing to do.
Truth has always been a competition. You’re just seeing it in live action now. Fundamentally, truth matters because it works; if people believe wrong things, they will eventually suffer the consequences. If they don’t, then the truth wasn’t that important to begin with in that context.
Tyler, I can’t believe you made this post! Seriously, I just asked Claude if we were and it said we aren’t. Think before you post! ;)
Dang, got me there. Wish I’d thought to ask an LLM to think of that!
Call me a Luddite, because I’m smashing the machines and choosing freedom and independent thought over digital enslavement. I will never depend on a BS generator to tell me the “truth” and will continue to think using my own brain, flawed though it may be. This bubble needs to hurry up and burst already.
The overconfident salesman that sells you its hallucinations is not the full potential. You can write useful code with LLMs.
And the Luddites did not fight automation; they fought for better labour conditions.
How did you write your comment with smashed machine?
I respect this orientation but it’s a mistake to think that your thinking is any more free than anyone else’s. There’s an immense value to individuals and to the collective to have people working within different cognitive cages! But your “independent thought” is just a different flavor of enslavement. In my opinion. Anyways, my independent thought certainly is!
I had a system in place that filled a tank with water and supposedly stopped when full. The power company somehow figured out that if they interrupt the power very briefly they can reset just SOME of my devices: the Arduino on the wall wart survives the power blip, but the water level sensor gets reset to zero. So I have a healthy level of distrust; it’s more likely that some rando on Facebook Marketplace does what he says he’ll do.
I think a more appropriate name would be “Murphy” rather than “Claude”, because if something can go wrong….
You think the power company is interrupting power to you, and only you, to mess up your water tank?
This really sounds like a problem with your water level sensor, not with your power company… maybe put a capacitor on your DC power supply to ride out brief blackouts or get a better sensor.
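Another belt-and-braces option, if the controller firmware is your own, is to persist the last good reading so a brownout reset resumes from it instead of from zero. A minimal sketch of the idea in plain C++: the `eeprom_level` variable is just a stand-in for real non-volatile storage (on an Arduino you would use `EEPROM.put`/`EEPROM.get` from `<EEPROM.h>`), and the change threshold is a made-up value chosen to limit write wear.

```cpp
#include <cstdint>

// Stand-in for non-volatile storage; on real hardware this would be
// an EEPROM cell that survives a power blip while RAM does not.
static uint16_t eeprom_level = 0;

// Persist a reading only when it moves by more than `threshold`
// units, so the EEPROM's limited write cycles aren't burned up by
// sensor noise.
void save_if_changed(uint16_t level, uint16_t threshold) {
    uint16_t diff = level > eeprom_level ? level - eeprom_level
                                         : eeprom_level - level;
    if (diff > threshold) {
        eeprom_level = level;
    }
}

// After a reset, seed the working level from storage instead of
// assuming the tank is empty.
uint16_t restore_level() { return eeprom_level; }
```

That way a power blip costs you at most one threshold’s worth of accuracy rather than a whole tank’s worth of state.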
Or look inside your toilet tank.
I’m not sure Dune is the best example for this. I mean, sure, on the surface it looks that way but…
By the end of the series doesn’t it turn out that the Butlerian Jihad and several millennia of human history that came after it were all engineered by the machines in the first place? Wasn’t it all to get the human race developing itself again?
The whole thing is basically a success story of turning over thinking to machines, albeit one where the humans ironically think they are doing the opposite the whole time.
No, it was all planned by Paul, not the machines. Paul saw the future and knew that the only way to PREVENT the machines from retaking humanity was, first, to create a leader so alien and unknowable that humanity would be crushed into a state where the very genetic memory carried by future generations would force them to reject any kind of leadership, and second, to enforce a ban on space travel for something like thousands of years. This was to ensure that once humanity was again free to travel the universe, it would start a diaspora so large that even if 99% of humanity were wiped out, people would be spread so far among the stars that pockets would be guaranteed to survive, even if only by virtue of being beyond the light cone of any other living beings, thanks to the expansion of the universe.
Ugh, posthumous sequels. For all of Frank Herbert’s flaws (and Kevin J Anderson’s strengths), I think it’s unfair to characterize Dune’s message based on Anderson’s work.
Sorry, I didn’t read the fanfic you’re referencing.
(If it wasn’t Frank, it’s fanfic to me.)
“Aristotle taught that the brain exists merely to cool the blood and is not involved in the process of thinking. This is true only of certain persons.”
—Will Cuppy
“Humans share ninety-eight percent of their DNA with the common banana, an obvious characteristic of some of my colleagues more than others.” [paraphrase]
—Martin Rees, Astronomer Royal, Great Britain
I’ll ask Grok.
Yeah, that’s what I was going to say too. People are pretty bad at evaluating thinking among themselves and their peers. Any time someone tries to ask a question about this sort of thing, they start with an untenable assumption. Someone who asks “where did civilization come from” always starts out with the untenable assumption “I am civilized.” Someone who asks “where did thinking come from” always starts with the untenable assumption “I am thoughtful.” And so on. People aren’t really prepared to confront our actual existence, which is closer to a pattern-matching generational robot operating in a social context than to anything resembling “thinking” or even “self”.
It’s more of a Gen Z problem, a lack of critical thinking.
I’ve seen Zoomers who are “Ted-pilled” and ready to launch the Butlerian Jihad and Boomers who will ask ChatGPT how to wipe their bottoms; I don’t think reducing this to generational terms makes much sense.
The “rise of cognitive surrender” starts in school when a student accepts, without question, what is being taught in history, social studies or even chemistry textbooks, and by the teacher.
Many decades ago, my high school chemistry textbook declared inert gases do not form compounds. A few years later I visited a laboratory and saw a bottle containing crystals of a xenon compound.
In the late 19th century, some mathematics professors had cognitive problems with Georg Cantor’s novel claim that there are different sizes of infinite sets.
Once you have been taught about paradigms and axioms this type of issue goes away.
Yes I was taught about Newtonian mechanics at the age of 12. And I still use it, it’s a reasonable approximation, even though I know there are more complete methods.
That Xenon could form compounds was discovered in 1962. Your high-school textbook from many decades ago was correct. Later textbooks will show the new information and are also correct. I suppose science textbooks are much like science itself. Who would have thought?
Doomsaying and hand-wringing. Humanity will throw away its freedoms and intellect ASAP, whether it’s blindly following a GPS line into a lake or an LLM’s piss-poor advice (what models have y’all tried? I see a lot of ‘I tried but it’s 90% bull**** every time’, but no model names being thrown out or prompts used).
Take a step back and realize that LLMs are just another tool in the AI toolbox, like MLPs and LSTMs; not every one is ‘unethically trained’, and they are no more inherently evil than any other thing that made the sheep line up.
The real hackers are going to keep their wits about them while everyone else lines up for the fire.
A few months ago I asked ChatGPT to find whether there was quote on a specific topic in any of the writings of a specific historical person. ChatGPT gave me a quote (in quotes) and the source (a book). A search of the book found no such quote. When confronted, ChatGPT responded that it was “sorry” and admitted that the quote was made up but represented the general views of the specific person. AI can lie.
We argue that the goal of AI is to ‘lie’ convincingly. Any usefulness or accuracy is merely a side benefit.
could argue ^
The assumption is that people think in the first place. Going by what I see that’s a false assumption.
Why did one post about the Butlerian Jihad get deleted?
I think gee-had is a no no word here.
I think it’s more of a wood problem, as in someone has.