Why AI Usage May Degrade Human Cognition And Blunt Critical Thinking Skills

Any statement about the potential benefits and hazards of AI tends to be divisive and controversial, as the world tries to figure out what the technology means for it, and how to make the most money off it in the process. Whether the acronym stands for Artificial Inference or Artificial Intelligence depends on who you ask, but so far AI has mostly been used as a way to 'assist' people. Whether in the form of a chat client that answers casual questions, or a tool that generates articles, images and code, its proponents claim that it will make workers more efficient and remove tedium.

In a recent paper published by researchers at Microsoft and Carnegie Mellon University (CMU), the findings from a survey suggest, however, that the effect is mostly negative. The general conclusion is that by forcing people to rely on external tools for basic tasks, they become less capable of, and less prepared for, doing such things themselves should the need arise. A related example is provided by Emanuel Maiberg in his commentary on this study, when he notes how simple skills like memorizing phone numbers and routes within a city are deemed irrelevant – but what if you end up without a working smartphone?

Does so-called generative AI (GAI) turn workers into monkeys who mindlessly regurgitate whatever falls out of the Magic Machine, or is there true potential for removing tedium and increasing productivity?

The Survey

In this survey, 319 knowledge workers were asked about how they use GAI in their job and how they perceive GAI usage. They were asked how they evaluate the output from tools like ChatGPT and DALL-E, as well as how confident they were about completing these same tasks without GAI. Specifically there were two research questions:

  1. When and how do knowledge workers know that they are performing critical thinking when using GAI?
  2. When and why do they perceive increased/decreased need for critical thinking due to GAI?

Obviously, the main thing to define here is the term 'critical thinking'. In the survey's context of creating products like code, marketing material and similar output that has to be assessed for correctness and applicability (i.e. meeting the requirements), critical thinking mostly means reading the GAI-produced text, analyzing a generated image and testing generated code for correctness prior to signing off on it.

Participants' answers to the first research question suggest that critical thought was inversely correlated with how trivial the task was thought to be, and directly correlated with the potential negative repercussions of flaws. Another potential issue appeared here: some participants indicated accepting GAI responses on topics outside their own domain knowledge, while often lacking the means or motivation to verify those claims.

The second question drew a more diverse response, mostly depending on the kind of usage scenario. Although many participants indicated a reduced need for critical thinking, it was generally noted that GAI responses cannot be trusted and have to be verified, edited and often adjusted with further queries to the GAI system.

Distribution of perceived effort when using a GAI tool. (Credit: Hao-Ping Lee et al., 2025)

Of note is that this is about the participants' perception, not about any objective measure of efficiency or accuracy. An important factor the study authors identify is that of self-confidence, with less self-confidence resulting in the person relying more on the GAI to be correct. Considering that text generated by a GAI is well known to do the LLM equivalent of begging the question, alongside a healthy dose of bull excrement disguised as forceful confidence and bluster, this is not a good combination.

It is this reduced self-confidence and the corresponding increase in trust in the AI that also reduces critical thinking. Effectively, the less the workers know about the topic, and/or the less they care about verifying the GAI tool's output, the worse the outcome is likely to be. On top of this, the use of GAI tools tends to shift the worker's activity from information gathering to information verification, and from problem-solving to AI-output integration. In the end, the knowledge worker thus becomes more of a GAI quality assurance worker.

Essentially Automation

Baltic Aviation Academy Airbus B737 Full Flight Simulator (FFS) in Vilnius (Credit: Baltic Aviation Academy)

The thing about GAI and its potential impacts on the human workforce is that these concerns are not nearly as new as some may think they are. In the field of commercial aviation, for example, there has been a strong push for many decades now to increase the level of automation. Over this timespan we have seen airplanes change from purely manual flying to today's glass cockpits, with autopilots, integrated checklists and the ability to land autonomously if given an ILS beacon to lock onto.

While this managed to shrink the required crew to fly an airplane by dropping positions such as the flight engineer, it changed the task load of the pilots from actively flying the airplane to monitoring the autopilot for most of the flight. The disastrous outcome of this arrangement became clear in June of 2009 when Air France Flight 447 (AF447) suffered blocked pitot tubes due to ice formation while over the Atlantic Ocean. When the autopilot subsequently disconnected, the airplane was in a stable configuration, yet within a few minutes the pilot flying had managed to put the airplane into a fatal stall.

Ultimately the AF447 accident report concluded that the crew had not been properly trained to deal with a situation like this, leading to them not identifying the root cause (i.e. blocked pitot tubes) and making inappropriate control inputs. Along with the poor training, issues such as the misleading stopping and restarting of the stall alarm and unclear indication of inconsistent airspeed readings (due to the pitot tubes) helped to turn an opportunity for clear, critical thinking into complete chaos and bewilderment.

The bitter lesson from AF447 was that as good as automation can be, as long as you have a human in the loop, you should always train that human to be ready to replace said automation when it (inevitably) fails. While not all situations are as critical as flying a commercial airliner, the same warnings about preparedness and complacency apply in any situation where automation of any type is added.

Not Intelligence

A nice way to summarize GAI tools is perhaps that they are complex tools which can be very useful, but at the same time are dumber than a brick. Since they are based around probability models that essentially extrapolate from the input query, there is no reasoning or understanding involved. The intelligence is the one ingredient that still has to be provided by the human who sits in front of the computer. Whether it's analyzing a generated image to see that it does in fact show the requested things, critiquing a generated text for style and accuracy, or scrutinizing generated code for correctness and an absence of bugs, these are purely human tasks without substitution.

We have seen in the past few years how relying on GAI tends to get people into trouble, ranging from lawyers who don't bother to validate (fake) cited cases in a generated legal text, to programmers who end up with 41% more bugs courtesy of generated code. Of course, in the latter case we saw enough criticisms of e.g. Microsoft's GitHub Copilot back when it first launched to be anything but surprised.

In this context, this recent survey isn't too surprising. GAI tools are just that: tools, and like any tool you have to understand them properly to use them safely. Since we know at this point that accuracy isn't their strong suit, that chat bots like ChatGPT in particular have been tuned to be pleasant and eager to please at the cost of their (already low) accuracy, and that GAI-generated images tend to be full of (hilarious) glitches, the one conclusion you should not draw here is that it is fine to rely on GAI.

Before ChatGPT and its kin, we programmers would use forums and sites like StackOverflow to copy code snippets from. This was a habit which would introduce many fledgling programmers to the old adage of 'trust, but verify'. If you cannot blindly trust a bit of complicated-looking code pilfered from StackOverflow or GitHub, why would you roll with whatever ChatGPT or GitHub Copilot churns out?

94 thoughts on "Why AI Usage May Degrade Human Cognition And Blunt Critical Thinking Skills"

  1. The general conclusion is that by forcing people to rely on external tools for basic tasks, they become less capable and prepared of doing such things themselves, should the need arise.

    No shit! A Nobel prize for them ASAP.

      1. Something we've (re-)learned from self-driving cars is that humans absolutely suck at remaining attentive when supervising a process without being actively engaged. When a self-driving car asks you to stay ready to take control, few-to-zero people are actually capable of maintaining attentiveness over long stretches of time to take control effectively in an emergency, or to spot one that's about to happen.

        Similar things occur when some AI model is doing all the work of generating output, and some human is asked to just read through it and check if it makes sense. 100x the problem with code instead of writing, where there’s more to it than surface-level aesthetic appearance.

    1. Given the growing polarization and “equivalency of argument” spawned by social media in public discourse, coupled with a burgeoning absence of critical thought, AI, with its current relatively immature training databases and algorithms of questionable parentage and transparency, will merely serve to amplify whatever shouts loudest. A novel idea generated by AI is more likely an association we’ve missed in the data from the past, rather than an original thought. We’ve already seen how AI can generate a torrent of blatant falsehoods to completely overwhelm reality. Current popularity of AI in business is primarily driven by “the bottom line” with a goal of eliminating costs associated with expensive and messy human beings. Unchecked and without honest and ethical policies and people in the loop to limit what authority it is given over the things that affect our lives, AI will ultimately increase cognitive laziness and more easily enable us to be told what and how to think by those who may have agendas that will benefit from manipulating others.

      “If you make people think they’re thinking, they’ll love you; but if you really make them think, they’ll hate you.” — Don Marquis

      1. …Funding for “education” which has consistently produced the highest illiteracy and innumeracy rates in the first world, and gets even worse as it demands more and more money. People aren’t falling for it anymore when you simply describe the department’s stated purpose and expect them to take it at face value.

        All legislation has been described as “The Prevent Puppies from Getting Squished by a Steamroller Act” for decades, and then if you look underneath the rock it’s a law for launching puppies out of cannons into a big brick wall. People aren’t buying it anymore.

        We’ve successfully replicated the cynicism and instinctive disbelief which followed Pravda. And the people who did it will never accept responsibility, they’ll just keep doubling down and blaming those who no longer believe them. It’s shocking how similar it is…

        1. Since you broached the subject, I’m going to say it: please don’t fall for the perpetual crisis narrative about U.S. education. The crisis narrative is used to drive the market for new curriculums and teacher training programs from politically connected companies.

          If you’re actually interested in learning more about how that works, Elena Aydarova explains it here:
          https://youtu.be/jDmy6R0WSvs?feature=shared
          and here:
          https://www.journals.uchicago.edu/doi/10.1086/730991

          For one, the U.S. is a large, diverse country that tries to educate all comers, so they are never going to be at the top of the PISA rankings. Many of the countries that outrank us are either small and homogenous, or they only try to educate a privileged few.

          Also, if you look at the PISA data from 2022, the U.S. scored a little higher than the average in science and reading, and a tad lower than the average in math. Not great, but no crisis.
          https://www.factcheck.org/2025/02/trump-wrong-about-u-s-rank-in-education-spending-and-outcomes/?utm_source=twitter&utm_medium=social&utm_campaign=social-pug

          And when you look at the NAEP data (aka “the Nation’s report card”), you should understand that “basic” represents grade level and a score of “proficient” is above grade level. So the headlines usually say something to the effect of “only 30-something percent of students are scoring ‘proficient’ or above on the NAEP,” when what they really should be saying is “the test shows that roughly 70-something percent of students are scoring grade level,” which isn’t great, but also isn’t a crisis.

          All this to say: support your local public school, please.

          1. Divide results by demographic. We are not far behind Czechia, and not far ahead of Eritrea.. If you know how to look.

            And never support your local public school. It’s a meat-grinder for children. School is a way to reduce the great and uphold the hideous. Annihilate your local teacher.

        3. It seems like Akismet is killing my comment, which includes evidence for these statements, so I’ll try something briefer.

          The “produced the highest illiteracy and innumeracy rates in the first world” statement is wrong. The 2022 PISA shows that the U.S. has above-average reading and science scores, and just slightly below average math scores. And the NAEP shows that roughly 70 percent of students score on grade level.

          The problem is that the crisis narrative serves to drive the marketing for curriculums and teacher training.

  2. The car, PLUS: In the past, people typically were born, lived, and died within a 5-mile radius, but now they can travel hundreds of miles in mere hours, crossing continental land masses in days.

    The car, MINUS: People now get in fist fights over the parking spaces near a store entrance, because they are too withered and lazy to walk across a parking lot.

    The calculator, PLUS: The pocket calculator lets you quickly complete complex math problems that, in the past, would have taken hours or even days and would have been prone to error.

    The calculator, MINUS: Laziness has made the pocket calculator so indispensable that most people can no longer do long division, and many no longer have any recollection of their basic times tables. Calculating a waiter's tip may as well be fusion research.

    Advanced AI, PLUS: …. blah blah blah [insert pie-in-the-sky promises here]

    Advanced AI, MINUS: … Helps to create a civilization of "advanced" humans who can no longer breathe, feed, or reproduce without consulting a phone app tied to the AI mothership.

    Bottom line: Use it or lose it. That applies to muscle, bone…and mind.

    1. Yeah, I'm in the can't-do-long-division-any-more camp. I'd need to look it up. I haven't needed to do it since primary school, so the knowledge has been parked in the don't-need-don't-care parking lot in my brain.

      I know! I’ll get a llm to write me a program to do long division for me!

      1. Long division is just fixed point arithmetic where you make the divisor or the dividend larger by a factor of 10,100,1000… to find the closest fit, leaving the remainder for the next step.

        In this example, the numbers in brackets are used for tallying how many zeroes you’ve added or removed:

        If you have, say, 12987 / 34, you see how many times you can fit 34(000) into 12987, which is 0. You remove one zero, then calculate 12987 / 34(00) = 3 with a remainder of 2787. Remove a zero again, divide 2787 / 34(0) = 8, leaving 67. Then 67 / 34 = 1, leaving 33. The number of zeroes added shows the position left of the 1's place, so the first result goes in the 1000's place, the second in the 100's place, the third in the 10's place and the last in the 1's place.

        Then you start increasing the dividend to get the decimals: 33(0) / 34 = 9 with a remainder of 2(4) so you go 2(40) / 34 = 7 etc.. Adding zeroes to the dividend moves your decimal place to the right in the same manner.

        So the result is 0 3 8 1 .9 7 …. etc.

        This is doable but difficult as described, so the long division algorithm deals with the adding and removing zeroes, and dealing with the remainder likewise, by physically shifting the calculation left and right on the paper for the same effect. It’s like the mechanical computer seen in an earlier HaD article.

        Once you understand it, it becomes possible to reconstruct the algorithm in case you’ve forgotten it. There are many more or less intuitive ways to describe it, and the problem with education is that most teachers don’t understand what they’re doing so they can’t tell you how it works – or they don’t trust that you can understand it. Some people can’t. So, they just teach you the algorithm, which is easy to forget and you have to really drill it in. Bad explanations combined with little to no repetition results in no learning.

        In summary, if you know what you’re doing, you can always re-invent the algorithm to do it. But if you don’t know what you’re doing, you have no hope.
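        The shifting procedure described above maps directly onto code. Here is a minimal Python sketch (a hypothetical helper written for illustration, not taken from the comment):

```python
def long_divide(dividend, divisor, decimals=4):
    """Long division by shifting, as outlined above: scale the divisor up
    by powers of ten for the integer digits, then scale the remainder up
    by ten at a time for the decimal digits."""
    # Integer part: find the largest power-of-ten shift of the divisor that fits.
    shift = 0
    while divisor * 10 ** (shift + 1) <= dividend:
        shift += 1
    remainder, digits = dividend, []
    for s in range(shift, -1, -1):
        step = divisor * 10 ** s   # e.g. 34(00), 34(0), 34
        q = remainder // step      # how many times it fits (a single digit)
        digits.append(str(q))
        remainder -= q * step
    # Fractional part: now shift the remainder up instead.
    frac = []
    for _ in range(decimals):
        remainder *= 10
        q = remainder // divisor
        frac.append(str(q))
        remainder -= q * divisor
    return "".join(digits) + "." + "".join(frac)

# Matches the worked example above: 12987 / 34
print(long_divide(12987, 34))  # 381.9705
```

        Leading zeroes from the first fitting test are simply skipped here, so the example yields "381.9705" rather than "0381.9705"; the digits and remainders at each step are the same as in the pen-and-paper walkthrough.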

        1. I still do long division, but what is stark, weird, and funny is just handing exact change to a checkout worker and seeing it literally blow their minds, as it's clear that, as adults, they have never seen or experienced any real-world application of the simplest math.
          I have gone back to cash for 99% of day-to-day stuff, and its physicality has a knock-on effect in that frivolous purchases are much less likely. Monkey-brain thinking: monkey gets its plastic card BACK along with the thing, but is reluctant to lose any paper and coin for junk. Totally autonomic.

          1. In fairness to the cashier it would blow my mind too – not because I can’t do the math but that you actually had the right change… Every time I use cash I’m short plenty of shrapnel, or have lots but just don’t have that 20pence or enough of the right smaller coins to make up for its absence and still fill out the 68 pence remainder…

    2. Well, nothing to disagree here.

      Some thoughts I would like to add after yesterday’s dinner with some friends, where we discussed this among other topics.

      If you look at Kant’s definition of enlightenment, he talks about “mankind’s exit from self-incurred immaturity” and the “courage to use your own reasoning”. If you’re lazy and comfortable being told what to think and do, you were lost in the 20th century as much as you are lost today. The ability and strength to develop your own ideas and question them is one of the key differentiators.

      Descartes' method of doubt not only comes with cogito ergo sum but also implies dubito ergo sum (I doubt, therefore I am). Applied to the usage of calculators, you understand why math teachers forced us to check our calculations and not just go with the result. Similarly with "information", "news" and opinions – just because it's written doesn't mean it is correct.

      Personally, I don't buy all the noise about AI being the great destroyer of our civilisation. It will definitely not help those who refuse to question the information they have been given, who don't try to fit it into a model of the world that, in itself, makes sense and can be backed up by facts. But then, the people who go along with whatever everyone else does were always there, be it hunting witches with pitchforks, marching in uniforms, or queueing for hours to get the latest iPhone.

      I like Kant's sapere aude (dare to know) and think this is one of the reasons I come here to read many of these Hackaday articles that make me push the boundaries of what I understand.

      Equally important is the ability to communicate and dissent, to throw ideas of understanding into the collider and play ball with people around you to deepen your understanding.

    1. yep
      and math scores are down, no one even knows what a slide rule is anymore, let alone how to use it.
      YOLO is in the dictionary, kids can't even speak without sounding like a drunken text
      Wikipedia is even more full of half-facts and made-up BS than it ever was, and kids still treat it like the Encyclopedia Britannica
      and internet searching now starts off with an AI-generated "fact blurb" that is rarely without factual errors.

      So THEY were and are correct.

      We are living in the prequel to both Idiocracy and The Outer Limits' "Stream of Consciousness".
      Wish humanity luck. We really need it.

      1. I could easily say the slide rule makes it too automatic, and if you really understood what you were doing you could construct most of your answers with a compass and straightedge. I’m sure you could probably find scathing criticisms of abacus users saying that they might be faster but they have no idea what they’re doing. A modern engineering calculator is no different, with its ability to solve various kinds of problem automatically if you know how to set them up. Not only that, but you’re putting yourself at an advantage if you’re able to use software to answer your questions. That can be something like gnu octave, or even a simulation program or overgrown equation solver made for the relevant topic.

      2. what a slide rule is anymore….

        In fairness do you know how to turn whatever plants make good fibres in your neck of the woods into those fibres, or use a drop spindle to turn fibres into threads etc. You have probably heard of the tool and understand the principle of its operation, and thus can quickly pick up the techniques if you need to but with machines that do the job better and faster… So your time is better spent finding good ways to use those fibres than doing all the manual slow work.

        Same thing with calculators (for now anyway) you still have to understand how mathematics work to structure your calculation, and could figure out how to use a slide rule or pen and paper if you had to. But being able to plug the numbers in and just get the result allows you to spend more time considering what the results mean and less trying to prove that you can in fact do the algebra.

        This “AI” and the internet in general on the other hand you don’t have to understand just copy and paste the result blindly in the hopes that post’s author (“AI” or fleshbag) actually knew what they were doing.

        1. you still have to understand how mathematics work to structure your calculation

          Increasingly not. Calculators today have symbolic solvers. Students these days can’t solve the roots of a quadratic equation without a symbolic calculator.

          The disability has crept from arithmetic to the analytical part of math, and soon you won't have to even formulate the equations – you just tell an AI to solve some problem, and it searches the web for the correct steps to it. In fact, the web is already full of "calculator apps" that solve astonishingly specific problems.
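          For concreteness, the analytical step being outsourced here is often just the quadratic formula. A minimal sketch in Python (illustrative only; the function name is made up, not something from the comment):

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    d = cmath.sqrt(b * b - 4 * a * c)  # discriminant; cmath handles negative values
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# Example: x**2 - 3x + 2 = (x - 1)(x - 2) has roots 2 and 1
print(quadratic_roots(1, -3, 2))
```

          Four lines of code replace the manual derivation entirely, which is exactly the kind of shortcut being described.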

          1. that solve astonishingly specific problems.

            And that means you still for now actually have to have some understanding to get to that very specific problem solving one stop shop – if you don’t understand the logic and conventions of mathematics enough you can’t find that specific one stop solver for the specific problem you are having.

            I agree calculators are getting better at doing everything for you, so it probably won't be long (though whether they will be reliable if you don't understand how to structure the question is another matter). But for now at least you must have some understanding to actually get the problem into a form you can enter into the tool and get a result.

            "If you don't understand the logic and conventions of mathematics enough you can't find that specific one stop solver for the specific problem you are having."

            Nah, you just google for something like “beam bending calculator” and pick the one that seems to match your problem. What the AI does for you is making it easier to define the problem even if you don’t know how to describe it very accurately, and then finding the appropriate solution even if you can’t evaluate its correctness.

        2. Apparently spinning flax into thread is really difficult compared to wool. And that is assuming you have decent quality flax fibers to work with.

          When I revert to a stone age man, I’ll stick with good old reliable nudity. I’ll just walk south if it gets cold.

    2. Yeah, and they were right (if you ignore the more apocalyptic hyperbole about how bad it would be). People don’t remember navigation very well now, they can’t memorize dozens and dozens of seven-digit phone numbers anymore like they did in the 1990s, most can’t do long division without looking up how to do it… It’s a trade-off.

      I definitely can’t walk a hundred miles like my ancestors could before they had bicycles and cars. Technology never simply supplements the body–it always necessarily replaces a part of it. As it always has. We’re just now reaching levels of technology which are capable of replacing large portions of the body, or the more symbolic and emergent portions of the psyche.

      Machines aren’t merely rote repeaters of actions anymore. They are becoming a little more human, and we are becoming a little more machine. It is a significant shift, which should be confronted honestly and not denied on one side or apocalypticized on the other.

  3. “Most ingenious Thoth, one man has the ability to beget arts, but the ability to judge of their usefulness or harmfulness to their users belongs to another; and now you, who are the father of letters, have been led by your affection to ascribe to them a power the opposite of that which they really possess.

    “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.

    “You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”
    –Plato’s account of the invention of writing.

    Yeah, not a new problem, and the viable solutions seem to be different every time except in so far as they are frustrating.

    1. Yes!!!! Thank you. I was just starting to draft something about Plato and writing.

      An anecdotal story tells of some Western economist visiting China and observing the workers using shovels to dig a construction site. He asked why not machines and he was told “employment”. He then asked “why not spoons?”

      Cars mean our “horse handling” skills have all but disappeared.

      GPS means basic orienteering and navigation skills have disappeared in most people.

      Artificial fertilisers and farm equipment mean we do not need 30% of the population devoted to food growth.

      All serious issues IF there's some societal breakdown. A total waste of time and energy if the system does not break down. And if the system does break down, I don't have much comfort that the people who refused to use Copilot to summarize long boring pointless documents will magically be able to become farmers and healers.

      I generally don't like the quality of the GAI tools right now, but if I did, and the overall quality of my writing or research or communication or interpersonal skills went down noticeably, my boss and/or spouse would complain mightily and take steps to encourage me to remedy that until the skills improved or I suffered a much bigger penalty.

      The world is full of penalties and rewards. Shortcuts and tools and simplifications are many many centuries old and have always had these risks which is why the Luddites smashed industrial looms. Good or bad, very few people bemoan the profound lack of weaving and spinning skills in the general populace.

      1. … which is why the Luddites smashed industrial looms. Good or bad, very few people bemoan the profound lack of weaving and spinning skills in the general populace.

        The Luddites weren't bemoaning the lack of weaving skill in the populace. They were concerned with the up-ending of their culture and economy. Weaving went from artisanal work done in the home to factory work. It transformed people from having autonomy to being factory workers without autonomy, working in dangerous conditions, because factory production undercut the price at which they could sell their goods (and their economy and culture collapsed). Don't be so glib.

        1. Every great advance in productivity means that some people can no longer survive using the old inferior methods. They have to find something else to support themselves; meanwhile there are more products available to make life better.

          Generally, factories were not terribly dangerous. Fabric mills in particular had jobs so easy that they were performed by children. That’s a great deal safer than starving to death at home.

      1. The counterpoint is that human knowledge has increased so much since Plato's time that nobody can hope to remember more than references to some small part of it. In that sense, devoting your energy to mastering some specific wisdom leads to a lesser understanding of the whole, due to the limits of memory and time.

  4. AI will replace software engineers just like CAD-CAE-CAM software replaced mechanical engineers, yeah. Coders on the other hand… but then it’s your fault for being a 21st-century equivalent of a guy very skilled in drawing lines with a Rotring Rapidograph and not much more beyond that.

    1. In fairness, English spellings are nonsense anyway – wildly inconsistent, using an alphabet that doesn’t really have the right sounds, with words stolen from French, Latin, Greek, German, etc. It’s also a very modern concept that there is only one ‘correct’ spelling – go back not very far at all and, while an individual might use the same string of letters for the same word every time, they quite likely don’t, as the important thing is that the letters form the sounds that match the intended word so it can be read.

      1. Unfortunately, it occasionally happens that phonetic misspellings reverse the meaning of a sentence. The worse the writer’s grammar is, the more common the reversal. English is already too ambiguous; misspellings make it worse.

      2. Spelling and grammar exist to make reading easier, and for any text that is read more often than it is written, it is useful to put the effort on the writing side. Therefore it is nonsense to ask “how can we make the rules easier for the writer”; one has to ask “what should the rules be to produce text that is most easily readable”, considering that consistent spelling across all texts makes reading as such easier.

  5. I’ve had a stab at getting AI to write some fairly simple code for me, and I wasted more time trying to coax it into a semi-working answer than just knuckling down and writing it myself. When I was finished I had code that didn’t quite work and that I didn’t fully understand, because I hadn’t really read all of it, just looked for the problems.

    My takeaway: don’t bother starting with AI; understand the problem in hand and try to write it yourself. I had much better luck feeding it what I’d written and letting it help locate any mistakes I made. Maybe let it write out some tedious, repetitive but easy bits, like a long switch case based on an enum.

    In the end it’s laziness that blunts critical thinking; AI is just the excuse, though it can still be a useful tool.
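
    For readers wondering what kind of “tedious, repetitive but easy” enum switch is meant here, a minimal sketch in C follows. All names are hypothetical, invented for illustration, not taken from any real project:

    ```c
    #include <assert.h>
    #include <string.h>

    /* Hypothetical state enum -- the sort of mechanical boilerplate the
       comment suggests delegating to an AI assistant. */
    typedef enum { STATE_IDLE, STATE_RUNNING, STATE_ERROR } state_t;

    /* Map each enum value to a printable name; the default arm catches
       any value outside the enum. */
    const char *state_name(state_t s) {
        switch (s) {
            case STATE_IDLE:    return "IDLE";
            case STATE_RUNNING: return "RUNNING";
            case STATE_ERROR:   return "ERROR";
            default:            return "UNKNOWN";
        }
    }
    ```

    Each case is trivial to verify at a glance, which is exactly why reviewing this kind of generated code costs less than typing it out by hand.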

    1. That’s pretty funny, my conclusion is the opposite :-)

      Start with AI Code, because it puts in all the rote stuff, then deal with the bits where it doesn’t work.

      AI has made me a much broader coder, since it comes up with solutions that I probably wouldn’t have, and it knows much more about, say, Python libraries, than I’ll ever be able to keep in my brain.

      But then, inevitably, something doesn’t work and I have to go in and figure out what it was trying to do and fix it – which requires me to understand code I didn’t write (a stretch) and to figure out algorithms I didn’t come up with (a big stretch), which actually makes me a better coder.

      Ironically.

      Ah, I’ve never tried feeding in my code to get it to help locate mistakes. I had no idea it would do that. Thanks for the tip.

    2. Honestly, I don’t understand how people say AI doesn’t work for coding… Yesterday, ChatGPT o1 made me a BTHome protocol parser with AES decryption for the ESP32 in under an hour. Last week I made the Zephyr firmware for my custom nRF9160 LTE-M sensor, buffering data sampled every second minute, implementing the buffered CBOR encoding and the CoAP send functionality. Does it make production-ready code? No, in no way, but it greatly enhances what I (a hardware guy) can comfortably test out – things I would not even have known how to start on before.

      1. I think the misalignment is that people mean an AI which fully replaces a human technician, which it can’t–but it’s a great supplement to a human technician, as you are using it. That just doesn’t grab the fear and imagination in the same way, though, so only measured people talk about it (and those people are usually already integrating it into their workflow).

        By the way, you should do some tutorial videos, or at least show off these projects you speak of; they sound very interesting. I’d watch them.

    3. I had good success by studying the code delivered by the AI, pointing out errors, making suggestions, expressing doubt about the proposed solution, and firmly holding the AI on track.
      If you don’t hold the reins, you just hit walls. I don’t know why fellow humans do not expect this.

    1. My grandfather could do complex multiplication and division in his head while us younger guys were scribbling on paper or looking for a calculator, all because he was taught those techniques in school. It’s just another example of a lost skill due to automation. I don’t understand why AI apologists bring up “muh calculators” like it’s some sort of gotcha comment when they’re just providing an example of how many of us are right.

      1. That’s always a fun parlor trick, but not everyone was as excellent as your grandfather (especially not today) and people need to do large piles of calculations, not a quick one-off to prove you can do it faster than a guy can retrieve his calculator from a drawer.

        If you were going for long-term consistency and scalability, you would choose the calculator… as people have done. Not to even get into the rabbit-hole of what is necessary to produce a generation of people like your grandfather… That kind of person doesn’t just spring up out of the ground wherever and whenever, it takes centuries to produce even a handful of people like that. But this subject gets contentious.

        1. My Granddad could add up the groceries about as fast as the clerk could enter them. Such numeracy was pretty common back then.

          It’s great to have tools that make us more efficient and accurate; it’s not so great to lose the ability to understand in broad terms the magnitudes you’re playing with. Without that, data entry or calculation mistakes propagate.

  6. Technology has been getting more and more streamlined/restrictive so you don’t have to think about things like configuration. Social media has been engineered to keep one inside bubbles that disincentivize critical thinking. Small children are subjected to overstimulating garbage à la Cocomelon that results in severe concentration issues.

    Cognition and critical thinking have been in steady decline for quite a while already – to the point that one could argue with some confidence that we were already f***ed in the long run anyway, with GAI being merely another entry on the list of contributors.

    Can this be reversed? Easily. Though people will have to want it, and that is the hard part – especially when many of these things are purposefully designed to lock you in to a degree that makes a toxic relationship seem healthy…

    1. Perhaps AI will solve the problem caused by social media. Soon enough, some of the ‘people’ you interact with on social media will be AI bots. People will think that is cool; some will have fun trying to make the bots say strange things. Then more and more ‘people’ will be AI bots, and not marked as such. They will try to befriend you, and eventually recommend ‘products’. People listen more to a friend, even an internet friend they never met, than to plain ads. They may also recommend politicians to vote for. As more time passes, most of the ‘people’ online will be AIs. They will have all sorts of interests, ‘support’ all sorts of sports teams, and so on. So whatever you like, you can find a friend. And they will all quietly recommend the products and policies of whoever pays for influence. At that point, the intelligent leave ‘social media’, and a few gullible losers hang on.

  7. I say the problem started with smart phones. My girlfriend can’t go anywhere without her GPS giving her directions. Too many people use their phones to do their thinking for them. Yes, smart devices end up making us stupid.

    1. Yep. It is all about convenience. If available, people will use it… then start relying on it, then can’t do without it. We only use our phone GPS when we fly into a city we aren’t familiar with – much more ‘convenient’ than reading a paper map in the car as you’re driving, as we used to do. When I get home from work, the cell just sits on the counter until I need it the next day for work. Not sure I’ll keep one when I retire; not sure it’s worth the expense.

    2. I love Google maps, especially for getting info about somewhere new, showing bike paths, traffic, etc… but I don’t rely on it or any other app for turn-by-turn directions. The map is info, and I do the navigating.

      That’s the problem isn’t it? Surrendering control to the app or AI, instead of just using it as a tool.

      1. And trusting the ‘tool’ rather than the evidence that it must be wrong at this moment – if you cease to judge the ‘instructions’ or data you are given for logical consistency with the world you can observe, you get stories of folks driving into rivers…

        The one thing I actually like a smartphone map for is the ability to zoom in an arbitrary amount, and I guess the aerial/satellite view toggle can be very handy too. Otherwise give me a paper map every time – it won’t run out of battery, won’t decide it knows where it is and which way is north when it doesn’t, etc.

          1. Indeed, but sometimes you really need to know whether that strange feature on the normal-scale map is or is not a passable route, or what the topography is over a slightly larger area, to figure out line of sight, etc. Also, the mapping tools I’ve used always keep the scale visible, so while you can’t just glance at the map the way you can with a standard paper OS map, where an inch means a predictable distance, you can figure it out pretty easily.

            For instance, on the regular road map, to get to the place where I just bought a second-hand welder, I’d have expected to be able to approach from about eight junctions that, it turns out, don’t actually connect to that block of roads (though some clearly did, or were intended to, at one point), but that wasn’t clear at the normal map scale at all. Enough to make you suspicious that some of these roads might not connect, but it’s only that arbitrary zoom that reveals it for sure.

    1. People bring up Idiocracy in reference to the diminishing intelligence of humanity.
      Humans are an afterthought in WALL-E; the fatties in their entertainment lounges relate more closely to the trajectory of our society and its diminishing vocational possibilities.
      WALL-E neglected to show the “picker class” surviving off the refuse of the “abundantly blessed”.
      We are headed towards a world where the wealthy and the fortunate few deemed worthy live in robotically staffed corporate communal smart cities, living some variation of a WALL-E life, while everyone else is in walled cities full of jail-cell-sized Universal Basic Housing, waiting on their Walmart-provided, food-stamp-funded, bot-delivered rations, living the UBI life.

  8. The issue will get progressively worse, however: as the pool of knowledge is blunted by AI, the resulting more mediocre pool is used to train the next generation of AI – resulting in an increasingly rapid reduction in human capability.

    1. The “progressive” worldview (i.e. line always goes up vs. everything happens in waves or cycles, not specifically a political thing) blinds us to this; we expect that AI will continually and naturally self-enhance through use, instead of degrading… Which is not true of any other machines, or even true of humans. Humans didn’t get to where they are now by simply throwing cognition at a wall. It was a few specific and important interventions which created each large leap forward.

      Even nature would have been content with allowing all of life to remain bits of yeast drifting in water if it weren’t for cataclysms.

  9. Why would I want to remember phone numbers when I don’t need to? I’d rather use that part of my brain capacity to learn musical theory, or Greek or whatever.

    Also, this is not new. If you look at a longer timeframe, these things happened all the time, for better or worse. Most people lost the skill of growing their own food. And they can’t defend themselves with a sword if someone attacks them with one. And phone numbers were relevant only for a small period of time.

    On the other hand, everything points to the same conclusion: people need to learn to think for themselves or they are in trouble – as they pretty much are and will be until they do.

  10. “…the use of GAI tools tends to shift the worker’s activity from information gathering to information verification, as well as from problem-solving to AI-output integration. Effectively the knowledge worker thus becomes more of a GAI quality assurance worker.”

    This is clear proof that GAI is currently being deployed too widely and inappropriately.

    Think of human knowledge workers – engineers, programmers, lawyers, doctors, architects, etc. The first part of their career consists of learning the rules and procedures, doing the little checks and research behind bigger jobs… learning their field, basically. Only when they have sufficient mastery of the basics are they permitted to move onto more responsible “generative” activities.

    Letting this current level of LLM do the generating is like handing control to some reality-challenged mediocrity that was educated by several years of daytime TV.

    The killer app of AI will be when it can be relied upon to VERIFY its own or other output, and to produce DEPENDABLE summaries of valid information – in other words, when it’s as good as a dependable junior trainee in a given field. This will require coupling capable AI front ends to expert databases and other sources of expert information.

  11. “It is fine for us to say, ‘Well, we’ll make machines that allow us to do certain kinds of work better.’ But when we’re told ‘Okay, that machine becomes intelligent. It has agency. And moreover, that it follows an inevitable path of history in which workers have no agency, in which workers’ only role is to retrain and move on and not to participate in the decision about how the machine will be used,’ is to make the category error that the important thing about a machine is what it does, and not who it does it for and who it does it to.” — Cory Doctorow
    https://youtube.com/shorts/SPCUmdmGviE?feature=shared

  12. I kind of did NOT want to bring this up, but you are trying to re-write the books where such topics are described in great detail. Not AI per se, but information technologies in general, and unlimited access/abuse.

    I highly recommend Neil Postman’s “Technopoly” as a good starting point (technology for technology’s sake, with uncertain/unclear strategic goals), but IMHO it is not his best book on the topic; “Building a Bridge to the 18th Century” is, and it is a rather sad demonstration of the degraded human cognition mentioned here that the latter is ignored, probably because of the misleading keywords “18th century”. Both deserve thorough reading, and “Building a Bridge…”, being smaller and denser with thought, merits being made into a textbook. Yes, it covers the 18th century, but it also covers how the 19th century dealt with the 18th century’s legacy, and how the 20th century followed the same thread (the book was published in 1999, so obviously it has nothing on the 21st century, and, sadly, Neil Postman passed away in 2003).

    I won’t offer any spoilers; do your own homework. (As a side note, both books are at the level of the average Joe with a high-school education – easily read and understood.)

    Foreseeing potential rebuttals along the lines of “he probably was too old to appreciate or understand AI or computers in general”: no, he wasn’t, but his approach to such things was a thoughtful examination of “does this make my life better/simpler/richer/etc. overall”, which, as coincidence has it, is what the Amish are about – careful examination of whether their lives will be better off with or without something (or with controlled use reserved for certain situations).

    1. Neil Postman, protege of Marshall McLuhan, both adjacent to Allan Bloom. All of them, and their successors, seem to gloss over little things like slaves supporting “literate” aspirational cultures (too many examples to list), an average life span of 30 years for most of the population for centuries, massive infant mortality, etc. etc. etc. Know what Gutenberg’s press revolutionized? Porn, trashy reading, heresy…. Yeah, bread and circuses is bad. Know what’s worse? No bread. 30% of the population less than a century ago was needed to feed the rest of the populace. Now it’s less than 1%. Technology is awesome.

      1. My reply was removed, but the gist was this: not ALL technology is bad, and one example was artificial nitrogen fertilizer. I suppose the reply was removed because of certain keywords, but it is the Haber-Bosch reaction, and it produces another item loved/needed by the military. You can guess which regime used it to help its military get past the UK-controlled saltpeter trade.

        Regardless, this wasn’t the main point of my reply, which concluded that no, I am not against artificial nitrogen fertilizers, which allowed humanity to push past the 2 billion population mark that was, at the beginning of the 20th century, speculated to be the upper limit. With further advances in farming (pesticides, etc.) and nearly complete mechanization, it became possible to feed an additional 6 billion (as far as food distribution goes, that is a topic for a completely different/unrelated thread – with the industry of plenty there WILL be hoarders and all kinds of artificial bottlenecks to drive prices up, but the reality is this: we ARE feeding 8 billion, going on 9 billion probably soon enough to notice).

        What changed overall is this: it is the first time in human history that we have such a humongous population, which also means that there is an overabundance of able-bodied and able-minded people to draw from.

        Now, keep in mind, every capital investment looks for lower costs, and this includes labor costs – this is important, because sooner or later every capital needs some kind of lowest-paid workers (ideally they’d rather NOT pay anything at all – hence slavery, or near-slavery, which is not that different: working for shelter and food IS the definition of slavery; if all you can afford is mortgage payments and food, you are already halfway there).

        Traditionally, international capital looks to “export capital”, which is a fancy term for shipping operations overseas/south/east/north/elsewhere. Textiles in the 17th century were the prime example of this – and not the only one, just the one that has been documented reasonably well (watchmaking in the 19th century is another – that’s how Switzerland ended up being the “prime maker” of mechanical watches, due to the twists and turns of outsourcing watchmaking from the UK to Switzerland, which at the time was a backwater of the industrial revolution).

        The Gutenberg press is a large topic to cover in one paragraph, but suffice it to say, it was the Catholic church that went out of its way to destroy it; but as history has shown, people in general don’t just want to read copies of the Bible, and once out of control, it went many ways, including “trash reading”, in addition to school textbooks (don’t forget that part – it IS important) and daily news/rumors/etc. I won’t focus on other uses; they are marginal and won’t spoil the movable-type printing press, a technology which made worldwide literacy possible and directly contributed to driving gross national products up.

        I mention Neil’s books because they are easy to read and comprehend, but there are better treatises on the subject. The point of my earlier reply was: “reinventing a wheel that has already been invented is not new – while the points brought forth are understood, they HAVE been thought of and discussed at length before”.

  13. “If you cannot blindly trust a bit of complicated looking code pilfered from StackOverflow or GitHub, why would you roll with whatever ChatGPT or GitHub Copilot churns out?”

    Because on stack overflow you can see that the answer was written by OMGpuppies23, and that YOMamma2Fat disagreed and gave a different solution.

    ChatGPT usually gives you a single answer, and no attribution or evidence of disagreement.

  14. When I was hit by a car, I woke up in the hospital and I couldn’t remember anything. Fortunately, a girl I was sort of seeing had given me her phone number; it was in my wallet. They called her, she called my mother. All turned out well, but since then I’ve always remembered my phone number, address, etc. Anything that needs power, be it batteries, grid power, etc., is useless when there is no power. I’d rather rely on my memory to remember my phone number. No substitute for good old-fashioned pencil and paper.

  15. At last! – Someone NOT jumping on the AI bandwagon – well considered and thought out…

    I have said for a while that using less-skilled staff plus “AI” removes the opportunity for innovative and insightful staff to use their skills to improve themselves and their results or productivity, as the AI provides no way to integrate their innovations or ideas. It keeps skills down to the level required rather than letting them flourish – the same as is happening in schools…

    1. Well, when you learn a trade like programming, it’s like construction: AI will never physically be able to build a house. It seems today a lot of youth can’t do anything without their phones. Now it seems people can’t do anything without asking Google, or looking at their phone. Critical thinking and other life skills are going by the wayside because, why not, we don’t have to think anymore; we can just use AI or Google to get the answer instead of working out a problem for ourselves. We have a whole generation who have never known a world without computers or the internet. Me, I’m considered old-fashioned because I rely more on pencil and paper, and this wet thing between my ears. There are kids and even adults out there who can’t even do basic math, or make change. I’m not talking long division or calculus; I’m talking the basic math skills you start learning in elementary school. Now with AI, kids can just ask Alexa for the answer. In a way it’s sad, but this is the world we live in today.

    1. The quality of television over the years and decades has, in my humble opinion, definitely declined. I remember a new show called “The Jerry Springer Show”. That was one of the worst. I watched one episode, and all I saw was that arguments were encouraged, plus fistfights and violence… Yep, there’s good quality television right there. Can you imagine what AI would come up with if it were trained on garbage like that?
