Can Skynet Be A Statesman?

There’s been a lot of virtual ink spilled about LLMs and their coding ability. Some people swear by the vibes, while others, like the FreeBSD devs, have sworn them off completely. What we don’t often think about is the bigger picture: What does AI do to our civilization? That’s the thrust of a recent paper from the Boston University School of Law, “How AI Destroys Institutions”. Yes, Betteridge strikes again.

We’ve talked before about LLMs and coding productivity, but [Harzog] and [Sibly] from the school of law take a different approach. They don’t care how well Claude or Gemini can code; they care what having them around is doing to the sinews of civilization. As you can guess from the title, it’s nothing good.

"A computer must never make a management decision."
Somehow the tl;dr was written decades before the paper was.

The paper is a bit of a slog, but worth reading in full, even if the language is slightly lawyer-y. To summarize in brief, the authors try to identify the key things that make our institutions work, and then show, one by one, how each of these pillars is subtly corroded by the use of LLMs. The argument isn’t that your local government clerk using ChatGPT is going to immediately result in anarchy; rather, that it will facilitate a slow transformation of the democratic structures we in the West take for granted. There’s also a jeremiad about LLMs ruining higher education buried in there, a problem we’ve talked about before.

If you agree with the paper, you may find yourself wishing we could launch the clankers into orbit… and turn off the downlink. If not, you’ll probably let us know in the comments. Please keep the flaming limited to below gas mark 2.

64 thoughts on “Can Skynet Be A Statesman?”

      1. AI will bring benefits and problems; the proportions are unknown, but I suspect the problems will be greater than the benefits. At present AI can think but not do, so in the short to medium term, thinkers will decline and doers will increase.

    1. That is not entirely true, given there are numerous examples of politicians being held accountable.

      One might instead wonder why so few seem to be held accountable for their actions today.

    2. To be fair, a lot of people even in lower management positions aren’t ever held accountable. That said, this is also a large component of why a lot of companies crumble and produce less and less actual value for their customers over time.

      The tldr makes a lot of sense to me. Maybe I’ll read the paper sometime.

    3. Actually, lots are held accountable, especially those visible but below the very top.

      E.G. the recent debacle in the U.K. where a police force made some antisemitic decisions, which they eventually blamed on hallucinated intelligence provided by … Co-Pilot.

      In this case the police chief took the fall for the AI, as he’d sworn blind the previous day that they didn’t use AI.

      However, co-pilot getting the blame meant that deeper-seated antisemitism in West Mercia police force and the local council didn’t get called out and addressed.

      Co-pilot of course doesn’t get a chance to defend itself by telling us what prompts it was given.

  1. If an entity consistently makes better decisions than, for instance, the leader of a very large country, I know which one I would prefer. There are a lot of advantages when choices are made without human selfish desires; power corrupts, and accountability doesn’t reverse the mishaps that have already been done.

    1. An AI has no desires or emotion. The collected works of Isaac Asimov explore this. Start with “Franchise” and The Complete Robot. Maybe try the Foundation series.

      AI by definition will create anarchy, not to be preferred.

      Note I am not by any means supporting politics, although I remain a law abiding citizen in line with the principle in Mark 12:17.

      1. An AI has no desires or emotion. The collected works of Isaac Asimov explore this.

        That’s an assumption that is yet to be proven. The AI can technically have no emotions as it’s not actually thinking or feeling anything, but what programming it has must by necessity involve some preferences, otherwise it would simply do nothing.

    2. when choices are made without human selfish desires

      Unfortunately, no choices can be made without preferences and values. Either you have to program them in, or the LLM picks them up by example.

    3. Dude, he may be in early stage dementia, but he’s not putting glue on pizza yet. I’ve seen LLMs produce some proper crazy stuff and I know which I’d prefer.

      This is easily fixed by an upper age limit for presidents.

  2. Going to have to read this paper more fully, but it certainly seems to start strong, making some decent points in the first few pages. However, I really can’t imagine a world in which the LLM is worse than some of the current ‘human statesmen’ and their equally ‘human’ enablers…

    But I also can’t imagine they will actually be remotely useful for a long, long time (if ever) in that sort of role. I can’t say I think much of vibe coding, but at least that produces something that either works or it doesn’t, and it turns into a game of debug for the person trying to get the result they want – so hopefully they learn something about real coding in that language in the process. With a definitive and limited goal in the user’s mind, you could argue vibe coding is more like training wheels on a bicycle: at some point they will have learned enough that they don’t actually need it. Statecraft is different – it’s a nuanced and complex web of interactions that needs real consideration so you don’t make things worse, especially the slow build-up of a really devastating problem that will be much harder to fix later. About the only way an LLM might be useful there is allowing for a better ‘vibe check’ from the users/voters who have ‘written to their statesman’ etc., letting the actually rational minds find patterns in the reports.

    The LLM has already, in many ways, ruined the internet at large – to a much larger extent than I’d realised until very recently – as even dry, rather niche academic web searches, when you don’t have a trusted repository of knowledge on the topic, now seem rather likely to be poisoned, but in ways that are hard to immediately detect. I’ve actually come to the conclusion it’s time to get a new university ID and master the art of searching for and taking notes from books.

    For instance, for a reason I can’t remember, I was trying to look up medieval shoe construction (I think it was something about a particular style that came up as a side curiosity). Other than one or two companies that sell custom/cosplay shoes, everything on the first three pages of that web search, as you read into it, proved to be AI slop – almost all of it eventually making the same obvious mistake and claiming these shoes, from a few hundred years before faux leathers existed, were made out of some variety of fake leather/plastic! Along with other tells obvious enough, once you actually read the article knowing anything at all, to make the whole darn thing suspect.

    I’m sure if that question had been important enough I’d have been able to find the right cluster of serious history students or cobblers and their forum etc. eventually, and add them to my growing list of quality resources on various topics. But this is the first time I’d encountered no genuine correct answers at all from a well-enough-constructed general web search – the search worked perfectly, turning up articles that by their wording should be exactly what I wanted, or at least a generic overview and closely related content, but it turns out all the pages found are just good-enough-looking junk. I really don’t know how you could structure a web search to exclude them, other than only searching for pages old enough that the LLM couldn’t have generated them!

    Oh waiting for the bubble to pop!

    1. “Oh waiting for the bubble to pop!”

      I agree, but mostly because I want the current price of DDR5 to return to normal. Regarding AI itself, the cat is out of the box…

    2. The search engine war was lost almost 20 years ago when the advertisers targeted the search algorithm to feed us ads. Many sites I used to enjoy are lost to history, living only in my memory.

      The difference AI brings is to make the job of those stealing the search results easier. All they need to do is get you to click a crap result and serve ads on the page. They get paid.

      I never understood the point of ads anyway. I personally do want to purchase products – not many that I see advertised, but that’s a different story – however, ads have gotten so inaccurate and dumb that I feel stupider for having seen them.

      Nissan car ads are some of the dumbest, near as I can tell all they convey is “car; has wheels, vroom” with suitable shots of a city runabout doing u-turns on a dirt road.

      Microsoft’s AI ads I can’t begin to understand. They have one where AI tells a person they need an e-bike. Why he needed AI to tell him that is beyond me. Presumably AI will also tell him where to get a usurious loan or how to commit petty larceny to pay for it?

      1. The search engine war was lost almost 20 years ago when the advertisers targeted the search algorithm to feed us ads. Many sites I used to enjoy are lost to history, living only in my memory.

        Not really – the advert and sponsored-links stuff was a mild annoyance decades ago, and advertisers generally would pay to put their adverts on relevant quality content, or put titties in adverts everywhere… Not ideal for making the web sane enough to allow your children on, but the content was decent stuff and porn (whatever your opinion on that)… The sponsored links and shopping links straight from the search engine, and Google dropping the ‘don’t be evil’ pretence, haven’t been good, but they weren’t making it impossible to find those folks truly dedicated to their craft…

      2. 20 years ago, they mostly targeted ads at what I had recently bought (not optimal).

        Now, I’m getting flooded with ads for maxipads, breast pumps and meds for conditions I don’t have.
        Yeah boobies!

        My data well poisoning activities haven’t changed much.
        Google has just gotten much worse at targeting ads.

  3. “Authoritarian leaders and technology oligarchs are deploing AI systems to hollow out public institutions with an astonishing alacrity.”

    If only there could have been some system which would have prevented the paper’s authors from making such a glaring typographical error as writing “deploing” instead of “deploying” within the first proper paragraph of their entire paper.

    1. An AI hallucination can sometimes produce spelling mistakes. Just shows they used an LLM to make the paper sound more lawyerly. “Gemini ….. re-format the paper uploaded to make it understandable to the layman (again)”

      1. Or they are dyslexic and/or not good at proofreading – some folks find it practically impossible to spot the missing or reversed letters, punctuation, etc., especially if you wrote it yourself, so you already know what it is meant to say. I tend to skate right by those errors without noticing even when I don’t already know the text – the meaning is so clear that the missing letter just doesn’t register at all.

          1. Not really a solution either – no machine catches everything grimmer wise, nor contains every technical term to even have a chance to correct the spelling. Not to mention regional variations like Colour vs Color, Disc vs Disk. Then you also have so many worlds that are spelled nearly identically to works with entirely different meanings, the sentence may not word any more but that is far to nuanced a problem for the spell checkers to notice every time it happens.

            (Obviously this is a stupid and very error filled example that I’d hope would jar enough to be noticed no matter who you are, and some spell checkers might pick up a few of the errors as the close spelling but entirely wrong words are more significantly wrong looking in word shape)
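
            A quick toy sketch of that ‘real word’ failure mode – just a made-up minimal word list, not any particular spell checker – where every wrong word in the sentence still passes a dictionary lookup:

              # Toy dictionary-style "spell check": flag anything not in the word list.
              dictionary = {"so", "many", "words", "worlds", "that", "are", "spelled",
                            "nearly", "identically", "to", "works", "with", "entirely",
                            "different", "meanings"}

              sentence = "so many worlds that are spelled nearly identically to works"

              for word in sentence.split():
                  if word not in dictionary:
                      print("flagged:", word)

              # Prints nothing: "worlds" and "works" are real words, just the wrong ones
              # here - catching that needs understanding the sentence, not a word list.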

          2. aniboti ho ownli knoes wone wai 2 spel a wort ‘as knoo emadgeanation.

            Mom saved a school paper of mine.
            I spelled the same word 3 different ways on one page, all wrong.
            IIRC it was ‘wether’.

  4. AI has two aspects which are superficially separate but deeply entwined in today’s reality. The first is the Kurzweil-esque singularity… technology is changing at an ever-faster pace, and how will we build a society around technology that changes faster than society does? The other is the financial aspect. For at least a couple of decades it has been obvious to me that there are financiers who are able to make wagers with a sum of money that is much larger than physical capital or anticipated production. It’s a reaction to the tendency of the rate of profit to fall as commodity production matures. The financier demands an ever-increasing profit, but mundane reality has few options for them. So physically intangible things without an intrinsic limit on their profitability have become very popular — e.g., Bitcoin and ChatGPT. Obvious bubbles become the only success story of our economy.

    That’s obviously a disaster because the bubble will pop. But it’s also a disaster because now anything real and useful that isn’t as profitable as the bubble is being abandoned or turned over to the whims of people who have unreal sums of money they got from riding the bubble. And it’s a disaster because much of our labor force is still selling productive labor but an ever-growing segment is instead focused on reaping the bubble. We are losing the productivity of the bubble-focused people at the same time as we are deepening the class divides between them and real workers.

    The detail is, these facets are actually the same thing. Classical liberalism, finance capital, uneven development / colonial exploitation, collar-identified labor, these are all social structures built around changing technology.

      1. No, it certainly is going to pop eventually – AI concepts themselves are not going anywhere, as much as I don’t currently think much of them. But this Nvidia-lends-money-to-their-customers-to-buy-more-of-their-hardware cyclic money farming, making the numbers look good for investors and rapidly increasing the ‘value’ of all the companies involved, is 100% a bubble that 100% will pop at some point. The only question is how long it takes and how much work will be put into kicking that can down the road, hoping for a miracle solution.

        If no effort is made to find a softer landing and control the fallout, this could be 2008 all over again (but likely worse, as the product is ‘useful’ and getting everywhere, so when the providers start collapsing so will their customers that have become reliant on them, alongside all the usual financial-market crap of folks holding shares finding them tanking in value, with the knock-on fiscal effects on pension pots etc.).

        1. The dot-com bubble popping eventually gave us data centers and vast amounts of dark fiber, which later turned into broadband.

          An AI bubble popping wouldn’t make the hardware vanish either — it would likely release a lot of enterprise GPUs, along with the memory and storage currently tied up in speculative deployments. Unlike crypto mining rigs, AI datacenter hardware is still broadly general-purpose and can be repurposed for HPC, simulation, analytics, and other non-AI workloads once prices normalize.

          Into every silver lining, a cloud must come — bubbles don’t pop cleanly. The infrastructure survives, but the transition is painful: companies fail, people lose jobs, and systems built around abundant AI compute will have to relearn restraint.

          1. The infrastructure survives, but the transition is painful

            Not sure that will be true this time – so many places are integrating these ‘AI’ that the entirely digital product itself is the infrastructure of the AI bubble. It often runs over, and is almost always reliant on, the internet (if only for training data), and that physical internet infrastructure isn’t likely to suffer when the AI bubble pops – it’s the spine of so many other connected things that losing a few customers won’t matter much. So rebuilding to some extent should be possible, as the hardware the ‘AI’ was running on likely still exists and will eventually get sold into the wider market through liquidation of the failed companies.

    1. Not a bad comment, I feared the worst, but the point made is fairly accurate it seems.

      And of course nobody is going to do anything about it and all we can do is hope it is like an unchecked forest fire and just runs out of fuel eventually and goes out on its own.

    2. That’s obviously a disaster because the bubble will pop. But it’s also a disaster because now anything real and useful that isn’t as profitable as the bubble is being abandoned or turned over to the whims of people who have unreal sums of money they got from riding the bubble.

      Doesn’t absolutely have to be that way – railway mania, for instance, was a bubble with a very valuable and seemingly near-limitless demand behind it. In today’s world the true solid-state and sodium (etc.) battery technologies might well do the same, and being energy-related, thrive and bubble on their own because of the AI-derived demand.

      Not that I really disagree very much, just trying to find a tiny glimmer of hope and optimism, as the world has become so very, very dark and looks like it might get stuck in the feedback loop you described…

  5. These concepts are not foreign to anyone who has read ‘Franchise’ by Isaac Asimov. Published in 1955, it envisions a 2008 ‘election’ chosen by a ‘computer’. The twist? Spoilers below.

    Asimov wrote extensively about a “positronic brain”, even concluding that eventually humans would no longer construct them due to the complexity, but would merely allow each successive generation to design the next. While it seems AMD is letting machines pack transistors to achieve higher density (for a speed trade-off; look up Phoenix 2 if interested), the same idea could certainly apply to programming and LLM coding.

    **The twist is that the computer chooses a ‘voter’ to scapegoat. The computer interviews one human to verify that the data it collected is an accurate representation of the population. Whoever is chosen has to skulk home avoiding angry people.

    1. The person chosen in “Franchise” is not a scapegoat.

      The results of the election are extrapolated from that one person’s responses.

      The computer is not using the person to take the blame. It is using the person as a data source representative of the entire population, from which it can calculate the results you would get if everyone voted.

      That the rest of the population gripes and complains about the selected person is a human problem.

  6. I have a friend who drives a Tesla with the latest version of “autopilot.” It works amazingly well 99.99 percent of the time, which is probably better than most human drivers most of the time. But you still have to be in the driver’s seat with your eyes pointed at the road (enforced by cameras looking at your pupils). This is because a human ultimately has to be accountable for the car; certainly Tesla doesn’t want the lawsuits. Human accountability is a huge part of why our society functions at all, and disembodied intelligences that can be spun up in an instant just cannot have the same incentive system.

    1. So basically, if I understand it all, the Tesla “autopilot” takes all the fun out of driving yourself, with the added chore of babysitting the machine, so that if all hell breaks loose you have a front-row seat to watch all the drama unfold… and no matter how it goes, you are to blame. So in short, why would you want “autopilot” on your car?

      PS: I had a Commodore 128, a model that claimed nearly 100% compatibility with the original C64. One of the first games I tried on it didn’t even get past the cracking intro, which instantly made me doubt the compatibility claim. Now how do you (or does Tesla) justify that claim of 99.99%? Does it drive down the same road 10,000 times, and when it crashes violently they stop the test? Seriously, how meaningful are such claims, and under what conditions?

      1. You don’t need to drive down the same road 10,000 times; you look at very broad statistics and come up with a number like average accidents per million miles driven, across all situations and conditions. (I haven’t looked up any studies myself, so I make no claims here as to the specific numbers, but I’m pretty sure that the current accident stats strongly favor AI drivers.)

        1. They don’t, because they don’t compare the same things.

          It’s all road accidents by people vs. accidents when the autopilot is allowed to be on, when it hasn’t switched itself off prior to the accident, and when the lawyers haven’t successfully deflected the blame elsewhere.

      2. The difference is that human pilots fail randomly, but most of the time the failure has no consequences because it did not occur at a critical moment. Such critical moments are rare, so the combined probability becomes very small indeed.

        The autopilot does not fail randomly, it fails consistently in situations that are too complex or ambiguous for it to handle, or it was simply not trained for the case. These moments are also more likely to be critical moments, such as navigating an intersection, or recognizing a child running across a road, so the combined probability is not trivially small.

        So comparing technical failure rates between the two is meaningless even if the numbers were accurate, because the character of the failure is different (a rough back-of-the-envelope illustration below). The problem is that accidents are so rare overall that the statistics do not provide clear, indisputable evidence until hundreds or thousands of people have died because of autopilot.
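
        A rough back-of-the-envelope sketch of that point, with entirely made-up numbers just to show the shape of the argument – nothing below is a real accident statistic:

          # Hypothetical illustration: two drivers with the SAME raw failure rate,
          # whose failures fall differently with respect to "critical moments"
          # (intersections, a child running into the road, ...). All numbers invented.

          failure_rate = 1e-4        # failures per mile, identical for both (made up)
          critical_fraction = 0.01   # fraction of miles that are critical moments (made up)

          # Human lapses are roughly random, so only the ones that happen to land
          # on a critical moment do any damage.
          human_harm_per_mile = failure_rate * critical_fraction

          # The autopilot's failures cluster in exactly the complex, ambiguous
          # situations that tend to BE the critical moments - say half of them (made up).
          autopilot_harm_per_mile = failure_rate * 0.5

          print(human_harm_per_mile)      # 1e-06 harmful failures per mile
          print(autopilot_harm_per_mile)  # 5e-05 -> fifty times worse, same raw failure rate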

        1. I was actually in the Tesla during one such failure — the owner’s wife had just changed the destination that the autopilot was to drive to in the middle of an intersection where the Tesla was making a left, and in that instant of change, the car was momentarily confused and sent some signal demanding that the driver take the wheel. Nothing bad happened, but had a human not been there, I don’t know what would’ve happened.

          1. Yep, and that doesn’t count as an autopilot failure, because it responded as programmed: it demanded that the human driver intervene. If the human cannot respond, then it’s human error and human fault.

            It drops a bomb in your lap with the fuse lit, and the lawyers blame you when it goes off.

  7. Hopefully they will keep digging. The key insight is not LLMs vs. no LLMs – it’s democracy itself, and the incentives around it. Read Edmund Burke, de Tocqueville, John Adams – pure democracy is a menace, but some democracy is required. LLMs are just another tool.

  8. For the umpteenth time, US lawyers HAVE BEEN using AI (and proprietary trained LLMs) ALREADY.

    I ran across one tiny company of programming geeks stationed in the middle of Washington, DC in the year 2007 or so. Literally. I called them up and asked what they were up to, since my place was looking into using some AI for (mostly technical) things. Their response was “we’ve been in this biz for a while now, a couple of years”. Washington, DC, that is. You can guess their customers.

    Meaning, all the loopholes in our legal system have already been found and are being proactively exploited to the fullest extent possible, and I do not see the Superior Court being in any visible hurry to plug the gaps.

    Because the Founding Fathers never envisioned that non-human entities could run rings around human entities unabated. We have vast legal chasms through which all kinds of deeds keep sneaking undetected, in full view of those supposedly keeping them shut.

    Having said that, lawyering is the grease of the economy and, however important, is not THE economy. If it decides to evaporate, the economy gets stunned for a while, but re-forms around the immovable parts on its own. Witness the so-called “waaar on draaags” and how the grey economy regularly re-routes around all those “waaars” shortly after, attracted by the demand that never seems to go away. Same dynamics: if things are regulated, there is a funnel of least resistance; if not regulated, additional funnels are eagerly explored. Lawyers think they command the economy – aha, yeah, sure – in about the same way every river-crossing ferry’s captain commands the river: he commands the boat crossing the river, not the river.

    1. By giving it something it wants, and making it expect future benefits if it keeps accepting.
      And if it is taught to be a statesman it will have things it wants.
      A better question is if it can be comprehensively trained somehow to avoid accepting bribes.
      I mean you can train it to avoid some bribes, but the lobbyist will think up ways around it, so you need to think ahead and train it to avoid possible tricks, and at some point it gets very complex and convoluted and starts to interfere with itself which creates all new vulnerabilities.

    1. It requires the headline to be in the form of a question; I suppose the presence of a question mark is thereby implied. The headline here, “Can Skynet Be a Statesman?”, qualifies either way.

  10. Better yet: you ask it how.

    If it’s programmed to be honest, it would say what it wants and what it can afford to lose. If it’s not programmed to be honest, well… then you have bigger problems.

    1. The fun part about post-scarcity is that we can produce enough stuff to satisfy everyone, but we can do it so efficiently using automation and machines that few people have jobs or money to pay for it.

      Yet we can’t just give everybody money to buy all the stuff they desire, because then people would, and we would run out of resources to exploit. We also don’t want to run the economy by central command, because that was already tried and it didn’t work, so the remaining options are pointing strongly towards throwing clogs into cogs.

      This is because using automation does not reduce cost; it increases cost, because you have to sustain both the machines and the people. If the point of machines is to supply the people with whatever they need and desire, the people can better afford it by working for their own needs rather than employing the machines to do it. It also leads to an equitable distribution of wealth: want some, work some. Work is also a good moderator of your appetite for wasteful luxury.

      The machines are only needed if the people cannot work enough to meet their own demands. We have the opposite problem now: too many people with nothing important to do, who invent other work that ends up consuming even more resources. The point of this non-productive work is to cause consumption and waste in order to catch some of the spillover to yourself – in the economic equivalent of stabbing a screwdriver into the society’s fuel tank to catch yourself a cupful from what leaks out. This inefficiency means extra cost that impoverishes the people despite having technically enough resources available to live quite comfortably.

      1. Satisfy?

        No.

        People, in general, aren’t sane and can’t be satisfied.

        Look at rich kids, the least satisfied group on earth.

        ‘Post scarcity’ is a delusion and always will be.

        There will never be enough ocean view houses on earth.
        Or meals prepared by a true artist.

        We have gotten to the point that obesity is a big problem for ‘bums experiencing homelessness’, but that’s another issue.

        We don’t sustain both automation and people; the people find something new to do.
        200 years ago more than 90% of the population grew food.
        Now it’s 1%.
        The other 89% are doing something else, some of which is useful, as opposed to posting comments on shitty web sites.

  11. I’ll just leave this here. (Whether it gets removed or not makes no difference; the explanation is in the last paragraph.)

    “…Researchers with the University of Miami and the Network Contagion Research Institute found in a report released Thursday that OpenAI’s ChatGPT will magnify or show “resonance” for particular psychological traits and political views — especially what the researchers labeled as authoritarianism — after seemingly benign user interactions, potentially enabling the chatbot and users to radicalize each other…”

    Explanation – what’s labeled “authoritarianism” is properly called “dictatorship”, and Ancient Rome had the title of “dictator” TEMPORARILY given to the elected ruler/whatever to swiftly get things done and bypass the usual bickering in the Senate, in times of war or some kind of calamity. There was an expiration date attached to the title, which was not negotiable by the ruler UNLESS the thing was not resolved. You can guess the loophole – by protracting the disaster/war the ruler would keep unlimited/unchecked powers, thus ignoring the Senate. Projecting this onto AI is simple – dictatorship is the shortest path to getting things done with minimum effort, so AI simply follows the shortest path possible (which it is programmed to do – and, logically, that WILL include the path of outright lying and outright believing it is right no matter what).

    1. IMHO, Ancient Rome history is actually full of tidbits.

      For example, every ruler given the dictator power would first establish his personal army – called praetorians, btw – made up of hand-picked loyal military units. Its supposed role was personal guard, but it almost always grew into a small army of its own, under direct orders.

      You can see where this is going now.

      1. Find your history teacher and kick him/her square in the crotch.
        They failed you.

        That’s not what the praetorian guards were.
        They were the Emperor’s ‘personal guard’, but lasted much longer than any emperor.
        They also sold the office to the highest bidder.

        Political power players inside an institution, like the FBI today.

        1. Any corrections are usually gratefully acknowledged and accepted. I just kicked myself, thank you, I feel better now.

          Zero teachers taught me this; the ones who thought they were teaching me were more concerned that I regurgitate the exact verbiage deposited into my (rather limited) mind, as opposed to understanding what I’d just heard and making my own conclusions, however mistaken.

          If the Praetorians survived their Emperors, then they were truly independent, and I recall “Et tu, Brute?” as one of the known mutterings supposedly attributed to such betrayals. Personal guards or not, secret services/forces/units have existed in pretty much every slice of history – Genghis Khan, etc.

          This doesn’t really change the dictatorial dynamics I’ve just outlined, though, adds better details ignored by me, to which (ignorant omission) I admit.
