There’s been a lot of virtual ink spilled about LLMs and their coding ability. Some people swear by the vibes, while others, like the FreeBSD devs, have sworn them off completely. What we don’t often think about is the bigger picture: What does AI do to our civilization? That’s the thrust of a recent paper from the Boston University School of Law, “How AI Destroys Institutions”. Yes, Betteridge strikes again.
We’ve talked before about LLMs and coding productivity, but [Harzog] and [Sibly] from the school of law take a different approach. They don’t care how well Claude or Gemini can code; they care what having them around is doing to the sinews of civilization. As you can guess from the title, it’s nothing good.

The paper is a bit of a slog, but worth reading in full, even if the language is slightly lawyer-y. To summarize briefly: the authors try to identify the key things that make our institutions work, and then show, one by one, how each of these pillars is subtly corroded by the use of LLMs. The argument isn’t that your local government clerk using ChatGPT will immediately result in anarchy; rather, it will facilitate a slow transformation of the democratic structures we in the West take for granted. There’s also a jeremiad about LLMs ruining higher education buried in there, a problem we’ve talked about before.
If you agree with the paper, you may find yourself wishing we could launch the clankers into orbit… and turn off the downlink. If not, you’ll probably let us know in the comments. Please keep the flaming limited to below gas mark 2.

Not a hack…
Leaving this comment on every post you don’t deem worthy of being a hack becomes a bit boring and monotonous…
It is very interesting to most hacker-minded people in general.
Perhaps not, but it’s a damn good warning.
Wordcel lawyer types are paranoid about AI replacing them – I hope it does – but the rest of us shape rotators are safe for now.
Based. Automate all lawyers, politicians, and CEOs. We don’t need ’em.
And similarly a politician can never be held accountable…yet he makes management decisions.
That is not entirely true, given there are numerous examples of politicians being held accountable.
One might instead wonder why so few seem to be held accountable for their actions today.
They are held accountable at the next election. The problem is that malfeasance is forgiven by the people who regard them as on their ‘side’.
Yes, at the next election, where their close friend gets elected instead.
Congrats, you fell for it again!
To be fair, a lot of people even in lower management positions aren’t ever held accountable. That said, this is also a large part of why a lot of companies crumble and produce less and less actual value for their customers over time.
The tldr makes a lot of sense to me. Maybe I’ll read the paper sometime.
If an entity consistently makes better decisions than, for instance, the leader of a very large country, I know which one I would prefer. There are a lot of advantages when choices are made without selfish human desires; power corrupts, and accountability doesn’t reverse the mishaps that have already been done.
Yet we can just replace the current deep state with AI. What can go wrong there?
An AI has no desires or emotion. The collected works of Isaac Asimov explore this. Start with “Franchise” and The Complete Robot. Maybe try the Foundation series.
AI by definition will create anarchy, which is not to be preferred.
Note I am not by any means supporting politics, although I remain a law abiding citizen in line with the principle in Mark 12:17.
That’s an assumption that is yet to be proven. The AI can technically have no emotions, as it’s not actually thinking or feeling anything, but whatever programming it has must by necessity involve some preferences; otherwise it would simply do nothing.
Unfortunately, no choices can be made without preferences and values. Either you have to program them in, or the LLM picks them up by example.
Going to have to read this paper more fully, but it certainly seems to start strong, making some decent points in the first few pages. However, I really can’t imagine a world in which the LLM is worse than some of the current ‘human statesmen’ and their equally ‘human’ enablers…
But I also can’t imagine they will actually be remotely useful in that sort of role for a long, long time (if ever). I can’t say I think much of vibe coding, but at least that produces something that either works or it doesn’t, and it turns into a game of debugging for the person trying to get the result they want – so hopefully they learn something about real coding in that language in the process. With a definitive and limited goal in the user’s mind, you could argue vibe coding is more like training wheels on a bicycle: at some point they will have learned enough that they don’t actually need it. But statecraft is all those more nuanced and complex webs of interaction that need some real consideration so you don’t make things worse, especially the slow building up of a really devastating problem that will be much harder to fix later. About the only way an LLM might be useful there is in allowing a better ‘vibe check’ of the users/voters who have ‘written to their statesman’ etc., allowing the actually rational minds to find patterns in the reports.
The LLM has already, in many ways, ruined the internet at large to a much greater extent than I’d realised until very recently – even dry, rather niche academic web searches, when you don’t have a trusted repository of knowledge on the topic, now seem rather likely to be poisoned, and in ways that are hard to detect immediately. I’ve actually come to the conclusion it’s time to get a new university ID and master the art of searching for and taking notes from books.
For instance, for a reason I can’t remember, I was trying to look up medieval shoe construction (I think it was something about a particular style that came up as a side curiosity), and other than 1 or 2 companies that sell custom/cosplay shoes, everything on the first 3 pages of that websearch proved, as you read into it, to be AI slop – almost all of it eventually making the same obvious mistake of claiming that these shoes, from a few hundred years before faux leathers existed, were made of some variety of fake leather/plastic! Along with other tells obvious enough once you actually read the article knowing anything at all, making the whole darn thing suspect.
I’m sure if that question had been important enough, I’d eventually have been able to find the right cluster of serious history students or cobblers and their forums, and add them to my growing list of quality resources on various topics. But this is the first time I’d encountered no genuine correct answers at all from a well-constructed general websearch – the search worked perfectly, turning up articles that, by their wording, should be exactly what I wanted, or at least a generic overview and closely related content. It turns out all the pages found were just good-enough-looking junk, and I really don’t know how you could structure a websearch to exclude them, other than only searching for pages old enough that an LLM couldn’t have generated them!
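For what it’s worth, that last idea is at least mechanically possible on most major search engines via date operators. As a rough sketch (assuming Google’s before: operator, with an arbitrary pre-LLM-boom cutoff date; note it filters on the engine’s estimated publication date, so pages edited after the fact can still slip through), a query like “medieval shoe construction before:2021-01-01” at least biases results toward pages that predate the slop flood.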
Oh waiting for the bubble to pop!
“Oh waiting for the bubble to pop!”
I agree, but mostly because I want the current price of DDR5 to return to normal. Regarding AI itself, the cat is out of the bag…
The search engine war was lost almost 20 years ago when the advertisers targeted the search algorithm to feed us ads. Many sites I used to enjoy are lost to history, living only in my memory.
The difference AI brings is to make the job of those stealing the search results easier. All they need to do is get you to click a crap result and serve ads on the page. They get paid.
I never understood the point of ads anyway. I personally do want to purchase products – not many that I see advertised, but that’s a different story – however, ads have gotten so inaccurate and dumb that I feel stupider for having seen them.
Nissan car ads are some of the dumbest, near as I can tell all they convey is “car; has wheels, vroom” with suitable shots of a city runabout doing u-turns on a dirt road.
Microsoft’s AI ads I can’t begin to understand. They have one where AI tells a person they need an e-bike. Why he needed AI to tell him that is beyond me. Presumably AI will also tell him where to get a usurious loan or how to commit petty larceny to pay for it?
Not really – the advert and sponsored-links stuff was a mild annoyance decades ago, and advertisers generally would pay to put their adverts on relevant quality content, or put titties in adverts everywhere… Not ideal for making the web sane enough to allow your children on, but the content was decent stuff and porn (whatever your opinion on that)… The sponsored links and shopping links straight from the search engine, and Google dropping the ‘don’t be evil’ pretence, haven’t been good, but they weren’t making it impossible to find those folks really dedicated to their craft…
a politician thinks about the consequences for the next election, a statesman thinks about the consequences for the next generation
“Authoritarian leaders and technology oligarchs are deploing AI systems to hollow out public institutions with an astonishing alacrity.”
If only there could have been some system which would have prevented the paper’s authors from making such a glaring typographical error as writing “deploing” instead of “deploying” within the first proper paragraph of their entire paper.
An AI hallucination can sometimes produce spelling mistakes. Just shows they used an LLM to make the paper sound more lawyerly. “Gemini… re-format the uploaded paper to make it understandable to the layman (again)”
Or they are dyslexic and/or not good at proofreading – some folks find it practically impossible to spot missing or reversed letters, punctuation, etc., especially if they wrote the text themselves and already know what it is meant to say. I tend to skate right by those errors without noticing even when I don’t know the text – the meaning is so clear that the missing letter just doesn’t register at all.
F7
Not really a solution either – no machine catches everything grimmer wise, nor contains every technical term to even have a chance to correct the spelling. Not to mention regional variations like Colour vs Color, Disc vs Disk. Then you also have so many worlds that are spelled nearly identically to works with entirely different meanings, the sentence may not word any more but that is far to nuanced a problem for the spell checkers to notice every time it happens.
(Obviously this is a stupid and very error filled example that I’d hope would jar enough to be noticed no matter who you are, and some spell checkers might pick up a few of the errors as the close spelling but entirely wrong words are more significantly wrong looking in word shape)
AI has two aspects which are superficially separate but deeply entwined in today’s reality. The first is the Kurzweil-esque singularity… technology is changing at an ever-faster pace, and how will we build a society around technology that changes faster than society does? The other is the financial aspect. For at least a couple of decades it has been obvious to me that there are financiers who are able to make wagers with sums of money much larger than physical capital or anticipated production. It’s a reaction to the tendency of the rate of profit to fall as commodity production matures. The financier demands an ever-increasing profit, but mundane reality has few options for them. So physically intangible things without an intrinsic limit on their profitability have become very popular – e.g., Bitcoin and ChatGPT. Obvious bubbles become the only success story of our economy.
That’s obviously a disaster because the bubble will pop. But it’s also a disaster because now anything real and useful that isn’t as profitable as the bubble is being abandoned or turned over to the whims of people who have unreal sums of money they got from riding the bubble. And it’s a disaster because much of our labor force is still selling productive labor but an ever-growing segment is instead focused on reaping the bubble. We are losing the productivity of the bubble-focused people at the same time as we are deepening the class divides between them and real workers.
The thing is, these two facets are actually the same thing. Classical liberalism, finance capital, uneven development / colonial exploitation, collar-identified labor – these are all social structures built around changing technology.
The ‘bubble will pop’ is not a boolean. Please revise your comment with a probability. I’m giving it 32% pop myself, + or – 10%.
If it does not pop, it isn’t a bubble now, is it, Drone Enthusiast.
No, it certainly is going to pop eventually – AI concepts themselves are not going anywhere, however little I currently think of them. But this Nvidia-lends-money-to-their-customers-to-buy-more-of-their-hardware cyclic money farming, making the numbers look good for investors and rapidly inflating the ‘value’ of all the companies involved, is 100% a bubble that 100% will pop at some point. The only question is how long it takes and how much work will be put into kicking that can down the road hoping for a miracle solution…
If no effort is made to find a softer landing and control the fallout, this could be 2008 all over again (but likely worse, as the product is ‘useful’ and getting everywhere, so when the providers start collapsing, so will the customers that have become reliant on them – alongside all the usual financial-market crap of folks holding shares finding them tanking in value, with the knock-on fiscal effects on pension pots etc.).
Not a bad comment – I feared the worst, but the point made seems fairly accurate.
And of course nobody is going to do anything about it; all we can do is hope it’s like an unchecked forest fire that eventually runs out of fuel and goes out on its own.
Doesn’t absolutely have to be that way – railway mania, for instance, was a bubble around something genuinely valuable with seemingly near-limitless demand. In today’s world the true solid-state and sodium (etc.) battery technologies might well do the same, and being energy-related they could thrive (and bubble) on their own because of the AI-derived demand etc.
Not that I really disagree very much, just trying to find a tiny glimmer of hope and optimism, as the world has become so very, very dark and looks like it might get stuck in the feedback loop you described…
These concepts are not foreign to anyone who has read ‘Franchise’ by Isaac Asimov. Published in 1955, it envisions a 2008 ‘election’ decided by a ‘computer’. The twist… spoilers below.
Asimov wrote extensively about a “positronic brain”, even concluding that eventually humans would no longer construct them due to the complexity, merely allowing each successive generation to design the next. While it seems AMD is letting machine learning pack transistors to achieve higher density (for a speed trade-off; look up Phoenix 2 if interested), the same could certainly apply to programming and LLM coding.
(Spoilers) The twist is that the computer chooses a ‘voter’ to scapegoat. The computer interviews one human to verify that the data it collected is an accurate representation of the population. Whoever is chosen has to skulk home avoiding angry people.
The person chosen in “Franchise” is not a scapegoat.
The results of the election are extrapolated from that one person’s responses.
The computer is not using the person to take the blame. It is using the person as a data source representative of the entire population, from which it can calculate the results you would get if everyone voted.
That the rest of the population gripes and complains about the selected person is a human problem.
I have a friend who drives a Tesla with the latest version of “autopilot.” It works amazingly well 99.99 percent of the time, which is probably better than most human drivers most of the time. But you still have to be in the driver’s seat with your eyes pointed at the road (enforced by cameras looking at your pupils). This is because a human ultimately has to be accountable for the car; certainly Tesla doesn’t want the lawsuits. Human accountability is a huge part of why our society functions at all, and disembodied intelligences that can be spun up in an instant just cannot have the same incentive system.
So basically, if I understand it all, the Tesla “autopilot” takes all the fun out of driving yourself, with the added chore of babysitting the machine, so that if all hell breaks loose you have front-row seats to watch the drama unfold… and no matter how it goes, you are to blame. So in short, why would you want “autopilot” on your car?
PS: I had a Commodore 128, a model that claimed nearly 100% compatibility with the original C64. One of the first games I tried on it didn’t even get past the cracking intro, which instantly made me doubt the compatibility claim. Now how do you (or does Tesla) justify that claim of 99.99%? Does it drive down the same road 10,000 times, and when it crashed violently they stopped the test? Seriously, how meaningful are such claims, and under what conditions?
You don’t need to drive down the same road 10,000 times; you look at very broad statistics and come up with a number like average accidents per million miles driven, across all situations and conditions. (I haven’t looked up any studies myself, so I make no claims here as to the specific numbers, but I’m pretty sure the current accident stats strongly favor AI drivers.)
They don’t, because they don’t compare the same things.
It’s all road accidents by people vs. accidents when the autopilot is allowed to be on, when it hasn’t switched itself off prior to the accident, and when the lawyers haven’t successfully deflected the blame elsewhere.
The difference is that human pilots fail randomly, but most of the time the failure has no consequences because it did not occur at a critical moment. Such critical moments are rare, so the combined probability becomes very small indeed.
The autopilot does not fail randomly, it fails consistently in situations that are too complex or ambiguous for it to handle, or it was simply not trained for the case. These moments are also more likely to be critical moments, such as navigating an intersection, or recognizing a child running across a road, so the combined probability is not trivially small.
So comparing technical failure rates between the two is meaningless even if the numbers were accurate, because the character of the failure is different. The problem is that accidents are so rare overall that the statistics won’t provide clear, indisputable evidence until hundreds or thousands of people have died because of autopilot.
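To make that point concrete, here’s a toy simulation of the argument above. Every number in it is invented for illustration – a sketch of the reasoning, not real accident data: both drivers fail at the same average rate, but the human’s failures land at random moments while the autopilot’s are concentrated exactly on the critical ones.

```python
# Toy model: same average failure rate, very different accident counts,
# because one failure mode is random and the other is correlated with
# the critical moments. All numbers are made up for illustration.
import random

random.seed(1)
N = 1_000_000       # driving "moments"
CRITICAL = 0.001    # fraction of moments that are safety-critical
P_HUMAN = 0.001     # human's chance of failing at any given moment

human_accidents = 0
autopilot_accidents = 0
for _ in range(N):
    critical = random.random() < CRITICAL
    # Human: fails uniformly at random, uncorrelated with the situation.
    human_fails = random.random() < P_HUMAN
    # Autopilot (assumption): fails exactly on the hard cases it wasn't
    # trained for -- same *average* rate as the human (CRITICAL == P_HUMAN),
    # but fully correlated with the critical moments.
    autopilot_fails = critical
    # A failure only becomes an accident at a critical moment.
    human_accidents += critical and human_fails
    autopilot_accidents += critical and autopilot_fails

print("human accidents:", human_accidents)          # expect ~1 (N * 0.001 * 0.001)
print("autopilot accidents:", autopilot_accidents)  # expect ~1000 (every hard case)
```

Same headline failure rate, roughly three orders of magnitude apart in outcomes – which is exactly why a raw per-mile comparison can mislead.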
Hopefully they will keep digging. The key insight is not LLMs vs. no LLMs – it’s democracy itself, and the incentives around it. Read Edmund Burke, de Tocqueville, John Adams – pure democracy is a menace, but some democracy is required. LLMs are just another tool.
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
BUT
A PRINTER CAN!!!
https://youtu.be/N9wsjroVlu8?si=2oRrh7l2wnntm5D6
For the umpteenth time: US lawyers HAVE BEEN using AI (and proprietary trained LLMs) ALREADY.
I ran across one tiny company of programming geeks stationed in the middle of Washington, DC, around the year 2007. Literally. I called them up and asked what they were up to, since my workplace was looking into using some AI for (mostly technical) things. Their response was “we’ve been in this biz for a while now, a couple of years”. Washington, DC, that is. You can guess their customers.
Meaning, all the loopholes in our legal system have already been found and are being proactively exploited to the fullest extent possible, and I do not see the Superior Court being in any visible hurry to plug the gaps.
Because the Founding Fathers never envisioned that non-human entities could run rings around human entities unabated. We have vast legal chasms through which all kinds of deeds keep sneaking undetected, in full view of those supposedly keeping them shut.
Having said that, lawyering is the grease of the economy and, however important, is not THE economy. If it decides to evaporate, the economy gets stunned for a while, but it would re-activate around the immovable parts on its own. Witness the so-called “waaar on draaags” and how the grey economy regularly re-routes around all those “waaars” shortly after, attracted by demand that never seems to go away. Same dynamics: if things are regulated, there is a funnel of least resistance; if not, additional funnels are eagerly explored. Lawyers think they command the economy – aha, yeah, sure, in about the same way every river-crossing ferry’s captain commands the river: he commands the boat crossing the river, not the river.
How would one bribe a(n) LLM?
By giving it something it wants, and making it expect future benefits if it keeps accepting.
And if it is taught to be a statesman it will have things it wants.
A better question is if it can be comprehensively trained somehow to avoid accepting bribes.
I mean, you can train it to avoid some bribes, but lobbyists will think up ways around it, so you need to think ahead and train it to avoid possible tricks – and at some point it gets very complex and convoluted and starts to interfere with itself, which creates all-new vulnerabilities.
Not to be overly pedantic, but I thought Betteridge’s law required a question mark at the end of the headline?
It requires the headline in the form of a question; I suppose the presence of a question mark is thereby implied. The headline here “Can Skynet Be a Statesman?” qualifies either way.
Better yet: you ask it how.
If it’s programmed to be honest, it would say what it wants and what it can afford to lose. If it’s not programmed to be honest, well… then you have bigger problems.