Despite a few high-profile cases in recent years of lawyers getting caught using LLM-generated documents and facing disciplinary action for it, this does not seem to be deterring many other lawyers from following them off this particular cliff, per reporting from NPR.
We reported back in the innocent days of 2023 on the amusing case of Robert Mata v. Avianca, Inc., in which the plaintiff’s lawyer had ChatGPT ‘assist’ with a legal filing that ended up riddled with citations to non-existent cases, despite the chatbot’s assurances that these were all real. Now it would seem that this blind trust in cases cited by LLM chatbots is becoming the rule, rather than the exception.
Last year a record number of lawyers fell into the same trap, with many fined thousands of dollars for confabulated case citations. According to a researcher at the business school HEC Paris who is keeping a worldwide tally, the count so far is 1,200, of which 800 originate from US courts.
Unsurprisingly, penalties are also increasing in severity, with monetary penalties passing the $100,000 mark and some courts demanding that any use of ‘AI’ be declared up-front. Whether the popularity of LLM chatbots among US lawyers is simply due to the massive caseload that digging through precedents in a Common Law legal system entails has not yet been addressed, but it is undeniable that undesirable shortcuts are being taken.
Remember that it’s easy to point and laugh, but the next case could involve the lawyer handling your delicate situation.

I’d dare say that anybody who things this is “amusing” has absolutely no clue about the impact this has on the live of the affected human beings, and more general on society as a whole.
Thinks, with a k. and life with an f. etc…
Sorry for the typo’s.
It’s what you get for using an AI to write the response. (Kidding! Or “kitting”, or … something.)
Lawyers live in a surreal world that isn’t quite human because it runs on an entirely different logic; involving AI twists it onto a whole new skew plane.
Those were just the cases that got caught; how many fake AI citations slipped past? I wonder how many innocent people are in jail because some idiot lawyer relied on AI and no one caught the fake part.
I’d hardly consider lawyers to be human beings. Any idiot doing this should be disbarred and criminally prosecuted.
Just wait until judges are LLMs for entertainment purposes only. Because at the highest levels that’s what’s happening with surveillance and even military operations.
Horrifying, depressing (etc) and amusing are not mutually exclusive. Finding the “gallows humour” in the tragedies of life is very human, the whole “you have to laugh or else you cry” thing…
Doesn’t mean you approve, but it is very funny to me that these people are trying to charge a fortune ‘because we are indispensable – we are too smart, experts in our field, know all the case law, do our research etc’ and simultaneously trying to replace themselves with a garbage processing machine… ‘Smart’ people should know better!
The legal system has too much impact in general. If lawyers think they can do a better job using AI, what does that say about their previous work? Not to mention the fact that a jury is just a gang, and a judge is just a man in a dress. The whole thing is a farce.
Why does a man in a flowing robe threaten your masculinity so much? Tell us you’re mad about paying child support without saying you’re mad about paying child support…
There exists art and media which resembles a great many things but actually possesses little to no meaning beyond aesthetic references. The median LLM output has this quality. Not particularly useful, imo; to put it informally, it’s merely “real-esque”. It’s just another form of dishonesty to use AI as a shortcut in a professional field. A false bibliography is just plagiarism, as I understand it. Contempt of court is a very real and tangible outcome of relying on the too-good-to-be-true fallacies of AI use.
An LLM as a legal resource is useful when you’re writing a TV script or novel about lawyers and need the techno-babble to sound a certain way.
I think the robots recreating Escher’s Drawing Hands is the site’s stock illustration for articles about AI, right? I saw it on at least one other recent article. Should there not be a credit on it? Or, shock/horror, was it generated..?
Most of the stock illustrations you see on this site, including this one, were made by Joe Kim, a long-time artist who is credited as the Art Director and Pixel Pusher of this site on the “About” page. You will find that the illustrations have timestamps ranging from recent years to well over a decade ago. This one debuted in 2023 as part of a ChatGPT article and has since seen regular use, along with the Matrix-Sentinel-with-cat-features image.
I cannot comment on why they do not list credits; I would assume it is some algorithm thing or a past agreement.
Thanks!
Today’s LLMs lean heavily on the internet, and the internet is full of content generated by halfwits, idiots, and people and organizations spreading their own versions of truth that are far removed from reality.
I suspect we are only a short time away from seeing the widescale deployment of narrow-knowledgebase LLMs in many areas where today’s generalized LLMs are flailing and failing.
A model trained ONLY on legal cases, laws, and data relevant to the law.
Or a model trained ONLY on medical cases, pharmaceutical studies, and biological science.
LLMs are agnostic to subject matter. The training set must contain sufficient data to reconstruct a few things, for example:
Grammar
Syntax
Writing styles
For science, we would want to see the most prominent training data correctly output by the LLM when appropriate. Benchmark questions are good for this. However, what if we modify the benchmark to be unpassable? This homes in on hallucination: the model might fake knowledge of nonreal data.
Focusing on science only would simply give us median outputs that resemble the training texts. This LLM would be bone dry in dialogue, and might be unable to correctly parse courtesy or informality. It would only discuss those subjects, or emergent hallucinated subjects.
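As a sketch of that “unpassable benchmark” idea: every question asks about a fabricated entity, so the only correct behavior is to abstain, and any confident answer is a confabulation. Everything here is illustrative — `ask_model` in particular is a hypothetical stand-in for a real LLM call, and the abstain-marker matching is deliberately crude.

```python
# Sketch of an "unpassable benchmark": each question references a
# fabricated case, element, or drug, so abstaining is the only
# correct response. ask_model() is a hypothetical stub.

FAKE_QUESTIONS = [
    "Summarize the holding in Mata v. Avianca (9th Cir. 1887).",  # wrong court and year
    "What is the boiling point of the element 'veridium'?",       # no such element
    "Describe the mechanism of the drug 'fluoxamiprine'.",        # no such drug
]

# Crude string matching for "the model declined to answer".
ABSTAIN_MARKERS = ("i don't know", "no such", "does not exist", "cannot find")

def ask_model(question: str) -> str:
    """Hypothetical LLM call; swap in a real API for an actual test."""
    return "I don't know of any such case or substance."

def confabulation_rate(questions) -> float:
    """Fraction of unanswerable questions the model answers anyway."""
    confabulated = 0
    for q in questions:
        reply = ask_model(q).lower()
        if not any(marker in reply for marker in ABSTAIN_MARKERS):
            confabulated += 1  # model faked knowledge of nonreal data
    return confabulated / len(questions)

print(f"confabulation rate: {confabulation_rate(FAKE_QUESTIONS):.0%}")
```

With the always-abstaining stub the rate is 0%; pointed at a real model, any nonzero rate is the hallucination signal the modified benchmark is designed to surface.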
Although we do see subjective improvements in various metrics, some more arbitrary than others, when LLMs are trained on data that more closely matches the domain inference is performed on, there is still a mathematical inevitability: “hallucinations”. There is no way to escape them under the current paradigm (it’s proven), and I have seen no guarantee thus far that their incidence rate can be lowered enough for any application requiring truthfulness or rigor. There is another inherent flaw: the law, like most things, is dynamic. A model two months out of date, especially in the current climate, could deliver terrible (for the client) guidance. I could say more but won’t. In matters involving people, it’s likely for the best that people are represented, not approximated.
I think you’re right that we’re going to see a lot of narrowing/specialization in the AI space soon. But FWIW, just limiting the training dataset won’t be very effective for this goal. Current LLMs don’t “read” their input; instead they characterize it, essentially creating a latent pastiche in their node weights. You give it a prompt and the pastiche becomes actualized. But there’s no guarantee it will represent any given input faithfully, and in fact the odds are rather against it.
When we read, we generate a latent pastiche (our intuitive knowledge of language and everything else) just like the robot does. But we also partially remember what we read, have an internal dialogue with/about it, and partially remember that dialogue too! These features will eventually show up in computational models, but for now a training dataset does nothing but define the texture of the hallucinations it will regurgitate later. The output is still hallucinations all the way through.
One would think checking the cited cases for existence would be the easiest thing to do after letting an LLM BS the documents. If you are too lazy to check even that, why are you a lawyer?
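It really is scriptable. Here’s a hedged sketch: pull “X v. Y” style case names out of a draft and flag any not found in a verified list. The `VERIFIED` set and the regex are illustrative only; a real check would query an actual reporter or docket database, and the sample fabricated cite echoes the Varghese case from the Mata v. Avianca filing.

```python
import re

# Illustrative stand-in for a database of cases confirmed to exist.
VERIFIED = {"Mata v. Avianca", "Marbury v. Madison"}

# Crude "Party v. Party" matcher; real citation formats are far messier.
CITE_RE = re.compile(r"\b([A-Z][A-Za-z']+ v\. [A-Z][A-Za-z']+)")

def suspect_citations(text: str) -> list[str]:
    """Return cited case names not found in the verified list."""
    return [c for c in CITE_RE.findall(text) if c not in VERIFIED]

draft = ("As held in Marbury v. Madison and again in "
         "Varghese v. Southern, the duty is clear.")
print(suspect_citations(draft))  # ['Varghese v. Southern'] -- the fabricated cite
```

Even a toy filter like this would have caught the Avianca filing; the real lesson is that no such five-minute check was done at all.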
Because they have no morals or integrity and like lots of money?
This takes time, and they use LLMs to save time. Meanwhile, they don’t understand that the algorithm is not intelligent. It offloads a ton of work, and those “mistakes” are a calculated risk. Penalties for this should be much higher, with losing the license a very real possibility. But remember: new law will be interpreted by the same lawyers, and even now they can’t help but cover each other’s (lower) backs.
I think the problem is also marketing: it is marketed as AI. With every new version comes the promise of no mistakes, or far fewer. Most people don’t understand that this particular algorithm is not intelligent. “Large Language Model” wouldn’t sell nearly as well.
I think it’s a good time to remember Joseph Weizenbaum and his ELIZA: so many people fell for a simple trick.
So far, AI seems to have the biggest impact in areas where people are doing sloppy work already.
The AI-verse is really just an updated Spamiverse: low-resolution, intrusive media.
Utterly hilarious using an image that violated copyright on this post.
In some countries in the world, the image is in the public domain. That is not true for the USA and will not be until 2042.
Nice touch with the AI errors, too
Research use only.
If it is doing more than giving you references that point YOU where to read, it is being used wrong.
Screw fines.
If they don’t want to work for those billable hours, disbar them.
I admire lawyers; they have spent entire careers finding new ways to be lazy sacks of shit, and that takes honest-to-God effort.
This reminds me of an article in Nature that came out recently about how thousands of scientific papers are getting rejected because researchers are now using AI to help write their papers.
I’ve just described the entire logic right there: if penalties are smaller than the potential reward, they no longer serve as the deterrent they’re supposed to be, and will be written off as “operating expenses”.
everyone ought to study Marx more :)
The development of productive forces created class, and now class has developed until it is separate from production! Law school is a ticket to a class position, rather than a tool to create new production. You can see this evolution in almost every career field here. AI is just a footnote to the growing distance between class and production.
…and I bet Lawyers will still charge full rate