Has anyone noticed that news stories have gotten shorter and pithier over the past few decades, sometimes seeming like summaries of what you used to peruse? In spite of that, huge numbers of people are relying on large language model (LLM) “AI” tools to get their news in the form of summaries. According to a study by the BBC and the European Broadcasting Union, 47% of people find news summaries helpful. Over a third of Britons say they trust LLM summaries, and according to the beeb and co., they probably ought not to.
It’s a problem we’ve discussed before: as OpenAI researchers themselves admit, hallucinations are unavoidable. This more recent BBC-led study took a microscope to LLM summaries in particular, to find out how often and how badly they were tainted by hallucination.
Not all of those errors were considered a big deal, but in 20% of cases (on average) there were “major issues”, and that figure was more or less independent of which model was being used. If there’s good news here, it’s that those numbers are better than they were when the beeb last performed this exercise earlier in the year. The whole report is worth reading if you’re a toaster-lover interested in the state of the art. (Especially if you want to see if this human-produced summary works better than an LLM-derived one.) If you’re a luddite, by contrast, you can rest easy that your instinct not to trust clanks remains reasonable… for now.
Either way, for the moment, it might be best to restrict the LLM to game dialog, and leave the news to totally-trustworthy humans who never err.

I never trust clanks
That’s basically the plot for Deus Ex: Human Revolution :’)
I thought the plot was planned obsolescence
Archie Bunker thought Cronkite to be a “pinko”.
Well he was. But of a much higher caliber than the ones we get today. We’ve forgotten how to create a Cronkite. Now you get a chatbot or a Vaush; both are pinkos, and the chatbot is vastly preferable.
A chatbot is vastly preferable to most breadtubers. I don’t know what it is with opinionated YouTubers across the political spectrum being such complete trash. Asmongold and his dumpster-apartment, Destiny’s a spousal abuser, Vaush has the horse-stuff controversy, DemonMama houses a groomer, and now we find Hasan Piker electro-shocks his dog. I really hope it never turns out that West Side Tyler has closet skeletons, or I’m just going to give up on the Internet and move to New Zealand to farm sheep.
Given that the underlying reality is pretty discouraging, and the “news media” coverage of it is of such a low quality, who wouldn’t rather have 50% of their news media consumption based on fantasies instead? It’s an escape.
Would you trust a German newsman named Walter Disease?
OMG, yes Walter Krankheit, lol. Ok ok, close, very close.
LLM summaries are terrible because they ignore nuance like it was going out of style. As if the state of journalism wasn’t bad enough for them to include LLMs…
I don’t watch the news so this doesn’t affect me personally. Not on TV, don’t read the newspaper, not on the internet. Life still goes on, just without an artificially induced impending sense of doom…
News has unfortunately been absolute trash for more than a generation, correct. Journalists kind of ride on the prestige of people from half a century ago… As if the trade still had any honor or value to it.
I hate all the AI slop search results. I might search for “how to change a light bulb”, and the results always start with things like “why should you change a light bulb” and lists many things related to light bulbs, but doesn’t actually say how to change one. It’s so many words yet says so little.
Not to mention “Find light bulbs in your area”, and “Cheap light bulbs shipped free”
Or the amazon search result for lite bulbs
That seems to be particular to Google. I’ve seen Copilot file the serial numbers off good tutorials with reasonable efficiency.
Now do the same analysis for news reporters.
Hard to tell how much is hallucination and how much deliberate BS.
For the BBC I’d say 40 to 60 percent deliberate and an additional 20ish percent ‘hallucination’.
That’s without them using AI, I’m sure they use it though, they all do.
This is great, we will have finally reached a point where nothing can be believed. Maybe only then would people be interested in the truth.
We’ve already run that strat a few times in history… Unfortunately it only leads people to be distrusting and dismissive of everything, and in general they stop believing that something like truth was ever possible.
As much as I am not a fan of the prevailing AI LLM equivalency cash grab, I have to concede that, today, Grok or ChatGPT could potentially, if all goes well, provide me with more useful news than Walter Cronkite can.
Though, I would likely consider both sources to be of roughly equivalent levels of reliability.
I got ‘news’ for you, he died in 2009.
I have to point out that these days deceased people release new books, and they are really ‘revealing’, the news (and youtubers) will tell you. But somehow there seems to be some doubt in me about their reveals.
Walter Cronkite has not made an error or reported a misleading story in well over a decade. Can you say the same about any LLM out there?
I mean being dead for the last 16 years helps him with that record, but point taken.
Warning – this article was written by an LLM!