Well, it’s official — AI is ruining everything. That’s not exactly news, but learning that LLMs are apparently being used to write scientific papers is a bit alarming, and Andrew Gray, a librarian at University College London, has the receipts. He looked at a cross-section of scholarly papers from 2023 in search of certain words known to show up more often in LLM-generated text, like “commendable”, “intricate”, or “meticulous”. Most of the words have a generally positive tone and feel a little fancier than everyday speech; one rarely uses “lucidly” or “noteworthy” unless one is trying to sound smart, after all. He found that these and other keywords appeared more frequently in 2023 than in 2022, when ChatGPT wasn’t yet widely available.
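Gray’s approach boils down to counting how often the suspect words occur per unit of text in each year’s corpus. Here’s a minimal sketch of that kind of comparison; the directory layout, file format, and keyword list below are illustrative assumptions, not his actual dataset or methodology:

```python
# Back-of-the-envelope keyword-frequency comparison between two years of
# paper abstracts. Directory layout and keyword list are illustrative
# assumptions, not Gray's actual dataset or methodology.
import re
from collections import Counter
from pathlib import Path

# Words reported to show up unusually often in LLM-generated text
SUSPECT_WORDS = {"commendable", "intricate", "meticulous", "lucidly", "noteworthy"}

def keyword_rate(corpus_dir: str) -> dict:
    """Return occurrences of each suspect word per million words of corpus."""
    hits = Counter()
    total_words = 0
    for path in Path(corpus_dir).glob("*.txt"):
        words = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
        total_words += len(words)
        hits.update(w for w in words if w in SUSPECT_WORDS)
    return {w: 1e6 * hits[w] / max(total_words, 1) for w in SUSPECT_WORDS}

# Hypothetical corpora: one directory of plain-text abstracts per year
before = keyword_rate("abstracts/2022")
after = keyword_rate("abstracts/2023")
for word in sorted(SUSPECT_WORDS):
    print(f"{word:12s} 2022: {before[word]:8.2f}/M  2023: {after[word]:8.2f}/M")
```

Normalizing to occurrences per million words is what makes corpora of different sizes comparable year over year.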
It doesn’t always take a statistical analysis of word distributions to detect the footprints of an LLM, though. The article includes examples of text copied and pasted directly from the chatbot, without any attempt at editing or even basic proofreading. It’s hard to imagine how not only the authors of the papers but also the journal editors and reviewers failed to catch an obvious chatbot error message that had been copy-pasted into the text. And let’s not even get started on the Midjourney-generated diagram of a monstrously well-endowed rat that was used to illustrate an article (since retracted) on spermatogenesis, complete with nonsensical captions and callouts to non-existent body parts. This is why we can’t have nice things.
Speaking of nice things, did you know that the largest manufacturer of vintage lamps in history is a little company called “Underwriters Laboratories”? At least it seems that way looking at eBay, where sellers listing old lamps often claim the manufacturer is the storied safety standards organization. We suppose it makes sense if the only label on an old lamp is the UL listing label and you have no idea what UL is. But really, that’s the least of the problems with some of these listings. “Vintage” is a stretch for a green banker’s lamp with a polarized plug that was clearly made sometime in the last 30 years.
Switching gears a bit, it’s one thing to know that everything you do online is tracked, but it’s quite another to find out exactly how much information is shooting back and forth between your computer and the Hive Mind. That’s what Bert Hubert built Tracker Beeper to do, and it’s a little terrifying. The tool emits a short beep every time your computer sends off a bit of data to a tracker. It started out monitoring only data going to Google, which was alarming enough; it was later expanded to cover most of the trackers we’re likely to come across in our daily travels, and wow! It sounds like a Geiger counter when the tube gets saturated by a highly active source. Probably just as dangerous, too.
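The underlying idea is simple enough to sketch. The snippet below is not Hubert’s implementation, just a rough Python approximation that shells out to tcpdump, watches outgoing DNS queries for a few well-known tracker domains (an illustrative list), and sounds the terminal bell on each hit:

```python
# Rough approximation of the Tracker Beeper idea: sound the terminal bell
# whenever an outgoing DNS query mentions a known tracker domain. Not Bert
# Hubert's implementation; the domain list is an illustrative assumption.
# Requires tcpdump and root privileges.
import subprocess
import sys

TRACKER_DOMAINS = ("google-analytics.com", "doubleclick.net", "facebook.net")

# -l: line-buffered output, -n: skip name resolution; watch outgoing DNS
proc = subprocess.Popen(
    ["tcpdump", "-l", "-n", "udp", "dst", "port", "53"],
    stdout=subprocess.PIPE,
    text=True,
)
for line in proc.stdout:
    if any(domain in line for domain in TRACKER_DOMAINS):
        sys.stdout.write("\a")  # ASCII BEL: the actual beep
        sys.stdout.flush()
        print(line.strip())
```

Run it as root, open a few ad-supported websites, and listen; watching DNS misses trackers reached over cached or hard-coded addresses, but it gets the Geiger-counter effect across just fine.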
Heads up — the HOPE conference is gearing up. Hackers on Planet Earth XV will be held July 12-14 on the campus of St. John’s University in Queens, New York. The “Call for Participation” is now open; it’s always nice to see a big Hackaday contingent at HOPE, so make sure you get your proposals for talks, workshops, or panels together soon.
And finally, what should you do if the FCC comes knocking at your door? It’s not just an academic question; the US Federal Communications Commission does a lot of field investigation, and if you do any kind of RF experimentation, there’s a non-zero chance that you’ll make some kind of spurious emission that gets their attention. Josh from Ham Radio Crash Course dropped a video that addresses the dreaded knock. TL;DW — come back with a warrant. But it’s more complicated than that, as illustrated by a hilarious IRL account of one such encounter. We won’t spoil the surprise, but suffice it to say that if your house is under the approach to a major international airport, you probably want to be extra careful with anything radio-related.
“learning that LLMs are apparently being used to write scientific papers is a bit alarming”
I am alarmed that you find it alarming! :)
I worry that if you’re not using an LLM to word your paper you will be at a disadvantage, since a significant number of people are going to be using it to improve their papers. Same, maybe even more so, with grant applications.
The reason it’s being used is because the vast majority of these papers are filler and academic make-work with no real purpose other than grant grift and everyone knows it. It’s a perfect job for AI. If we insist on higher education being pumped full of this volume of mediocrity and becoming a quantity-driven industry, then this is the result. Putting more people in college doesn’t make people smarter, it makes college dumber.
A lot of institutions rank their staff by the number of papers they publish per year, which indirectly factors into their take-home pay, so there are incentives to bulk up the numbers.
This may be a good thing, because it exposes how crappy the crappy publishers like Frontiers actually are. Not catching those garbage images makes it pretty obvious that even the editor didn’t take a look at it, let alone the reviewers.
To be clear, I don’t think all use of AI in writing papers or grants is ‘crappy’ garbage. I think people use it to clean up their draft work for publication. The actual content may still be good and valid, even if an LLM was used to wrap it up. There may be more system-gaming uses as well, such as “Rewrite this paper in a style that could get published in Nature…”, to increase publication chances. Also, given the competition to get published, if your competitors are using AI, then you are potentially losing out by not using it.
This may have long lasting and unforeseen consequences on science and language.
Modern-day autocorrect. Now all our posts can be in top form.
This is based on the premise that using LLMs to write or illustrate your papers has a benefit (it’s easy and cheap) that does not contradict the aims of the paper’s author, the field of study, or science itself. But LLM output is often inaccurate, and it creates the illusion of meaning rather than meaning derived from logic or reasoning.
If you are “losing” against the competition by abstaining from a piece of software which almost always removes meaning and value (in a scientific and accurate-communication sense), something is wrong with the culture and environment, not an individual’s methods. It’s like saying an electrician is losing against the competition by not using cheaper aluminum wire instead of paying tons for copper.
LLMs are not sci-fi “AI” in a reasoning sense, and they’re not capable of preserving original meaning when they make or remake something, at least not to the modest degree demanded in fields of study where specificity and accuracy actually matter. That’s why people find the reckless use of the technology “alarming”.
” if you do any kind of RF experimentation, there’s a non-zero chance that you’ll make some kind of spurious emission that gets their attention”
I think it’s safe to say it’s zero in my case. I live in the UK.
You guys might not have the FCC, but you have Ofcom, where you need a TV licence to watch live (over-the-air) TV.
Here in the States you can listen to or watch almost any frequency you want (except cellphone frequencies) free of charge, with no registration.
I suppose the TV detector trucks are no longer driving around, eh?
Nice reference
(What’s the UK equivalent of the FCC called?)
UK has FCC. Most of Europe are under CE though.
Don’t be an idiot; the F stands for ‘federal’, and it’s a US agency.
In the UK they have Ofcom (the Office of Communications), and in the EU each country has its own agency (sometimes several), although of course across the EU, as well as across the world, there are general agencies needed to align things, since radio signals don’t stop at borders.
Got you a link: https://en.wikipedia.org/wiki/List_of_telecommunications_regulatory_bodies
>I think it’s safe to say it’s zero in my case. I live in the UK.
Have some ambition! Somebody needs to use all the excess wind power on stormy days – might as well be you.
What is with all the technophobic AI articles lately?
You can tell what a society currently fears by its growth in all forms of media.
What the elite’s fashions of fear are, perhaps. The amount of fear a person has of AI is proportional to how much their job resembles AI (vomiting out vast amounts of pretty-sounding words which add up to nothing). Yes, I realize exactly the joke I just walked into.
“A perverse incentive is an incentive that has an unintended and undesirable result that is contrary to the intentions of its designers.”
Cf. Promotion & Tenure Committees: “they can’t read but they can count”
Simple, really.
We live in a society where AI is and will increasingly be used not to improve things but to do EVERYTHING. People don’t want to work, think, or do anything else that might burn the slightest bit of energy or make them break a mental sweat. It’s already being used to generate bullshit spamvertisements (using celebrity voices impersonated by AI) offering everything from free government money to mobile solitaire games that promise you hundreds of dollars a day in winnings.
Unfortunately, thinking has become too hard for the masses and, well, the computers are smarter, so . . . why not?
We are doomed.
Witness the ads, running three times per commercial break, for a company touting an AI program that answers every public inquiry rather than an actual employee or service tech.
If you think phone trees are infuriating now, wait until you can no longer make any sort of human contact for your warranty or replacement-bits needs.
But they can cut payrolls, so all’s good, right? And I guess the made-redundant people can go be artisan TikTok influencers.
I think the good of AI might all get weeded out as part of enforced censorship.
Small anecdote: years ago I read about a US city police department (Chicago?) that used AI, and that AI identified a person as statistically being on a path that would lead to him ending up dead.
So the cops went to the guy and told him, even though his endeavours were of course of a criminal, gang-related nature; you want to give a person a chance.
Now, that kind of thing is of course profiling, and because AI can get it wrong and end up in a system that abuses the results, they have to disallow it. But if it were used merely as an aid to do good, you could theoretically use it. But humans, eh… hard to get them on board with any doing-good-while-not-bullying agenda in numbers. Flawed, you might say.
My Google Pixel 8 Pro uses AI to infuriate and annoy me. It works.
How many times do I have to tell my phone not to save my pictures in their cloud? Or how many times do I have to dismiss the Smart Intelligence Advertisement for a feature I’m not ever going to use, just to continue doing what I was doing? If AI were really intelligent, it would realize I don’t use it.
As for submitting a paper with AI content without proofreading it, good luck with that. Eventually the AI editor will figure that out and remove your paper from its publication. Or write it, submit it, and publish it without your knowledge. Then it could read it for us, or fill our inboxes with it. Tick tock.
AI is being used by people to do the things they don’t want to do. Science guy wants to do science; that’s why he has assistants to do the boring job of writing stuff down. Those assistants are being replaced by AI, and no surprise, science guy is a terrible proofreader. It’s not that he doesn’t want to work; he doesn’t want to do that work. Everyone wants technology to make their work easier.
Getting the love of the FCC for having fun with a $20 Chinese radio must be especially galling.