New technologies bring with them the threat of change. AI tools are one of the latest such developments. But as is often the case, when technological threats show up, they end up looking awfully human.
Recently, [E. M. Wolkovich] submitted a scientific paper for review that, to her surprise, was declared “obviously” the work of ChatGPT. No part of that was true. Like most people, she finds writing a somewhat difficult process, and her paper represents a lot of time and effort. But despite zero evidence, this casual accusation of fraud in a scientific context was just sort of… accepted.
There are several reasons this is concerning. One is that, in principle, the scientific community wouldn’t dream of leveling an accusation of fraud such as data manipulation without evidence. Yet a reviewer had no qualms about casually claiming [Wolkovich]’s writing wasn’t hers, effectively calling her a liar. Worse, at the editorial level this baseless accusation was met not with pushback but with vague agreement, and simply passed along.
Showing Your Work Isn’t Enough
Interestingly, [Wolkovich] writes everything in plain text using the LaTeX typesetting system, hosted on GitHub, complete with change commits. That means she could easily show her entire change history, from outline to finished manuscript, which should be enough to convince just about anyone that she isn’t a chatbot.
But pondering this raises a very good question: is [Wolkovich] having to prove she isn’t a chatbot a desirable outcome here? We don’t think it is, nor is this an idle question. We’ve seen how, even when an artist can present their full workflow to prove an AI didn’t make their art, the accusation sows enough doubt to poison the proceedings (not to mention greatly demoralizing the creator in the process).
Better Standards Would Help
[Wolkovich] uses the experience as an opportunity to reflect on and share what useful change would look like. Now that AI tools exist, guidelines that acknowledge them should be created. Explicit standards about when and how AI tools may be used in the writing process, how that use should be acknowledged, and a process for handling accusations of misuse would all be positive changes.
Because as it stands, it’s hard to see [Wolkovich]’s experience as anything other than an illustration of how a scientific community’s submission and review process was corrupted not by undeclared or thoughtless use of AI but by the simple fact that such tools exist. This seems like both a problem that will only get worse with time (right now, it is fairly easy to detect chatbots) and one that will not solve itself.