Over the past few years, the author of the cURL project, [Daniel Stenberg], has repeatedly complained about the increasingly poor quality of bug reports filed due to LLM chatbot-induced confabulations, also known as ‘AI slop’. This has now led the project to suspend its bug bounty program starting February 1, 2026.
Examples of such slop are provided by [Daniel] in a GitHub gist, which covers a wide range of very intimidating-looking vulnerabilities and seemingly clear exploits. Except that none of them are vulnerabilities when actually examined by a knowledgeable developer. Each is a lengthy word salad that an LLM churned out in seconds, yet which takes a human significantly longer to parse before dealing with the typical diatribe from the submitter.
Although there are undoubtedly still valid reports coming in, the truth of the matter is that the ease with which anyone with access to an LLM chatbot and some spare time can generate bogus reports has flooded the bug bounty system, overwhelming the very human developers who have to dig through the proverbial midden to find that one diamond ring.
We have mentioned before how troubled bounty programs are for open source, and how projects like Mesa have already had to fight off AI slop incidents from people with zero understanding of software development.

Just add a nominal fee to report a bug, with an option for staff to refund the fee.
Collected fees go to a charity to do good things.
An optionally refundable fee only for the first submission (or first few submissions) by that account; after that, all further submissions should be free. You want to penalise those who make new accounts to evade bans, not those who provide valuable input. You only need to ban an account once.
So the ability of partisan entities to leverage the language abilities of AI to create elaborate content with little to no practical real-world use is overwhelming the limited resources of the actual humans who have to decipher and prioritise it? I would suggest we use an AI to prioritise and manage the submissions. (OK, that last sentence is just humor.) These are all CLEAR red flags that apply across the board to AI and its use in almost any field. I’d hope people will wake up to the real potential of AI and balance it with the very real risks.
It’s easy to write code that has no obvious errors.
It’s hard to write code that obviously has no errors.
I got as far as reading the first two gist entries, and am in awe of the tolerance of the curl developers.
I’ve followed along as Daniel has posted them on Mastodon. Some of them are worse than you’d ever expect even knowing they’re from AI.
Though it seems to wane; see e.g. entry 29.
The Lua mailing list, which I’m on, recently got inundated with these — one of them was complaining that a string being passed to strlen() wasn’t being verified as being nul-terminated. Said string came from argv[].
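For anyone wondering why that complaint is nonsense: the C standard guarantees that argv[0] through argv[argc-1] point to nul-terminated strings, so there is nothing to "verify" before handing one to strlen(). A minimal sketch (my own illustration, not code from the report):

```c
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    /* Per the C standard, each argv[i] for i in [0, argc) is a proper
     * nul-terminated string, so strlen() on it is perfectly safe. */
    for (int i = 0; i < argc; i++) {
        printf("argv[%d] has length %zu\n", i, strlen(argv[i]));
    }
    return 0;
}
```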
Daniel has said on his blog that, starting from 2025, 95% of the HackerOne reports to the curl project were not valid. So yeah, sure, there were some valid reports coming in, but the majority were not.
Yay! Another of AI’s remarkable achievements.
The cure for cancer can’t be far behind.