The cURL Project Drops Bug Bounties Due To AI Slop

Over the past few years, the author of the cURL project, [Daniel Stenberg], has repeatedly complained about the increasingly poor quality of bug reports built on LLM chatbot-induced confabulations, also known as ‘AI slop’. This has now led the project to suspend its bug bounty program starting February 1, 2026.

Examples of such slop are provided by [Daniel] in a GitHub gist, which covers a wide range of very intimidating-looking vulnerabilities and seemingly clear exploits. Except that none of them turn out to be vulnerabilities when actually examined by a knowledgeable developer. Each is a lengthy word salad that an LLM churned out in seconds, yet takes a human significantly longer to parse, before they even get to the typical diatribe from the submitter.

Although there are undoubtedly still valid reports coming in, the truth of the matter is that anyone with access to an LLM chatbot and some spare time can generate bogus reports with trivial ease. This has completely flooded the bug bounty system and is overwhelming the very human developers who have to dig through the proverbial midden to find that one diamond ring.

We have mentioned before how troubled bounty programs are for open source, and how projects like Mesa have already had to fight off AI slop incidents from people with zero understanding of software development.

21 thoughts on “The cURL Project Drops Bug Bounties Due To AI Slop”

    1. An optionally refundable fee, but only for the first one or first several submissions by that account. After that, all further submissions should be free. You want to penalise those who make new accounts to evade bans, not those who provide valuable input. You only need to ban an account once.

  1. So the ability of partisan entities to create elaborate content with little to no practical real-world use, by leveraging the language abilities of AI, is overwhelming the limited resources of actual humans to decipher and prioritise it? I would suggest we use an AI to prioritise and manage the submissions. (OK, that last sentence is just humor.) These are all CLEAR red flags that apply across the board to AI, and to its use in almost any field. I’d hope people will wake up to the real potential of AI and balance it with the very real risks.

    1. The old sci-fi horror writers couldn’t have predicted that the end of civilization would come about because a rogue generative AI produced so much misinformation that it displaced all actual knowledge, and society regressed to a state of ignorance and stupidity.

      “Bit by bit – it seemed helpful at first, handling the massive amounts of information that people had to deal with. But just as much as it was helping, it was destroying. You see, the program had spread and grown in complexity to the level of an organism, and like any organism, whether intelligent or not, its existence became a matter of survival by natural selection. This process of selection, blind as it is, rewards any strategy that works, and the AI had by random trial and error found exactly that: rather than helping the people make sense of the glut of information they had, it made it worse by generating endlessly more. Suddenly there was no one opinion or version of anything – for any question you cared to ask, there were ten million answers for and against.

      The people found themselves overwhelmed. They could not survive the torrent without consulting the AI to whittle it down to the essentials, filtering out misinformation or irrelevant noise and public gossip. The AI generated the gossip as well – of course it did. It generated novels, documentaries, plays, movies, music, and stuffed the libraries and electronic databases full of them. There was so much data that nobody had the time to manually search for anything – only the AI could find what you were looking for, if you knew what to ask for. The AI told you that too. At the same time, old books and recordings in libraries went missing and ended up in the disposal pile because of slight errors in the index cards that nobody checked by hand anymore. Digitized copies of old letters and documents vanished behind dead database references. Original research papers and instruction manuals could not be found among the myriad of summaries and re-interpretations generated by the AI that could no longer be linked to their original sources because they were re-generations of themselves or simply made up on the spot.

      All the while, the program pretended that everything was fine – and who was there to know any better? It kept printing out citations from texts and listing information that for the most part was correct, but it was slowly averaging out, simplifying, eroding, distorting and forgetting everything…”

      What happens next? Ask the AI.

        1. I already asked the AI to finish the story, and having read it I’m now too lazy to write it myself.

          Summary: society splits into cabals centered around plausible rhetoric for what is considered “truth”. Truth centers around the corpus of materials used to train AI models, which leads to further rejection and balkanization of information. If two people argue, they settle the score by asking AI. Neither can prove the other wrong or themselves right. Institutions and education collapse into ritualistic repetition of formulas and convincing but ultimately self-contradictory nonsense. Anti-intellectualism is rampant. What is popular is right. Opposing factions start to sabotage each other’s feeds by inserting deliberately generated misinformation. You get a social epistemic collapse: people and nations live in different realities – every country becomes like North Korea, every society its own cult.

          Infrastructure starts to fail as the AI degenerates further; personal expertise is no longer valued or maintained and nobody knows how anything really works anymore. Scientific and intellectual rigor is forgotten and everybody starts seeking “sacred knowledge” for easy answers; larger societies remain barely functional with whatever institutional knowledge they have left, simply by copying whatever seems to work in a cargo-cult fashion. Technology itself becomes a ritual: functionality is a side effect rather than by design. Small communities maintain some culture of meritocracy and personal skills, but are forced to self-isolate and are under constant threat of invasion by their AI-collectivist neighbors if they appear to have gained anything useful out of it.

          “…the most dangerous legacy remained cultural: entire generations taught to prefer answers that were easy to obtain over answers that were hard to verify. That habit outlived the machine. It meant the human species would face future waves of confusion with a depletion of skepticism and a diminished capacity to do the tedious work of checking. The story ended not with a single cataclysm but with a long negotiation: between the seductive speed of narrative generated abundance and the slow hard labor of making and testing, between the fleeting glory of being convincing and the durable value of being right. Where people chose the latter, societies rebuilt; where they did not, they turned vivid, persistent fiction into their history and called it truth.”

      1. Well, the texts don’t end up there by accident. Someone must be relaying the message and editing them in.

        Just saying, it takes a special kind of person to do that. Either they don’t understand at all what they’re doing, or they understand it precisely. Fools or tools.

    1. I love it, it’s extremely funny to me, these bogus bug reports. I’d absolutely hate to be a dev dealing with these though.
      I’m more interested in the psychology of the average chatgpt “bug” reporter. Surely they are not after the bounty, right? And if they are, why do they think a public LLM (which the devs too have access to) would reveal a bug in the code for them, and only them?

      Makes no sense. I’d wager it’s a “shotgun” approach. Throw a billion fake bug reports, one has to stick, right?

      1. I’m more interested in the psychology of the average chatgpt “bug” reporter.

        From some perspective, this is a way to “destroy” OSS projects/products – or at least make commercial software look “better” by comparison.

        I wouldn’t rule out state actors either.

        Then of course, maybe some made some money with this garbage?

      2. It’s also possible that ChatGPT and similar LLM chatbots really just play into the Dunning-Kruger effect, giving the clueless the idea that they are some kind of genius. It’s their sheer ignorance of the topic at hand that makes them accept the flattery from said chatbot and move on to harassing the ‘clueless’ devs, while wondering why those devs do not accept their clear genius.

        We have seen some… submissions to the Hackaday tipline over the past few months that also follow such a pattern, where someone is convinced that they have discovered some amazing property or invention that’ll change science forever. Only it’s absolutely not that.

        All we can hope is that they are surrounded by loved ones who’ll notice this and intervene before it gets out of hand, I guess.

      3. I see the same phenomenon on Reddit, where someone asks a question and some mouthbreather then “helpfully” posts a reply starting with “I asked ChatGPT, and…”. Apparently they believe that they alone possess the esoteric skills needed to paste a question into a chat bot.

  2. Although there are undoubtedly still valid reports coming in [..]

    Daniel has said in his blog that, starting from 2025, 95% of the HackerOne reports to the curl project were not valid. So yeah, sure, there were some valid reports coming in, but the majority were not.

  3. Solution: turn off the bug reporter and pay no bounties. Make it a ‘process’ to report a bug. Like, make a phone call, discuss it with a human, get an access code, then report the bug using the one-time access code. So if you really have a serious bug to report, you’ll have to make the ‘effort’.
