The CURL Project Drops Bug Bounties Due To AI Slop

Over the past few years, the author of the cURL project, [Daniel Stenberg], has repeatedly complained about the increasingly poor quality of bug reports filed due to LLM chatbot-induced confabulations, also known as ‘AI slop’. This has now led the project to suspend its bug bounty program starting February 1, 2026.

Examples of such slop are provided by [Daniel] in a GitHub gist, which covers a wide range of very intimidating-looking vulnerabilities and seemingly clear exploits. Except that none of them turn out to be vulnerabilities when actually examined by a knowledgeable developer. Each is a lengthy word salad that an LLM churned out in seconds, yet which takes a human significantly longer to parse, before dealing with the typical diatribe from the submitter.

Although there are undoubtedly still valid reports coming in, the truth of the matter is that the ease with which anyone with access to an LLM chatbot and some spare time can generate bogus reports has completely flooded the bug bounty system, overwhelming the very human developers who have to dig through the proverbial midden to find that one diamond ring.

We have mentioned before how troubled bounty programs are for open source, and how projects like Mesa have already had to fight off AI slop incidents from people with zero understanding of software development.

10 thoughts on “The CURL Project Drops Bug Bounties Due To AI Slop”

  1. So the ability of partisan entities to create elaborate content with little to no practical real-world use by leveraging the language abilities of AI is overwhelming the limited resources of actual humans to decipher and prioritise it? I would suggest we use an AI to prioritise and manage the submissions. (OK, that last sentence is just humour.) These are all CLEAR red flags that apply across the board to AI, and the use thereof in almost any field. I’d hope people will wake up to the real potential of AI and balance it against the very real risks.

    1. An optionally refundable fee only for the first submission, or first several submissions, by that account. After that, all further submissions should be free. You want to penalise those who make new accounts to evade bans, not those who provide valuable input. You only need to ban an account once.

  2. “Although there are undoubtedly still valid reports coming in [..]”

    Daniel has said on his blog that as of 2025, 95% of the HackerOne reports to the curl project were not valid. So yes, there were some valid reports coming in, but the majority were not.
