So first off, go take a look at this curl bug report. It’s an 8.6-severity security problem, a buffer overflow in WebSockets, and potentially a really bad one. But it’s bogus. Yes, a strcpy call can be dangerous if there aren’t proper length checks, but this code has pretty robust length checks. There just doesn’t seem to be a vulnerability here.
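To see the distinction, here’s a minimal illustrative sketch — not curl’s actual WebSocket code, and the buffer size and function name are made up — showing how an explicit length check before strcpy turns an “alarming” call into a safe one:

```c
/*
 * Illustrative sketch only -- not curl's actual code.
 * strcpy() is only dangerous when the destination size isn't
 * verified first; with a length check, the same call is fine.
 */
#include <stdio.h>
#include <string.h>

#define DEST_SIZE 64  /* hypothetical fixed buffer size */

/* Copy src into a fixed-size buffer, rejecting anything too long. */
static int safe_copy(char dest[DEST_SIZE], const char *src)
{
  /* Length check before the copy: refuse input that would overflow. */
  if (strlen(src) >= DEST_SIZE)
    return -1;            /* too long, nothing copied */

  strcpy(dest, src);      /* safe here: length already verified */
  return 0;
}

int main(void)
{
  char buf[DEST_SIZE];

  if (safe_copy(buf, "hello websocket frame") == 0)
    printf("copied: %s\n", buf);
  else
    printf("input rejected: would overflow\n");

  return 0;
}
```

A vulnerability scanner (or an LLM) that flags every strcpy it sees will happily report code like this, which is exactly the kind of noise the curl maintainers had to wade through.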
OK, so let’s jump to the punch line. This is a bug report that was generated with one of the Large Language Models (LLMs), like Google Bard or ChatGPT. And it shouldn’t be a surprise. There are some big bug bounties paid out, so naturally people are trying to leverage AI to score those bounties. But as [Daniel Stenberg] points out, LLMs are not actually AI, and the I in LLM stands for intelligence.
There have always been vulnerability reports of dubious quality, sent by people who either don’t understand how vulnerability research works, or are willing to waste maintainer time by sending in raw vulnerability scanner output without putting in any real effort. What LLMs add is an illusion of competence, so it takes a maintainer longer to wade through a report before realizing the claim is bogus. [Daniel] is more charitable than I might be, suggesting that LLMs may help with communicating real issues across language barriers. But still, this suggests that the long-term solution may be “simply” detecting LLM-generated reports and marking them as spam.