“We’re getting a lot of stuff that looks like gold, but it’s actually just crap,” said the founder of one security testing firm. AI-generated security vulnerability reports are already having an effect on bug hunting, for better and worse.
So-called AI slop, meaning LLM-generated low-quality images, videos, and text, has taken over the internet in the last couple of years, polluting websites, social media platforms, at least one newspaper, and even real-world events.
The world of cybersecurity is not immune to this problem. Over the last year, people across the industry have raised concerns about AI slop bug bounty reports: submissions that claim to have found vulnerabilities that do not actually exist, because a large language model simply invented the flaw and packaged it into a professional-looking writeup.
“People are receiving reports that sound reasonable, they look technically correct. And then you end up digging into them, trying to figure out, ‘oh no, where is this vulnerability?’,” Vlad Ionescu, the co-founder and CTO of RunSybil, a startup that develops AI-powered bug hunters, told TechCrunch.
“It turns out it was just a hallucination all along. The technical details were just made up by the LLM,” said Ionescu.
Ionescu, who used to work at Meta’s red team tasked with hacking the company from the inside, explained that one of the issues is that LLMs are designed to be helpful and give positive responses. “If you ask it for a report, it’s going to give you a report. And then people will copy and paste these into the bug bounty platforms and overwhelm the platforms themselves, overwhelm the customers, and you get into this frustrating situation,” said Ionescu.
“That’s the problem people are running into, is we’re getting a lot of stuff that looks like gold, but it’s actually just crap,” said Ionescu.
Just in the last year, there have been real-world examples of this. Harry Sintonen, a security researcher, revealed that the open source security project Curl received a fake report. “The attacker miscalculated badly,” Sintonen wrote in a post on Mastodon. “Curl can smell AI slop from miles away.”
In response to Sintonen’s post, Benjamin Piouffle of Open Collective, a tech platform for nonprofits, said that they have the same problem: that their inbox is “flooded with AI garbage.”
One open source developer, who maintains the CycloneDX project on GitHub, shut down their bug bounty program entirely earlier this year after receiving “almost entirely AI slop reports.”
The leading bug bounty platforms, which essentially act as intermediaries between bug bounty hackers and the companies willing to pay them for finding flaws in their products and software, are also seeing a spike in AI-generated reports, TechCrunch has learned.