Curated by THEOUTPOST
On Thu, 8 May, 12:03 AM UTC
2 Sources
[1]
Open source project curl is sick of users submitting "AI slop" vulnerabilities
"A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time," wrote Daniel Stenberg, original author and lead of the curl project, on LinkedIn this week. Curl (cURL in some realms), which turned 25 years old in 2023, is an essential command-line tool and library for interacting with Internet resources. The open source project receives bug reports and security issues through many channels, including HackerOne, a reporting service that helps companies manage vulnerability reporting. HackerOne has fervently taken to AI tools in recent years. "One platform, dual force: Human minds + AI power," the firm's home page reads. Stenberg, saying that he's "had it" and is "putting my foot down on this craziness," suggested that every suspected AI-generated HackerOne report will have its reporter asked to verify if they used AI to find the problem or generate the submission. If a report is deemed "AI slop," the reporter will be banned. "We still have not seen a single valid security report done with AI help," Stenberg wrote. One report from May 4 that Stenberg wrote "pushed me over the limit" suggested a "novel exploit leveraging stream dependency cycles in the HTTP/3 protocol stack." Stream dependency mishandling, where one aspect of a program waits for the output of another aspect, can lead to malicious data injection, race conditions and crashes, and other issues. The report in question suggests this could leave curl, which is HTTP/3-capable, vulnerable to exploits up to and including remote code execution. But as curl staff point out, the "malicious server setup" patch file submitted did not apply to the latest versions of a Python tool in question. Asked about this, the original submitter responded in a strangely prompt-like fashion, answering questions not asked by curl staff ("What is a Cyclic Dependency?") and included what seem like basic instructions on how to use the git tool to apply a new patch. But as curl staff note, the responder did not provide the requested new patch file, cited functions that do not exist in the underlying libraries, and suggested hardening tactics for utilities other than curl. Ars has reached out to HackerOne for comment and will update this post if we get a response. "More tools to strike down this behavior" In an interview with Ars, Stenberg said he was glad his post -- which generated 200 comments and nearly 400 reposts as of Wednesday morning -- was getting around. "I'm super happy that the issue [is getting] attention so that possibly we can do something about it [and] educate the audience that this is the state of things," Stenberg said. "LLMs cannot find security problems, at least not like they are being used here." This week had seen four such misguided, obviously AI-generated vulnerability reports seemingly seeking either reputation or bug bounty funds, Stenberg said. "One way you can tell is it's always such a nice report. Friendly phrased, perfect English, polite, with nice bullet-points ... an ordinary human never does it like that in their first writing," he said. Some AI reports are easier to spot than others. One accidentally pasted their prompt into the report, Stenberg said, "and he ended it with, 'and make it sound alarming.'" Stenberg said he had "talked to [HackerOne] before about this" and has reached out to the service this week. "I would like them to do something, something stronger, to act on this. 
I would like help from them to make the infrastructure around [AI tools] better and give us more tools to strike down this behavior," he said. In the comments of his post, Stenberg, trading comments with Tobias Heldt of open source security firm XOR, suggested that bug bounty programs could potentially use "existing networks and infrastructure." Security reporters paying a bond to have a report reviewed "could be one way to filter signals and reduce noise," Heldt said. Elsewhere, Stenberg said that while AI reports are "not drowning us, [the] trend is not looking good." Stenberg has previously blogged on his own site about AI-generated vulnerability reports, with more details on what they look like and get wrong. Seth Larson, security developer-in-residence at the Python Software Foundation, added to Stenberg's findings with his own examples and suggested actions, as noted by The Register. "If this is happening to a handful of projects that I have visibility for, then I suspect that this is happening on a large scale to open source projects," Larson wrote in December. "This is a very concerning trend."
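The rejected report leaned on the idea of "stream dependency cycles," situations where one part of a program blocks waiting on the output of another. Purely as a general illustration of that circular-wait failure mode (a minimal Python sketch, not curl's HTTP/3 code, and not evidence that any such cycle is actually reachable in curl), consider two tasks that each wait for the other:

```python
import threading
import queue

# Two tasks, each blocked waiting for the other's output: a dependency cycle.
# Neither queue is ever fed first, so both get() calls block forever (deadlock).
a_out: queue.Queue = queue.Queue()
b_out: queue.Queue = queue.Queue()

def task_a() -> None:
    data = b_out.get()               # A waits for B's output...
    a_out.put(f"A processed {data}")

def task_b() -> None:
    data = a_out.get()               # ...while B waits for A's output.
    b_out.put(f"B processed {data}")

ta = threading.Thread(target=task_a, daemon=True)
tb = threading.Thread(target=task_b, daemon=True)
ta.start()
tb.start()

ta.join(timeout=2)                   # give the tasks a moment to (not) finish
print("deadlocked" if ta.is_alive() else "completed")   # prints "deadlocked"
```

A cycle like this stalls the components involved; demonstrating anything beyond that, such as the memory corruption or remote code execution the report claimed, is exactly what the curl reviewers found the submission failed to do.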
[2]
Curl takes action against time-wasting AI bug reports
Curl project founder Daniel Stenberg is fed up with the deluge of AI-generated "slop" bug reports and recently introduced a checkbox to screen out low-effort submissions that are draining maintainers' time. Stenberg said the amount of time it takes project maintainers to triage each AI-assisted vulnerability report made via HackerOne, only for it to be deemed invalid, is tantamount to a DDoS attack on the project.

Citing a specific recent report that "pushed [him] over the limit," Stenberg said via LinkedIn: "That's it. I've had it. I'm putting my foot down on this craziness." From now on, every HackerOne report claiming to have found a bug in curl, a command-line tool and library for transferring data with URLs, must disclose whether AI was used to generate the submission. If it was, the reporter can expect a barrage of follow-up questions demanding proof that the bug is genuine before the curl team spends time verifying it.

"We now ban every reporter instantly who submits reports we deem AI slop," Stenberg added. "A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time." He went on to say that the project has never received a single valid bug report that was generated using AI, even as such reports become more frequent. "These kinds of reports did not exist at all a few years ago, and the rate seems to be increasing," Stenberg said, replying to a follower. "Still not drowning us, but the trend is not looking good."

These concerns are not new. Python's Seth Larson also raised concerns about AI slop reports back in December, saying that responding to them is expensive and time-consuming because, on face value, they seem legitimate and must be investigated by trained eyes before being confirmed as bogus. "Security reports that waste maintainers' time result in confusion, stress, frustration, and to top it off, a sense of isolation due to the secretive nature of security reports," Larson wrote. "All of these feelings can add to burnout of likely highly trusted contributors to open source projects. In many ways, these low-quality reports should be treated as if they are malicious. Even if this is not their intent, the outcome is maintainers that are burnt out and more averse to legitimate security work."

Stenberg's decision to add an AI filter to HackerOne reports follows years of frustration with the practice. He raised the issue as far back as January 2024, saying that reports made with Google Bard, as Gemini was called back then, were "crap" but better crap. The comment anticipated the same point Larson made almost a year later: AI reports look legitimate at first, but take time to reveal issues like hallucinations.

The issue is especially damaging for open source software projects like curl and Python, which largely depend on the work of a small number of unpaid volunteer specialists. Developers come and go on these projects, often staying just long enough to fix a bug they reported or contribute a feature before moving on. At the time of writing, curl's website states that at least 3,379 people have individually contributed to the project since Stenberg founded it in 1998.
Curl offers bounty rewards of up to $9,200 for the discovery and report of a critical vulnerability in the project, and has paid out $16,300 in rewards since 2019. According to its HackerOne page, it received 24 reports in the previous 90 days, none of which led to payouts, and as Stenberg said in his LinkedIn post, none of the AI-assisted reports made in the last six years have uncovered a genuine bug.

Generative AI tools have allowed low-skilled individuals with an awareness of bug bounty programs to quickly file reports based on AI-generated content in the hope of cashing in on the rewards on offer. However, Stenberg said that it is not just newbies and grifters using AI to chance their luck on a bounty program - those with a degree of reputation are also getting in on the act.

The report that pushed the project founder over the edge was made two days ago and was a textbook AI-generated submission. It claimed that "a novel exploit leveraging stream dependency cycles in the HTTP/3 protocol stack was discovered, resulting in memory corruption and potential denial-of-service or remote code execution scenarios." Ultimately, though, it was found to refer to nonexistent functions.

Stenberg said: "What fooled me for a short while was that it sounded almost plausible, combined with the fact that the reporter actually had proper 'reputation' (meaning that this person has reported and have had many previous reports vetted as fine). Plus, of course, that we were preoccupied over the day with the annual curl up meeting."
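Both write-ups describe curl as "a command-line tool and library for transferring data with URLs." For readers who only know the command-line side, here is a minimal sketch of the library side using pycurl, a third-party Python binding for libcurl; the URL is a placeholder and error handling is omitted:

```python
import pycurl
from io import BytesIO

# Fetch a URL through libcurl and report the HTTP status code.
buffer = BytesIO()
handle = pycurl.Curl()
handle.setopt(pycurl.URL, "https://example.com")   # placeholder URL
handle.setopt(pycurl.FOLLOWLOCATION, True)          # follow redirects
handle.setopt(pycurl.WRITEDATA, buffer)             # collect the response body
handle.perform()
print("status:", handle.getinfo(pycurl.RESPONSE_CODE))
print("bytes received:", len(buffer.getvalue()))
handle.close()
```

The equivalent transfer on the command line would be roughly `curl -L https://example.com`.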
The open-source curl project implements measures to combat the increasing influx of AI-generated vulnerability reports, which are draining maintainers' time and resources.
The open-source curl project, a crucial tool for internet data transfer, is grappling with an unprecedented challenge: a flood of AI-generated vulnerability reports. Daniel Stenberg, the project's founder and lead developer, has taken a firm stance against what he terms "AI slop" submissions, implementing new measures to combat this growing issue [1].
Stenberg describes the situation as effectively being "DDoSed" by low-quality, AI-generated reports submitted through platforms like HackerOne. These reports, while appearing legitimate at first glance, often contain hallucinations, reference non-existent functions, and waste maintainers' valuable time [2].
The curl project has never received a valid security report generated with AI assistance, yet the frequency of such submissions is increasing. Stenberg notes, "These kinds of reports did not exist at all a few years ago, and the rate seems to be increasing" [2].
This issue extends beyond curl, affecting various open-source projects. Seth Larson, security developer-in-residence at the Python Software Foundation, has also raised concerns about the trend. He emphasizes that responding to these reports is expensive and time-consuming, potentially leading to burnout among trusted contributors [1].
To address this challenge, the curl project has introduced several measures: every HackerOne submission must now disclose whether AI was used to find or write up the issue, reporters who answer yes face follow-up questions demanding verifiable proof before maintainers spend time on the claim, and anyone whose report is judged to be "AI slop" is banned immediately [2].
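Curl's actual gate is a form field on HackerOne rather than code, but as a purely hypothetical sketch of the triage policy just described (every name and field below is invented for illustration), the flow looks roughly like this:

```python
from dataclasses import dataclass

@dataclass
class Report:
    reporter: str
    used_ai: bool           # the new disclosure checkbox on the submission form
    has_working_poc: bool   # did follow-up questions produce verifiable proof?
    is_slop: bool           # triager's judgment after reviewing the report

def triage(report: Report, banned: set[str]) -> str:
    """Hypothetical intake filter mirroring the policy described above."""
    if report.reporter in banned:
        return "rejected: reporter previously banned"
    if report.is_slop:
        banned.add(report.reporter)          # instant ban for AI slop
        return "rejected and banned: AI slop"
    if report.used_ai and not report.has_working_poc:
        return "on hold: AI-assisted report needs verifiable proof first"
    return "queued for maintainer review"

banned_reporters: set[str] = set()
print(triage(Report("reporter-123", used_ai=True, has_working_poc=False, is_slop=False),
             banned_reporters))
# -> "on hold: AI-assisted report needs verifiable proof first"
```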
The situation highlights a growing tension between AI-assisted bug hunting and the reality of open-source project maintenance. While AI tools promise to enhance security research, their current implementation appears to be creating more problems than solutions for projects like curl.
Stenberg and others suggest that bug bounty programs might need to evolve; one idea floated in the discussion, by XOR's Tobias Heldt, is requiring reporters to post a bond to have a report reviewed, filtering out low-quality submissions [1].
The issue has garnered attention within the tech community, with Stenberg's LinkedIn post on the matter generating significant engagement. He hopes this attention will lead to broader awareness and potential solutions to the problem [1].
As AI tools continue to evolve, the challenge of balancing their use in security research with the need for high-quality, human-verified reports remains a critical issue for open-source projects and the wider software security ecosystem.