Curated by THEOUTPOST
On Mon, 20 Jan, 4:01 PM UTC
4 Sources
[1]
ChatGPT Could Theoretically Carry Out DDoS Attacks in the Wrong Hands
As the world comes to terms with AI assistants, we're learning all the interesting things they can do, for better and for worse. One example that lands firmly in the latter camp is a theoretical attack that forces ChatGPT to DDoS a chosen website, though it's not a "real" threat just yet.

ChatGPT Can Perform a Limitless Number of Hyperlink Connections in a Single Request

As reported by Silicon Angle, a researcher named Benjamin Flesch discovered that ChatGPT doesn't limit how many links it accesses when generating a response. Nor does the service check whether the URLs it's accessing are duplicates of websites it has already visited. The end result is a theoretical attack where a bad actor gets ChatGPT to connect to the same website thousands of times per query:

The vulnerability can be exploited to overwhelm any website a malicious user wants to target. By including thousands of hyperlinks in a single request, an attacker can cause the OpenAI servers to generate a massive volume of HTTP requests to the victim's website. The simultaneous connections can strain or even disable the targeted site's infrastructure, effectively enacting a DDoS attack.

Flesch believes the flaw came about due to "poor programming practices" and that if OpenAI added some restrictions on how ChatGPT crawls the internet, it wouldn't have the potential to perform an "accidental" DDoS attack on servers. Elad Schulman, founder and chief executive of generative AI security company Lasso Security Inc., agreed with Flesch's conclusion while adding another potential exploit for these attacks. Schulman believes that if a hacker managed to compromise someone's OpenAI account, they could "easily spend a monthly budget of a large language model-based chatbot in just a day," which would do financial damage if no guardrails protect against such practices.
Hopefully, as AI evolves, companies will add restrictions to prevent bad actors from abusing their services. For instance, there are already plenty of ways hackers use generative AI in their attacks, and there has been a nasty rise in AI video scam calls as the technology improves in quality.
[2]
Can ChatGPT be used for DDoS attacks? Researchers say yes
According to a report published by The Register, OpenAI's ChatGPT crawler is reportedly vulnerable to manipulation, allowing it to potentially initiate distributed denial of service (DDoS) attacks on arbitrary websites. The issue remains unacknowledged by the company.

A write-up from security researcher Benjamin Flesch, shared this month via Microsoft's GitHub, details how a single HTTP request to the ChatGPT API can instigate a flood of network requests from the ChatGPT crawler, specifically the ChatGPT-User agent. This vulnerability can amplify one API request into as many as 5,000 requests directed at a targeted website every second.

Flesch describes the flaw as a "severe quality defect" in the handling of HTTP POST requests to a specific API endpoint called by OpenAI's ChatGPT. This endpoint is used to return information on web sources cited in the chatbot's output. When the chatbot references specific URLs, the attributions API fetches information from these sites. An attacker can craft a long list of URLs, each slightly different yet pointing to the same site, resulting in simultaneous requests to that site.

According to Flesch, the API does not verify whether hyperlinks are repeated within the list, nor does it enforce a limit on the total number of hyperlinks submitted. This allows an attacker to send thousands of hyperlinks in a single HTTP request, effectively flooding the target website.

Using a tool like Curl, attackers can submit an HTTP POST request to the ChatGPT endpoint without needing an authentication token. OpenAI's servers on Microsoft Azure will respond by initiating requests for each hyperlink sent through the request parameter. This can overwhelm the targeted website, as the crawler, proxied through Cloudflare, will access the site from a different IP address with each request.
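Flesch's point about duplicate-checking is straightforward to illustrate. The sketch below is not OpenAI's code but a hypothetical defensive check of the kind he describes as missing: it canonicalizes common spelling variants of a URL so that differently written links to the same resource collapse to a single fetch.

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url: str) -> str:
    """Collapse common spelling variants of the same URL."""
    parts = urlsplit(url.strip())
    scheme = parts.scheme.lower()
    host = parts.hostname.lower() if parts.hostname else ""
    # Default ports change the spelling but not the target, so drop them.
    port = parts.port
    if port and not ((scheme == "http" and port == 80) or
                     (scheme == "https" and port == 443)):
        host = f"{host}:{port}"
    path = parts.path or "/"
    # Fragments never reach the server, so drop them too.
    return urlunsplit((scheme, host, path, parts.query, ""))

def dedupe(urls: list[str]) -> list[str]:
    """Keep only the first occurrence of each canonical URL."""
    seen, unique = set(), []
    for u in urls:
        c = canonicalize(u)
        if c not in seen:
            seen.add(c)
            unique.append(u)
    return unique
```

With this in place, a list like `["https://example.com/", "HTTPS://EXAMPLE.COM:443", "https://example.com"]` collapses to a single entry before any request is dispatched, which is exactly the kind of validation Flesch says the attributions endpoint lacks.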
The victim site would likely receive requests from approximately 20 different IP addresses simultaneously, making it difficult to trace the source of the attack. Even if the website enables a firewall to block the IPs associated with the ChatGPT bot, the bot will continue to send requests. "Due to this amplification, the attacker can send a small number of requests to the ChatGPT API, but the victim will receive a very large number of requests," Flesch explained.

Flesch reported the unauthenticated reflective DDoS vulnerability through multiple channels, including the OpenAI BugCrowd platform and Microsoft's security teams, but received no responses. The Register also reached out to OpenAI for comment but did not receive a reply.

Additionally, Flesch pointed out another issue related to this API: it is vulnerable to prompt injection. This flaw allows the crawler to process arbitrary questions using the same attributions API endpoint, rather than only fetching website data as intended.

Flesch criticized OpenAI for failing to implement basic security measures, such as deduplicating URLs or limiting the size of URL lists. He speculated that the API might be an experimental project for OpenAI's AI agents, lacking the validation logic that established norms in software development would require to prevent this type of abuse. "I cannot imagine a highly-paid Silicon Valley engineer designing software like this, because the ChatGPT crawler has been crawling the web for many years, just like the Google crawler," Flesch stated. "If crawlers don't limit their amount of requests to the same website, they will get blocked immediately."
[3]
OpenAI's ChatGPT Crawler Can Be Used to DDoS Websites, Researcher Claims
The vulnerability was given a high severity rating by the researcher

OpenAI's ChatGPT application programming interface (API) has a vulnerability that can be exploited to initiate a distributed denial of service (DDoS) attack on websites, according to details shared by a cybersecurity researcher. The chatbot can reportedly be used to send thousands of network requests to a website using the ChatGPT crawler. The researcher claims that the vulnerability, which he gave a high severity rating, is still active, with no response from the company on when the issue will be fixed.

In a GitHub post shared earlier this month, Germany-based security researcher Benjamin Flesch detailed the vulnerability in the ChatGPT API. The researcher also posted proof-of-concept code that sends 50 parallel HTTP requests to a test website, showing how the bug can be used to trigger a DDoS attack.

According to Flesch, the vulnerability surfaces when handling HTTP POST requests to https://chatgpt.com/backend-api/attributions. POST is a method for sending data to a server, typically used by an API endpoint to create new resources. When this endpoint is called, the ChatGPT API expects a list of hyperlinks in the urls parameter. In what appears to be a flaw in the API, OpenAI does not check whether a hyperlink to the same resource appears multiple times in the list, according to the researcher. Since hyperlinks to a website can be written in different ways, this results in the crawler sending multiple parallel network requests to the same website.

Additionally, Flesch claims OpenAI does not enforce a limit on the maximum number of hyperlinks that can be added to the urls parameter and sent in a single request. As a result, a malicious actor can potentially send thousands of hits to a website, which could quickly overwhelm its server.
The security researcher gave this vulnerability a high severity CVSS rating of 8.6, since it is network-based, has low attack complexity, and requires no privileges or user interaction, yet can have a high impact on availability.

Flesch claimed to have reached out to both OpenAI and Microsoft (whose servers host the ChatGPT API) about the vulnerability multiple times via different channels after discovering the bug in January. He said he reported it to the OpenAI security team, to OpenAI employees, to the OpenAI data privacy officer, as well as to Microsoft's security and Azure network operations teams. Despite these attempts to flag the vulnerability, the researcher claimed that the issue is neither resolved nor acknowledged by the AI firm. Gadgets 360 staff members were not able to verify the presence of the bug in the chatbot.
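The article doesn't publish Flesch's exact CVSS vector, but the properties he cites (network attack vector, low complexity, no privileges, no user interaction, high availability impact, with a changed scope since the attack crosses from OpenAI's infrastructure to the victim's) are consistent with the assumed vector CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:N/I:N/A:H, which does score 8.6. The sketch below works through the CVSS 3.1 base-score arithmetic for that assumed vector:

```python
import math

def roundup(x: float) -> float:
    """CVSS 3.1 Roundup: smallest one-decimal number >= x."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

# Assumed vector (not published in the article):
# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:N/I:N/A:H
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
conf, integ, avail = 0.0, 0.0, 0.56       # no C/I impact, High availability impact
scope_changed = True

iss = 1 - (1 - conf) * (1 - integ) * (1 - avail)          # 0.56
if scope_changed:
    impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
else:
    impact = 6.42 * iss
exploitability = 8.22 * av * ac * pr * ui
base = roundup(min((1.08 if scope_changed else 1.0) * (impact + exploitability), 10))
print(base)  # 8.6
```

The unrounded score comes out around 8.51, which the spec's Roundup function lifts to 8.6, matching the rating Flesch assigned.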
[4]
ChatGPT crawler flaw opens door to DDoS, prompt injection
OpenAI's ChatGPT crawler appears to be willing to initiate distributed denial of service (DDoS) attacks on arbitrary websites, a reported vulnerability the tech giant has yet to acknowledge.

In a write-up shared this month via Microsoft's GitHub, Benjamin Flesch, a security researcher in Germany, explains how a single HTTP request to the ChatGPT API can be used to flood a targeted website with network requests from the ChatGPT crawler, specifically ChatGPT-User. This flood of connections may or may not be enough to knock over any given site, practically speaking, though it's still arguably a danger and a bit of an oversight by OpenAI. It can be used to amplify a single API request into 20 to 5,000 or more requests to a chosen victim's website, every second, over and over again.

"ChatGPT API exhibits a severe quality defect when handling HTTP POST requests to https://chatgpt.com/backend-api/attributions," Flesch explains in his advisory, referring to an API endpoint called by OpenAI's ChatGPT to return information about web sources cited in the chatbot's output. When ChatGPT mentions specific websites, it will call this endpoint with a list of URLs to those sites for its crawler to go access and fetch information about. If you throw a big long list of URLs at the API, each slightly different but all pointing to the same site, the crawler will go off and hit every one of them at once.

"The API expects a list of hyperlinks in parameter urls. It is commonly known that hyperlinks to the same website can be written in many different ways," Flesch wrote. "Due to bad programming practices, OpenAI does not check if a hyperlink to the same resource appears multiple times in the list. OpenAI also does not enforce a limit on the maximum number of hyperlinks stored in the urls parameter, thereby enabling the transmission of many thousands of hyperlinks within a single HTTP request."
The victim will never know what hit them

Thus, using a tool like Curl, an attacker can send an HTTP POST request - without any need for an authentication token - to that ChatGPT endpoint, and OpenAI's servers in Microsoft Azure will respond by initiating an HTTP request for each hyperlink submitted via the urls parameter. When those requests are directed at the same website, they can potentially overwhelm the target, causing DDoS symptoms - the crawler, proxied through Cloudflare, will visit the targeted site from a different IP address each time.

"The victim will never know what hit them, because they only see the ChatGPT bot hitting their website from about 20 different IP addresses simultaneously," Flesch told The Register, adding that even if the victim enabled a firewall to block the IP address range used by the ChatGPT bot, the bot would still send requests. "So one failed/blocked request would not prevent the ChatGPT bot from requesting the victim website again in the next millisecond."

"Due to this amplification, the attacker can send a small number of requests to ChatGPT API, but the victim will receive a very large number of requests," Flesch explained.

Flesch says he reported this unauthenticated reflective DDoS vulnerability through numerous channels - OpenAI's BugCrowd vulnerability reporting platform, OpenAI's security team email, Microsoft (including Azure), and HackerOne - but has heard nothing. The Register reached out twice to Microsoft-backed OpenAI and has not heard back.

"I'd say the bigger story is that this API was also vulnerable to prompt injection," he said, in reference to a separate vulnerability disclosure. "Why would they have prompt injection for such a simple task? I think it might be because they're dogfooding their autonomous 'AI agent' thing."
That second issue can be exploited to make the crawler answer queries via the same API endpoint: you can feed questions to the bot and it will answer them, when it's really only supposed to fetch websites.

Flesch questioned why OpenAI's bot hasn't implemented simple, established methods to properly deduplicate URLs in a requested list or to limit the size of that list, nor managed to avoid the prompt injection vulnerabilities that have been addressed in the main ChatGPT interface.

"To me it seems like this small API is an example project of their ChatGPT AI agents, and its task is to parse a URL out of user-provided data and then use Azure to fetch the website," he said.

"Does the 'AI agent' not come with built-in security?" he asked. "Because obviously the 'AI agent' thing that was handling the urls parameter had no concept of resource exhaustion, or why it would be stupid to send thousands of requests in the same second to the same web domain.

"Shouldn't it have recognized that two of those URLs point to the same website, and if one request is failing, why would it send another request immediately afterwards?

"These are all small pieces of validation logic that people have been implementing in their software for years, to prevent abuse like this."

Flesch said the only explanation that comes to mind is that OpenAI is using an AI agent to trigger these HTTP requests. "I cannot imagine a highly-paid Silicon Valley engineer designing software like this, because the ChatGPT crawler has been crawling the web for many years, just like the Google crawler," he said. "If crawlers don't limit their amount of requests to the same website, they will get blocked immediately."
A security researcher has uncovered a vulnerability in ChatGPT's crawler that could potentially be exploited for DDoS attacks and prompt injection, raising concerns about AI security and OpenAI's response to the issue.
Security researcher Benjamin Flesch has uncovered a significant vulnerability in OpenAI's ChatGPT crawler that could potentially be exploited to launch Distributed Denial of Service (DDoS) attacks on websites [1]. The flaw, which Flesch describes as a "severe quality defect," lies in the handling of HTTP POST requests to a specific API endpoint used by ChatGPT [2].
The vulnerability stems from ChatGPT's API not verifying if hyperlinks are repeated within a list or enforcing a limit on the total number of hyperlinks submitted [2]. This allows an attacker to send thousands of hyperlinks in a single HTTP request, potentially overwhelming a target website. The ChatGPT crawler, proxied through Cloudflare, accesses the site from different IP addresses with each request, making it difficult for victims to trace the source of the attack [2].
Flesch demonstrated that a single API request could be amplified into 20 to 5,000 or more requests to a chosen victim's website every second [4]. This amplification effect means an attacker can send a small number of requests to the ChatGPT API, resulting in a large number of requests to the victim's site [2].
In addition to the DDoS potential, Flesch identified another issue related to prompt injection. This flaw allows the crawler to process arbitrary questions using the same attributions API endpoint, rather than only fetching website data as intended [2][4].
Flesch assigned the vulnerability a high severity CVSS rating of 8.6, citing its network-based nature, low complexity in execution, and potential for high impact on availability [3]. He reported the issue through multiple channels, including OpenAI's BugCrowd platform and Microsoft's security teams, but claims to have received no response [2][3].
Despite multiple attempts to flag the vulnerability, Flesch states that the issue remains unresolved, and OpenAI has not acknowledged its existence [3]. The Register reached out to OpenAI for comment but did not receive a reply [4].
This vulnerability raises questions about security practices in AI development. Flesch criticized OpenAI for failing to implement basic security measures, such as deduplicating URLs or limiting the size of URL lists [2]. He speculated that the API might be an experimental project for OpenAI's AI agents, lacking the necessary validation logic to prevent abuse [4].
Elad Schulman, founder and CEO of Lasso Security Inc., agreed with Flesch's conclusions and highlighted another potential exploit. He suggested that if a hacker compromised someone's OpenAI account, they could "easily spend a monthly budget of a large language model-based chatbot in just a day," potentially causing financial damage [1].
As AI continues to evolve, this incident underscores the need for companies to implement robust security measures to prevent the abuse of their services. It also highlights the importance of responsible disclosure and timely responses to reported vulnerabilities in the AI industry.
OpenAI reports multiple instances of ChatGPT being used by cybercriminals to create malware, conduct phishing attacks, and attempt to influence elections. The company has disrupted over 20 such operations in 2024.
15 Sources
OpenAI's ChatGPT Search feature is found vulnerable to manipulation through hidden text and prompt injections, raising concerns about the reliability of AI-powered web searches.
2 Sources
A critical vulnerability in ChatGPT's macOS app could have allowed hackers to plant false memories, enabling long-term data exfiltration. The flaw, now patched, highlights the importance of AI security.
2 Sources
OpenAI's ChatGPT faced a significant global outage, affecting millions of users. The company quickly acknowledged the problem, investigated the cause, and implemented a fix, highlighting the growing reliance on AI technologies.
18 Sources
OpenAI has banned multiple accounts for misusing ChatGPT in surveillance and influence campaigns, highlighting the ongoing challenge of preventing AI abuse while maintaining its benefits for legitimate users.
15 Sources
© 2025 TheOutpost.AI All rights reserved