[1]
Hackers are exploiting a critical LiteLLM pre-auth SQLi flaw
Hackers are targeting sensitive information stored in the LiteLLM open-source large language model (LLM) gateway by exploiting a critical vulnerability tracked as CVE-2026-42208. The flaw is an SQL injection issue in LiteLLM's proxy API key verification step. An attacker can exploit it without authentication by sending a specially crafted Authorization header to any LLM API route, allowing them to read and modify data in the proxy's database. According to the maintainers' security advisory, threat actors could use it for "unauthorised access to the proxy and the credentials it manages." A fix was delivered in LiteLLM version 1.83.7, which replaces string concatenation with parameterized queries.

LiteLLM stores API keys, virtual and master keys, and environment/config secrets, so accessing its database lets hackers read sensitive data they can then use to launch additional attacks. LiteLLM is a popular proxy/SDK middleware layer that lets users call AI models via a single unified API. The project is widely used by developers of LLM apps and platforms managing multiple models, and it has 45k stars and 7.6k forks on GitHub. The project was also recently targeted in a supply-chain attack, in which TeamPCP hackers released malicious PyPI packages that deployed an infostealer to harvest credentials, tokens, and secrets from infected systems.

In a report, researchers at Sysdig, a cloud security company, say that CVE-2026-42208 exploitation started approximately 36 hours after the bug was disclosed publicly on April 24. The researchers observed deliberate and targeted exploitation attempts that sent crafted requests to '/chat/completions' with a malicious 'Authorization: Bearer' header. These requests queried specific tables containing API keys, provider (OpenAI, Anthropic, Bedrock) credentials, environment data, and configs.
Sysdig explained that there were no probes against benign tables, and "the operator went straight to where the secrets live," a strong indicator that the attacker knew exactly what to target. In the second phase of the attack, the threat actor switched IP addresses, likely for evasion, and reran the same SQL injection attempts, now focused on the correct table names and structures derived in the first phase, using fewer, more precise payloads. Sysdig notes that, while 36 hours is not as quick as the exploitation of a recent flaw in Marimo, the attacks were targeted and specific.

The researchers warned that exposed LiteLLM instances still running vulnerable versions should be treated as potentially compromised, and that every virtual API key, master key, and provider credential stored in internet-exposed LiteLLM instances should be rotated. For those who can't upgrade to LiteLLM 1.83.7 or later, the maintainers suggest the workaround of setting 'disable_error_logs: true' under 'general_settings' to block the path through which malicious inputs can reach the vulnerable query.
[2]
LiteLLM CVE-2026-42208 SQL Injection Exploited within 36 Hours of Disclosure
In yet another instance of threat actors quickly jumping on the exploitation bandwagon, a newly disclosed critical security flaw in BerriAI's LiteLLM Python package has come under active exploitation in the wild within 36 hours of the bug becoming public knowledge. The vulnerability, tracked as CVE-2026-42208 (CVSS score: 9.3), is an SQL injection that could be exploited to modify the underlying LiteLLM proxy database.

"A database query used during proxy API key checks mixed the caller-supplied key value into the query text instead of passing it as a separate parameter," LiteLLM maintainers said in an alert last week. "An unauthenticated attacker could send a specially crafted Authorization header to any LLM API route (for example, POST /chat/completions) and reach this query through the proxy's error-handling path. An attacker could read data from the proxy's database and may be able to modify it, leading to unauthorized access to the proxy and the credentials it manages."

The shortcoming affects the following versions:

* >=1.81.16
* <1.83.7

While the vulnerability was addressed in version 1.83.7-stable, released on April 19, 2026, the first exploitation attempt was recorded on April 26 at 16:17 UTC, roughly 26 hours and seven minutes after the GitHub advisory was indexed in the global GitHub Advisory Database. The SQL injection activity, per Sysdig, originated from the IP address 65.111.27[.]132.

"Malicious activity fell into two phases driven by the same operator across two adjacent egress IPs, followed by a brief unauthenticated probe of the key-management endpoints," security researcher Michael Clark said. Specifically, the unknown threat actor is said to have targeted database tables like "litellm_credentials.credential_values" and "litellm_config" that hold information related to upstream large language model (LLM) provider keys and the proxy runtime environment. No probes were observed against tables like "litellm_users" or "litellm_team."
This suggests that the attacker was not only aware of these tables, but also went straight after those holding sensitive secrets. In the second phase of the attack, observed 20 minutes later, the threat actor used a different IP address ("65.111.25[.]67") to run a similar probe.

LiteLLM is a popular, open-source AI Gateway with over 45,000 stars and 7,600 forks on GitHub. Last month, the project was the target of a supply chain attack orchestrated by the TeamPCP hacking group to steal credentials and secrets from downstream users. "A single litellm_credentials row often holds an OpenAI organization key with five-figure monthly spend caps, an Anthropic console key with workspace admin rights, and an AWS Bedrock IAM credential," Sysdig said. "The blast radius of a successful database extraction is closer to a cloud-account compromise than a typical web-app SQL injection."

Users are advised to patch their instances to the latest version. If this is not an immediate option, the maintainers recommend setting "disable_error_logs: true" under "general_settings" to remove the path through which untrusted input reaches the vulnerable query.

"The LiteLLM vulnerability (GHSA-r75f-5x8p-qvmc) continues the modal pattern for AI-infrastructure advisories: critical, pre-auth, and in software with five-figure star counts that operators trust to centralize cloud-grade credentials," Sysdig added. "The 36-hour exploit window is consistent with the broader collapse documented by the Zero Day Clock, and the operator behavior we recorded (verbatim Prisma table names, three-table targeting, deliberate column-count enumeration) shows that exploitation no longer waits for a public PoC. The advisory and the open-source schema were ultimately enough."
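The affected range listed in the advisory can be checked mechanically. A minimal sketch, assuming plain dotted numeric version strings (no pre-release suffixes); the two boundary versions come from the advisory, everything else here is illustrative:

```python
# Check a litellm version string against the advisory's affected range:
# >=1.81.16 (first affected) and <1.83.7 (first fixed).

def parse(version: str) -> tuple:
    """Turn '1.82.0' into (1, 82, 0) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

FIRST_AFFECTED = parse("1.81.16")
FIRST_FIXED = parse("1.83.7")

def is_vulnerable(version: str) -> bool:
    return FIRST_AFFECTED <= parse(version) < FIRST_FIXED

print(is_vulnerable("1.82.0"))   # True  -> patch and rotate credentials
print(is_vulnerable("1.83.7"))   # False -> carries the fix
print(is_vulnerable("1.81.15"))  # False -> predates the flaw
```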
A critical SQL injection vulnerability in BerriAI's LiteLLM open-source AI Gateway was exploited within 36 hours of public disclosure. CVE-2026-42208 allows unauthenticated attackers to access sensitive API keys, provider credentials, and configuration secrets stored in the proxy database. The targeted attacks demonstrate how rapidly threat actors can weaponize vulnerabilities in AI infrastructure.
Threat actors have begun exploiting CVE-2026-42208, a critical SQL injection vulnerability in BerriAI's LiteLLM, just 36 hours after the flaw became public knowledge on April 24, 2026 [1][2]. The open-source AI Gateway, which has garnered 45,000 stars and 7,600 forks on GitHub, serves as a critical middleware layer enabling developers to interact with multiple large language models through a unified API [1]. The vulnerability carries a CVSS score of 9.3 and allows unauthenticated attackers to read and modify the LiteLLM proxy database by sending specially crafted Authorization headers to any LLM API route [2].
Source: Hacker News
The flaw occurs during the proxy API key verification step, where LiteLLM mixed caller-supplied key values directly into the query text instead of using parameterized queries [2]. An attacker can exploit this pre-auth weakness by targeting routes like POST /chat/completions with malicious Authorization: Bearer headers [1]. The vulnerability affects versions >=1.81.16 and <1.83.7; the fix, delivered in version 1.83.7-stable on April 19, 2026, replaces string concatenation with parameterized queries [1][2]. The severity stems from what LiteLLM stores: API keys and credentials for providers like OpenAI, Anthropic, and AWS Bedrock, along with virtual keys, master keys, and environment configuration secrets [1].
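The bug class described above can be shown in miniature. This is a hypothetical sketch using Python's sqlite3, not LiteLLM's actual code; the table and key values are invented for illustration:

```python
import sqlite3

# Toy key-lookup table standing in for a proxy's credential store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (token TEXT, owner TEXT)")
conn.execute("INSERT INTO api_keys VALUES ('sk-good', 'alice')")

def lookup_vulnerable(token: str):
    # VULNERABLE: the caller-supplied token is concatenated into the query
    # text, so a crafted Authorization header value can rewrite the query.
    query = f"SELECT owner FROM api_keys WHERE token = '{token}'"
    return conn.execute(query).fetchall()

def lookup_parameterized(token: str):
    # FIXED: the token travels as a bound parameter, never as query text.
    return conn.execute(
        "SELECT owner FROM api_keys WHERE token = ?", (token,)
    ).fetchall()

# A classic injection payload matches every row in the vulnerable version...
print(lookup_vulnerable("' OR '1'='1"))      # [('alice',)]
# ...but matches nothing when it is passed as a parameter.
print(lookup_parameterized("' OR '1'='1"))   # []
```

The actual fix in 1.83.7 applies the same idea to LiteLLM's real database query: untrusted input is bound as a parameter rather than spliced into SQL.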
Researchers at Sysdig observed the first exploitation attempt on April 26 at 16:17 UTC, originating from IP address 65.111.27[.]132 [2]. The attacks unfolded in two deliberate phases, with the threat actor demonstrating precise knowledge of the database structure. Security researcher Michael Clark noted that the attacker went directly to tables containing sensitive data, specifically targeting "litellm_credentials.credential_values" and "litellm_config" while ignoring benign tables like "litellm_users" or "litellm_team" [1][2]. In the second phase, occurring 20 minutes later, the attacker switched to IP address 65.111.25[.]67, likely for evasion, and executed fewer, more precise payloads based on the information gathered initially [1][2].
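For triage, one crude signal is a bearer token containing SQL metacharacters, since legitimate keys are opaque strings. A rough sketch over access logs, assuming a simplified hypothetical log format (real LiteLLM and reverse-proxy log formats will differ, and this heuristic is far from exhaustive):

```python
import re

# Heuristic: bearer tokens should be opaque key material, so a quote,
# SQL comment marker, or SQL keyword inside one is a strong injection signal.
SUSPICIOUS = re.compile(
    r"Bearer\s+\S*('|--|\bUNION\b|\bSELECT\b)", re.IGNORECASE
)

def flag_suspicious(log_lines):
    """Yield log lines whose Authorization bearer value looks like SQLi."""
    for line in log_lines:
        if SUSPICIOUS.search(line):
            yield line

# Hypothetical log lines for illustration only.
sample = [
    "POST /chat/completions Authorization: Bearer sk-abc123",
    "POST /chat/completions Authorization: Bearer sk' UNION SELECT 1--",
]
print(list(flag_suspicious(sample)))  # flags only the second line
```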
The blast radius of successful exploitation extends far beyond typical web application breaches. According to Sysdig, a single litellm_credentials row often contains an OpenAI organization key with five-figure monthly spend caps, an Anthropic console key with workspace admin rights, and AWS Bedrock IAM credentials [2]. This positions the attack closer to a cloud-account compromise than a standard SQL injection incident. The 36-hour window aligns with the broader pattern documented by the Zero Day Clock, showing that exploitation no longer requires public proof-of-concept code: the advisory and the open-source schema provide sufficient information for skilled attackers [2]. Adding to concerns, LiteLLM recently faced a supply-chain attack in which TeamPCP hackers released malicious PyPI packages deploying infostealers to harvest credentials and secrets [1].
Sysdig researchers warn that any internet-exposed LiteLLM instance running a vulnerable version should be treated as potentially compromised [1]. Organizations should immediately rotate credentials, including every virtual API key, master key, and provider credential stored in affected instances [1]. The primary mitigation is to patch to LiteLLM version 1.83.7 or later [1]. For environments where immediate patching isn't feasible, the maintainers recommend setting 'disable_error_logs: true' under 'general_settings' as a workaround to block the path through which malicious inputs reach the vulnerable query [1][2]. The incident underscores a troubling pattern in AI infrastructure: critical, pre-auth vulnerabilities in widely trusted software that centralizes cloud-grade credentials, making such systems high-value targets for sophisticated threat actors [2].
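In the proxy's YAML configuration, the maintainers' workaround is a single setting; a minimal fragment reflecting their guidance (any other settings in your config stay as they are):

```yaml
# LiteLLM proxy config.yaml
general_settings:
  # Workaround for CVE-2026-42208 when upgrading to >=1.83.7 is not possible:
  # disables the error-logging path through which untrusted input
  # reaches the vulnerable query.
  disable_error_logs: true
```

This is a mitigation, not a fix: patching and rotating stored credentials remain necessary.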
Summarized by Navi