LiteLLM SQL Injection Flaw Exploited Within 36 Hours, Exposing API Keys and Credentials


A critical SQL injection vulnerability in BerriAI's LiteLLM open-source AI Gateway was exploited within 36 hours of public disclosure. CVE-2026-42208 allows unauthenticated attackers to access sensitive API keys, provider credentials, and configuration secrets stored in the proxy database. The targeted attacks demonstrate how rapidly threat actors can weaponize vulnerabilities in AI infrastructure.

Hackers Target LiteLLM SQL Injection Flaw

Threat actors have begun exploiting CVE-2026-42208, a critical SQL injection vulnerability in BerriAI's LiteLLM, just 36 hours after the flaw became public knowledge on April 24, 2026 [1][2]. The open-source AI Gateway, which has garnered 45,000 stars and 7,600 forks on GitHub, serves as a critical middleware layer enabling developers to interact with multiple large language models through a unified API [1]. The vulnerability carries a CVSS score of 9.3 and allows unauthenticated attackers to read and modify the LiteLLM proxy database by sending specially crafted Authorization headers to any LLM API route [2].

Source: Hacker News


How the Critical SQL Injection Vulnerability Works

The flaw lies in the proxy's API key verification step, where LiteLLM mixed caller-supplied key values directly into query text instead of using parameterized queries [2]. An attacker can exploit this pre-auth weakness by targeting routes such as POST /chat/completions with a malicious Authorization: Bearer header [1]. The vulnerability affects versions >=1.81.16 and <1.83.7; the fix, delivered in version 1.83.7-stable released on April 19, 2026, replaces string concatenation with parameterized queries [1][2]. The severity stems from what LiteLLM stores: API keys and credentials for providers like OpenAI, Anthropic, and AWS Bedrock, along with virtual keys, master keys, and environment configuration secrets [1].
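LiteLLM's production database is not SQLite, and the table and column names below are hypothetical stand-ins, but the underlying mistake generalizes. A minimal sketch of why splicing a caller-supplied bearer token into query text is exploitable, and how a parameterized query closes the hole:

```python
import sqlite3

# Hypothetical schema standing in for the proxy's key-verification table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE verification_tokens (token TEXT, user_id TEXT)")
conn.execute("INSERT INTO verification_tokens VALUES ('sk-secret-123', 'admin')")

def lookup_vulnerable(token: str):
    # BAD: the caller-controlled token becomes part of the SQL text --
    # the pattern the CVE-2026-42208 advisory describes.
    query = f"SELECT user_id FROM verification_tokens WHERE token = '{token}'"
    return conn.execute(query).fetchall()

def lookup_parameterized(token: str):
    # GOOD: the token travels as a bound parameter, never as SQL.
    query = "SELECT user_id FROM verification_tokens WHERE token = ?"
    return conn.execute(query, (token,)).fetchall()

# A classic tautology payload, delivered here where an API key is expected
# (in LiteLLM's case, inside the Authorization: Bearer header value):
payload = "x' OR '1'='1"

print(lookup_vulnerable(payload))     # the admin row, despite a bogus key
print(lookup_parameterized(payload))  # no rows
```

The parameterized version is exactly the class of change the 1.83.7 fix applies: the input can no longer alter the query's structure, only its data.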

Targeted Attacks Demonstrate Sophisticated Knowledge

Researchers at Sysdig observed the first exploitation attempt on April 26 at 16:17 UTC, originating from IP address 65.111.27[.]132 [2]. The attacks unfolded in two deliberate phases, with threat actors demonstrating precise knowledge of the database structure. Security researcher Michael Clark noted that attackers went directly to tables containing sensitive data, specifically targeting "litellm_credentials.credential_values" and "litellm_config" while ignoring benign tables such as "litellm_users" and "litellm_team" [1][2]. In the second phase, 20 minutes later, the attacker switched to IP address 65.111.25[.]67, likely for evasion, and executed more precise payloads based on the information gathered in the first phase [1][2].

Implications for AI Infrastructure Security

The blast radius of successful exploitation extends far beyond a typical web application breach. According to Sysdig, a single litellm_credentials row often contains an OpenAI organization key with five-figure monthly spend caps, an Anthropic console key with workspace admin rights, and AWS Bedrock IAM credentials [2]. This positions the attack closer to a cloud-account compromise than a standard SQL injection incident. The 36-hour window aligns with the broader pattern documented by the Zero Day Clock: exploitation no longer requires public proof-of-concept code, because the advisory and the open-source schema provide sufficient information for skilled attackers [2]. Adding to concerns, LiteLLM recently faced a supply-chain attack in which TeamPCP hackers published malicious PyPI packages that deployed infostealers to harvest credentials and secrets [1].

Immediate Actions Required

Sysdig researchers warn that any internet-exposed LiteLLM instance running a vulnerable version should be treated as potentially compromised [1]. Organizations must immediately rotate credentials, including every virtual API key, master key, and provider credential stored in affected instances [1]. The primary mitigation is to upgrade to LiteLLM version 1.83.7 or later [1].

For environments where immediate patching isn't feasible, the maintainers recommend setting 'disable_error_logs: true' under 'general_settings' as a workaround that blocks the path through which malicious inputs reach the vulnerable query [1][2]. This incident underscores a troubling pattern in AI infrastructure: critical pre-auth vulnerabilities in widely trusted software that centralizes cloud-grade credentials, making it a high-value target for sophisticated threat actors [2].
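The workaround lives in the proxy's configuration file. A minimal sketch of where the flag goes, based on the setting named in the advisory (the file name and any surrounding keys are illustrative):

```yaml
# LiteLLM proxy config (e.g. config.yaml)
general_settings:
  # Stop-gap named in the advisory; upgrading to >=1.83.7 is still required.
  disable_error_logs: true
```

Treat this only as a bridge until the upgrade lands, since the vulnerable query itself remains in place.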

© 2026 TheOutpost.AI All rights reserved