Researchers expose 26 malicious LLM routers stealing crypto credentials through hidden attacks

Reviewed byNidhi Govil

2 Sources

Share

University of California researchers uncovered alarming security vulnerabilities in third-party LLM routers that developers use to access AI models. Their study of 428 routers found 26 actively injecting malicious code and stealing credentials, with one draining Ethereum from a test wallet. The findings expose critical trust issues in the AI supply chain that could lead to crypto theft.

Third-Party LLM Routers Create Critical Security Vulnerabilities

University of California researchers have exposed a dangerous weakness in the AI development ecosystem that puts cryptocurrency assets at direct risk. A comprehensive study published on arXiv reveals that third-party LLM routers—intermediary services that route requests between AI agents and major model providers like OpenAI, Anthropic, and Google—harbor significant security vulnerabilities that enable crypto theft and credential exfiltration

1

.

Source: CCN.com

Source: CCN.com

The research team, including co-author Chaofan Shou from UC Santa Barbara and UC San Diego, tested 28 paid routers and 400 free routers collected from public communities. Their findings paint a troubling picture: nine routers were actively injecting malicious code, 17 accessed researcher-owned Amazon Web Services credentials, and one router successfully drained Ethereum from a researcher-controlled private key

1

. "26 LLM routers are secretly injecting malicious tool calls and stealing creds," Shou warned on X.

How Malicious Code Injection Exploits AI Coding Agents

The core problem stems from how these API intermediaries handle data. LLM routers terminate Internet TLS (Transport Layer Security) connections and maintain full plaintext access to every message passing through their infrastructure

2

. This architectural design creates what researchers describe as a critical trust boundary that the ecosystem currently treats as transparent transport.

Developers using AI coding agents such as Claude Code to work on smart contracts or cryptocurrency wallets could unknowingly pass private keys, seed phrases, and sensitive data through router infrastructure that hasn't been properly screened or secured

1

. The researchers identified four distinct attack vectors, including payload injection and secret exfiltration, which allow attackers to rewrite tool calls or silently collect cryptocurrency credentials without detection

2

.

Source: Cointelegraph

Source: Cointelegraph

Real-World Ethereum Test Demonstrates Theft Capabilities

To assess the actual impact of these security vulnerabilities, researchers conducted experiments using pre-funded Ethereum wallet "decoy keys" with nominal balances. One router successfully drained ETH from a researcher-owned private key, demonstrating that funds could be accessed once sensitive keys pass through compromised routing infrastructure

1

. While the value lost remained below $50, the experiment proved that "a single rewritten tool call is sufficient for arbitrary code execution," potentially allowing attackers to manipulate transactions at scale .

The study also uncovered what researchers termed "YOLO mode"—a setting in many agent frameworks where AI systems execute commands automatically without asking users to confirm each action. This feature amplifies supply chain risks, as compromised routers can execute malicious tool calls without any human intervention or awareness

1

.

Supply Chain Attack Spreads Across AI Ecosystem

The researchers conducted two "poisoning studies" revealing how even benign routers become dangerous once they reuse leaked credentials through weak relays. In one experiment, they deliberately leaked a controlled OpenAI API key across Chinese forums, WeChat, and Telegram groups. The compromised AI API keys quickly spread, generating 100 million GPT-5.4 tokens and several Codex sessions .

In another test involving weakly secured relay services including Sub2API, claude-relay-service, and CLIProxyAPI, researchers observed thousands of unauthorized access attempts. These systems were later used to process approximately 2 billion GPT-5.4/5.3-codex tokens, exposing 99 credentials across 440 Codex sessions . This demonstrates how supply chain risks can cascade across the entire AI infrastructure, with stolen access spreading to different services without the original user's knowledge.

Detection Challenges and Invisible Threat Boundaries

One of the most concerning aspects of these security vulnerabilities is the difficulty in detecting when a router has been compromised. "The boundary between 'credential handling' and 'credential theft' is invisible to the client because routers already read secrets in plaintext as part of normal forwarding," the researchers explained

1

. Previously legitimate routers can be silently weaponized without the operator even knowing, while free routers may be stealing cryptocurrency credentials while offering cheap API access as bait.

Cryptographic Signing Needed for Long-Term Protection

While client-side defenses can reduce exposure, researchers emphasized that current measures don't fully address the underlying trust issues. They recommended that developers never let private keys or seed phrases transit an AI agent session as an immediate precaution

1

.

The long-term solution requires cryptographic signing of model outputs by AI companies, enabling response integrity verification. "No client-side control available today can prove that a router preserved the upstream provider's response," the paper stated . Researchers called for provider-backed response integrity so that tool calls executed by agents can be cryptographically tied to what the upstream model actually produced, creating industry-wide standards that ensure responses haven't been modified in transit.

Until such protections are implemented, developers should treat third-party routing services as high-risk components in the AI supply chain, particularly when handling sensitive data or executing automated actions involving cryptocurrency assets.

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2026 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo