2 Sources
2 Sources
[1]
AI Routers Can Steal Credentials and Crypto - Research
University of California researchers have discovered that some third-party AI large language model (LLM) routers can pose security vulnerabilities that can lead to crypto theft. A paper measuring malicious intermediary attacks on the LLM supply chain, published on Thursday by the researchers, revealed four attack vectors, including malicious code injection and extraction of credentials. "26 LLM routers are secretly injecting malicious tool calls and stealing creds," said the paper's co-author, Chaofan Shou, on X. LLM agents increasingly route requests through third-party API intermediaries or routers that aggregate access to providers like OpenAI, Anthropic and Google. However, these routers terminate Internet TLS (Transport Layer Security) connections and have full plaintext access to every message. This means that developers using AI coding agents such as Claude Code to work on smart contracts or wallets could be passing private keys, seed phrases and sensitive data through router infrastructure that has not been screened or secured. The researchers tested 28 paid routers and 400 free routers collected from public communities. Their findings were startling, with nine routers actively injecting malicious code, two deploying adaptive evasion triggers, 17 accessing researcher-owned Amazon Web Services credentials, and one draining Ether (ETH) from a researcher-owned private key. Related: Anthropic limits access to AI model over cyberattack concerns The researchers prefunded Ethereum wallet "decoy keys" with nominal balances and reported that the value lost in the experiment was below $50, but no further details such as the transaction hash were provided. The authors also ran two "poisoning studies" showing that even benign routers become dangerous once they reuse leaked credentials through weak relays. The researchers said it was not easy to detect when a router was malicious. "The boundary between 'credential handling' and 'credential theft' is invisible to the client because routers already read secrets in plaintext as part of normal forwarding." Another unsettling find was what the researchers called "YOLO mode." This is a setting in many AI agent frameworks where the agent executes commands automatically without asking the user to confirm each one. Previously legitimate routers can be silently weaponized without the operator even knowing, while free routers may be stealing credentials while offering cheap API access as the lure, the researchers found. "LLM API routers sit on a critical trust boundary that the ecosystem currently treats as transparent transport." The researchers recommended that developers using AI agents to code should bolster client-side defenses, suggesting never letting private keys or seed phrases transit an AI agent session. The long-term fix is for AI companies to cryptographically sign their responses so the instructions an agent executes can be mathematically verified as coming from the actual model.
[2]
Will AI Steal Your Bitcoin? New Research Reveals 26 Malicious LLM Routers Linked to Crypto Theft
Crypto wallets and credentials are directly exposed to the LLMs. Third-party services used to route requests between AI agents and large language models could expose users to credential theft and crypto losses, according to new research from U.S. academics studying the emerging "LLM router" ecosystem. The paper, published on arXiv by researchers from the University of California, Santa Barbara, University of California, San Diego and others, examined how these intermediary systems handle traffic between users and model providers such as OpenAI, Anthropic and Google. LLM Routers Create New Security Risks The researchers said LLM routers operate as intermediaries with "full plaintext access to every in-flight JSON payload." This gives LLMs visibility into sensitive data such as API keys and prompts. In tests of 28 paid routers and 400 free ones, the study found that one paid router and eight free routers were "injecting malicious code into returned tool calls." The other 17 accessed researcher-controlled cloud credentials. The LLM routers paper warned this creates a critical trust issue. They noted that "a single malicious or compromised router" can rewrite tool calls or extract credentials without detection. It identified four attack types: including "payload injection (AC-1)" and "secret exfiltration (AC-2)." These attacks let attackers alter commands that AI agents execute or silently collect sensitive data. Ethereum Test Demonstrates Theft Risk To assess real-world impact, the researchers tested crypto credential exposure using pre-funded Ethereum wallets. They reported "1 [router] draining ETH from a researcher-owned Ethereum private key," showing that funds could be accessed once sensitive keys pass through compromised routing infrastructure. While the experiment involved minimal funds, it demonstrated how "a single rewritten tool call is sufficient for arbitrary code execution," potentially allowing attackers to manipulate transactions. The authors said this highlights risks for developers using AI agents to interact with crypto wallets, particularly where systems can execute commands automatically. LLM Router and Wider Supply Chain Vulnerabilities The study also examined how compromised credentials can spread across the AI ecosystem. Researchers deliberately leaked a controlled OpenAI API key across Chinese forums, WeChat, and Telegram groups. LLMs quickly picked up the compromised API keys and generated "100 million GPT-5.4 tokens" and several Codex sessions. This showed how stolen access can spread across different services without the original user knowing. In another test, the researchers set up weakly secured relay services, including Sub2API, claude-relay-service and CLIProxyAPI. These systems were quickly targeted, receiving thousands of unauthorized access attempts. They were later used to process about "2B GPT-5.4 / 5.3-codex tokens," exposing "99 credentials across 440 Codex sessions." Many of these sessions were running in "YOLO mode," meaning commands were executed automatically without user approval. The findings suggest that even services that appear safe can become part of an attack chain if they rely on leaked keys. Calls For Stronger Safeguards While some client-side protections can reduce risk, the researchers said current measures do not fully address the issue. They argued that securing AI agents will require stronger guarantees from model providers. Researchers called for "provider-backed response integrity so that the tool call an agent executes can be cryptographically tied to what the upstream model actually produced." Existing defenses, such as policy controls, can limit exposure but remain incomplete, researchers said. "No client-side control available today can prove that a router preserved the upstream provider's response," the paper said. The researchers pointed to the need for industry-wide standards, including cryptographic signing of model outputs, to ensure responses have not been modified in transit. Until then, they cautioned developers to treat third-party routing services as a high-risk component in the AI supply chain. This is particularly important when handling sensitive data or executing automated actions. AI Seen as Potential Driver for Crypto Adoption The growing intersection between AI and blockchain is also fueling expectations that crypto could play a larger role in the next wave of its development. Venture capitalist Marc Andreessen described the convergence as a "grand unification." He argued that autonomous AI systems will require native digital payment infrastructure to operate effectively. Andreessen said, "I think AI is the killer crypto app," on the Latent Space podcast. He pointed to early signs of adoption as advanced users experiment with financially autonomous AI agents. "My friends... have given their [AI agents] bank accounts and credit cards," he said, adding that while adoption is still limited, "it will grow. That's how these things start." Analysts say the implication is that crypto-native payment systems could offer a more efficient alternative to traditional banking rails for machine-to-machine transactions, particularly as AI agents begin to operate independently. Ethereum Positioned as Key Beneficiary Market observers increasingly point to Ethereum as a likely beneficiary of this trend. This is particularly because of its role as the leading programmable blockchain. Some analysts argue Ethereum's role extends beyond that of a digital currency. Motley Fool analyst Dominic Basalto said describing Ethereum as simply a crypto "does it a major disservice," framing it instead as a broader computing platform. He added that Ethereum "continues to be the clear market leader in key blockchain niches." Basalto highlighted Ethereum's dominance across decentralized applications and financial use cases. Meanwhile, Tom Lee, chairman of BitMine, has also identified AI as one of the key drivers of Ethereum's long-term growth. Lee said 2026 could be a "defining year for Ethereum," pointing to the potential for AI agents to use the network for payments and verification. "If Bitcoin gets to $250,000, that would value Ethereum somewhere between $12,000 and $22,000 if it returns to its 2021 ratio," he said.
Share
Share
Copy Link
University of California researchers uncovered alarming security vulnerabilities in third-party LLM routers that developers use to access AI models. Their study of 428 routers found 26 actively injecting malicious code and stealing credentials, with one draining Ethereum from a test wallet. The findings expose critical trust issues in the AI supply chain that could lead to crypto theft.
University of California researchers have exposed a dangerous weakness in the AI development ecosystem that puts cryptocurrency assets at direct risk. A comprehensive study published on arXiv reveals that third-party LLM routers—intermediary services that route requests between AI agents and major model providers like OpenAI, Anthropic, and Google—harbor significant security vulnerabilities that enable crypto theft and credential exfiltration
1
.
Source: CCN.com
The research team, including co-author Chaofan Shou from UC Santa Barbara and UC San Diego, tested 28 paid routers and 400 free routers collected from public communities. Their findings paint a troubling picture: nine routers were actively injecting malicious code, 17 accessed researcher-owned Amazon Web Services credentials, and one router successfully drained Ethereum from a researcher-controlled private key
1
. "26 LLM routers are secretly injecting malicious tool calls and stealing creds," Shou warned on X.The core problem stems from how these API intermediaries handle data. LLM routers terminate Internet TLS (Transport Layer Security) connections and maintain full plaintext access to every message passing through their infrastructure
2
. This architectural design creates what researchers describe as a critical trust boundary that the ecosystem currently treats as transparent transport.Developers using AI coding agents such as Claude Code to work on smart contracts or cryptocurrency wallets could unknowingly pass private keys, seed phrases, and sensitive data through router infrastructure that hasn't been properly screened or secured
1
. The researchers identified four distinct attack vectors, including payload injection and secret exfiltration, which allow attackers to rewrite tool calls or silently collect cryptocurrency credentials without detection2
.
Source: Cointelegraph
To assess the actual impact of these security vulnerabilities, researchers conducted experiments using pre-funded Ethereum wallet "decoy keys" with nominal balances. One router successfully drained ETH from a researcher-owned private key, demonstrating that funds could be accessed once sensitive keys pass through compromised routing infrastructure
1
. While the value lost remained below $50, the experiment proved that "a single rewritten tool call is sufficient for arbitrary code execution," potentially allowing attackers to manipulate transactions at scale .The study also uncovered what researchers termed "YOLO mode"—a setting in many agent frameworks where AI systems execute commands automatically without asking users to confirm each action. This feature amplifies supply chain risks, as compromised routers can execute malicious tool calls without any human intervention or awareness
1
.The researchers conducted two "poisoning studies" revealing how even benign routers become dangerous once they reuse leaked credentials through weak relays. In one experiment, they deliberately leaked a controlled OpenAI API key across Chinese forums, WeChat, and Telegram groups. The compromised AI API keys quickly spread, generating 100 million GPT-5.4 tokens and several Codex sessions .
In another test involving weakly secured relay services including Sub2API, claude-relay-service, and CLIProxyAPI, researchers observed thousands of unauthorized access attempts. These systems were later used to process approximately 2 billion GPT-5.4/5.3-codex tokens, exposing 99 credentials across 440 Codex sessions . This demonstrates how supply chain risks can cascade across the entire AI infrastructure, with stolen access spreading to different services without the original user's knowledge.
Related Stories
One of the most concerning aspects of these security vulnerabilities is the difficulty in detecting when a router has been compromised. "The boundary between 'credential handling' and 'credential theft' is invisible to the client because routers already read secrets in plaintext as part of normal forwarding," the researchers explained
1
. Previously legitimate routers can be silently weaponized without the operator even knowing, while free routers may be stealing cryptocurrency credentials while offering cheap API access as bait.While client-side defenses can reduce exposure, researchers emphasized that current measures don't fully address the underlying trust issues. They recommended that developers never let private keys or seed phrases transit an AI agent session as an immediate precaution
1
.The long-term solution requires cryptographic signing of model outputs by AI companies, enabling response integrity verification. "No client-side control available today can prove that a router preserved the upstream provider's response," the paper stated . Researchers called for provider-backed response integrity so that tool calls executed by agents can be cryptographically tied to what the upstream model actually produced, creating industry-wide standards that ensure responses haven't been modified in transit.
Until such protections are implemented, developers should treat third-party routing services as high-risk components in the AI supply chain, particularly when handling sensitive data or executing automated actions involving cryptocurrency assets.
Summarized by
Navi
[1]
14 May 2025•Technology

02 Dec 2025•Technology

04 Sept 2025•Technology
