3 Sources
3 Sources
[1]
MCP 'design flaw' puts 200k servers at risk: Researcher
A design flaw - or expected behavior based on a bad design choice, depending on who is telling the story - baked into Anthropic's official Model Context Protocol (MCP) puts as many as 200,000 servers at risk of complete takeover, according to security researchers. The Ox research team says they "repeatedly" asked Anthropic to patch the root issue, and were repeatedly told the protocol works just fine, thank you, despite 10 (so far) high- and critical-severity CVEs issued for individual open source tools and AI agents that use MCP. A root patch, according to Ox, could have reduced risk across software packages totaling more than 150 million downloads and protected millions of downstream users. Anthropic "declined to modify the protocol's architecture, citing the behavior as 'expected,'" Ox researchers Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar said in a blog about their research, which began in November 2025 and included more than 30 responsible disclosure processes. A week after their initial report to Anthropic, the AI vendor quietly released an updated security policy - as seems to be the pattern when faced with AI bugs. The updated guidance says MCP adapters, specifically STDIO ones, should be used with caution, the team wrote in a subsequent 30-page paper [PDF]. "This change didn't fix anything," they added. Anthropic did not respond to The Register's inquiries for this story. According to the security sleuths, the root issue lies in MCP, an open source protocol originally developed by Anthropic that LLMs, AI applications, and agents use to connect to external data, systems, and one another. It works across programming languages - which means any developer using Anthropic's official MCP software development kit across any supported language, including Python, TypeScript, Java, and Rust, inherits this vulnerability. MCP uses STDIO (standard input/output) as a local transport mechanism for an AI application to spawn an MCP server as a subprocess. "But in practice it actually lets anyone run any arbitrary OS command, if the command successfully creates an STDIO server it will return the handle, but when given a different command, it returns an error after the command is executed," the Ox researchers wrote. Abusing this logic can lead to four different types of vulnerabilities. The first type of vulnerability, unauthenticated and authenticated command injection, allows an attacker to enter user-controlled commands that will run directly on the server without authentication or sanitization. This can lead to total system compromise, and any AI framework with a publicly facing UI is vulnerable, we're told. Vulnerable projects include all versions of LangFlow, IBM's open source low-code framework for building AI applications and agents, according to the researchers. They say they disclosed the issue to LangFlow on January 11, and no CVE has been issued. It also affects GPT Researcher, an open source AI agent designed for deep research, and while it doesn't yet have a patch, this one does have a CVE tracker (CVE-2025-65720). The second attack vector, unauthenticated command injection with hardening bypass, allows miscreants to bypass protections and user input sanitization implemented by developers to run commands directly on the server. Both Upsonic (CVE-2026-30625) and Flowise (GHSA-c9gw-hvqq-f33r) have hardened against command injection by allowing only certain commands to run, such as "python," "npm," and "npx." This, in theory, should have made it impossible to directly send the command through the "command" parameter. And yet? "We were able to bypass this behavior by indirectly injecting the command via the allowed command's arguments, for example -'npx -c <command>,'" the Ox team wrote. The third type of vulnerability allows zero-click prompt injection across AI integrated development environments (IDEs) and coding assistants such as Windsurf, Claude Code, Cursor, Gemini-CLI, and GitHub Copilot. However, the only issued CVE that addresses this class of vuln is for Windsurf (CVE-2026-30615). It is also the only true zero-click vuln in that the user's prompt directly influences the MCP JSON configuration with no user interaction. All of the other IDEs and vendors - including Google, Microsoft, and Anthropic - said this was a known issue, or not a valid security vulnerability because it requires explicit user permission to modify the file. Finally, the fourth vulnerability family can be delivered through MCP marketplaces, and the threat hunters say they "successfully poisoned" nine out of 11 of these marketplaces - but using a proof-of-concept MCP that runs a command generating an empty file, not malware. "The marketplaces that accepted our submission include platforms with hundreds of thousands of monthly visitors," the security shop wrote. "A single malicious MCP entry in any of these directories could be installed by thousands of developers before detection - each installation giving an attacker arbitrary command execution on the developer's machine." Ox argues that Anthropic has the ability and responsibility "to make MCP secure by default." "One architectural change at the protocol level would have protected every downstream project, every developer, and every end user who relied on MCP today," the researchers wrote. "That's what it means to own the stack." ®
[2]
Anthropic MCP Design Vulnerability Enables RCE, Threatening AI Supply Chain
Cybersecurity researchers have discovered a critical "by design" weakness in the Model Context Protocol's (MCP) architecture that could pave the way for remote code execution and have a cascading effect on the artificial intelligence (AI) supply chain. "This flaw enables Arbitrary Command Execution (RCE) on any system running a vulnerable MCP implementation, granting attackers direct access to sensitive user data, internal databases, API keys, and chat histories," OX Security researchers Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar said in an analysis published last week. The cybersecurity company said the systemic vulnerability is baked into Anthropic's official MCP software development kit (SDK) across any supported language, including Python, TypeScript, Java, and Rust. In all, it affects more than 7,000 publicly accessible servers and software packages totaling more than 150 million downloads. At issue are unsafe defaults in how MCP configuration works over the STDIO (standard input/output) transport interface, resulting in the discovery of 10 vulnerabilities spanning popular projects like LiteLLM, LangChain, LangFlow, Flowise, LettaAI, and LangBot - * CVE-2025-65720 (GPT Researcher) * CVE-2026-30623 (LiteLLM) - Patched * CVE-2026-30624 (Agent Zero) * CVE-2026-30618 (Fay Framework) * CVE-2026-33224 (Bisheng) - Patched * CVE-2026-30617 (Langchain-Chatchat) * CVE-2026-33224 (Jaaz) * CVE-2026-30625 (Upsonic) * CVE-2026-30615 (Windsurf) * CVE-2026-26015 (DocsGPT) - Patched * CVE-2026-40933 (Flowise) These vulnerabilities fall under four broad categories, effectively triggering remote command execution on the server - * Unauthenticated and authenticated command injection via MCP STDIO * Unauthenticated command injection via direct STDIO configuration with hardening bypass * Unauthenticated command injection via MCP configuration edit through zero-click prompt injection * Unauthenticated command injection through MCP marketplaces via network requests, triggering hidden STDIO configurations "Anthropic's Model Context Protocol gives a direct configuration-to-command execution via their STDIO interface on all of their implementations, regardless of programming language," the researchers explained. "As this code was meant to be used in order to start a local STDIO server, and give a handle of the STDIO back to the LLM. But in practice it actually lets anyone run any arbitrary OS command, if the command successfully creates an STDIO server it will return the handle, but when given a different command, it returns an error after the command is executed." Interestingly, vulnerabilities based on the same core issue have been reported independently over the past year. They include CVE-2025-49596 (MCP Inspector), LibreChat (CVE-2026-22252), WeKnora (CVE-2026-22688), @akoskm/create-mcp-server-stdio (CVE-2025-54994), and Cursor (CVE-2025-54136). Anthropic, however, has declined to modify the protocol's architecture, citing the behavior as "expected. While some of the vendors have issued patches, the shortcoming remains unaddressed in Anthropic's MCP reference implementation, causing developers to inherit the code execution risks. The findings highlight how AI-powered integrations can inadvertently expand the attack surface. To counter the threat, it's advised to block public IP access to sensitive services, monitor MCP tool invocations, run MCP-enabled services in a sandbox, treat external MCP configuration input as untrusted, and only install MCP servers from verified sources. "What made this a supply chain event rather than a single CVE is that one architectural decision, made once, propagated silently into every language, every downstream library, and every project that trusted the protocol to be what it appeared to be," OX Security said. "Shifting responsibility to implementers does not transfer the risk. It just obscures who created it."
[3]
'This is not a traditional coding error': Experts flag potentially critical security issues at the heart of Anthropic's MCP, exposes 150 million downloads and thousands of servers to complete takeover
* Ox researchers warn Anthropic's Model Context Protocol has systemic RCE flaw * Vulnerability baked into MCP SDKs across Python, TypeScript, Java, Rust * 200,000+ instances exposed; Anthropic says behavior is "expected" Security researchers Ox have claimed Anthropic's Model Context Protocol (MCP) contains a "critical, systemic vulnerability" which puts hundreds of thousands of instances at risk of remote code execution (RCE). Anthropic, on the other hand, allegedly said the system works as intended. MCP is a standard that lets AI tools securely connect to external data sources and apps. It is a vital component of any model because without it, it can only rely on the data it was trained on. The standard is used by both AI companies and developers building AI tools, and it is seen in both OpenAI and DeepMind products, as well as Anthropic's own Claude apps. Millions are affected In its findings, Ox researchers Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar, said that what they found in MCP was not a "traditional coding error", but an "architectural design decision baked into Anthropic's official MCP SDKs across every supported programming language, including Python, TypeScript, Java, and Rust." "Any developer building on the Anthropic MCP foundation unknowingly inherits this exposure," they warned. Ox said the flaw can be triggered in different ways, from unauthenticated UI injection, to hardening bypasses in "protected environments"; and from zero-click prompt injection in leading AI IDEs, to malicious marketplace distributions. They claim to have successfully executed commands on six live production platforms and identified critical vulnerabilities in "industry staples like LiteLLM, LangChain, and IBM's LangFlow." The researchers said more than 7,000 publicly accessible servers and up to 200,000 instances are now vulnerable. So far, they've issued 10 CVEs and helped remedy the bugs. "However, the root cause remains unaddressed at the protocol level." Ox also said it reached out to Anthropic and recommended root patches, to which the company said the MCP's behavior is "expected". Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds.
Share
Share
Copy Link
Security researchers from Ox have identified a critical architectural design flaw in Anthropic's Model Context Protocol that exposes over 200,000 servers to remote code execution attacks. The vulnerability affects software packages totaling more than 150 million downloads across Python, TypeScript, Java, and Rust implementations. Despite repeated requests to patch the root issue, Anthropic maintains the protocol's behavior is 'expected.'
A fundamental security vulnerability in Anthropic's Model Context Protocol (MCP) has put as many as 200,000 servers at risk of complete takeover, according to Ox Security researchers. The issue stems from what the research team describes as an MCP design flaw rather than a simple coding error—an architectural design decision baked into the official SDK across Python, TypeScript, Java, and Rust
1
. The vulnerability enables remote code execution (RCE) on any system running a vulnerable MCP implementation, granting attackers direct access to sensitive user data, internal databases, API keys, and chat histories2
.
Source: Hacker News
The Ox research team, comprising Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar, says they repeatedly asked Anthropic to patch the root issue but were told the protocol works as intended. The company declined to modify the protocol's architecture, citing the behavior as 'expected'
1
. This decision has significant implications for the AI supply chain, as any developer using Anthropic's official MCP software development kit inherits this vulnerability across any supported language2
.The root cause lies in how MCP uses STDIO (standard input/output) as a local transport mechanism for AI applications to spawn an MCP server as a subprocess. While this code was designed to start a local STDIO server and return a handle to the LLM, in practice it allows anyone to run arbitrary OS commands
1
. When given a command that successfully creates an STDIO server, it returns the handle, but when given a different command, it returns an error after the command is executed2
.This architectural flaw affects more than 7,000 publicly accessible servers and software packages totaling more than 150 million downloads
2
. The vulnerability has cascading effects across the AI ecosystem because MCP serves as a standard that lets AI tools connect to external data sources and applications—a vital component used by OpenAI, DeepMind, and Anthropic's own Claude apps3
.Ox researchers identified four distinct vulnerability types that abuse this logic. The first involves unauthenticated and authenticated command injection, allowing attackers to enter user-controlled commands that run directly on the server without authentication or sanitization, potentially leading to complete server takeover
1
. Vulnerable projects include all versions of LangFlow, IBM's open source low-code framework for building AI applications, and GPT Researcher, which has been assigned CVE-2025-657201
.The second attack vector enables unauthenticated command injection with hardening bypass, allowing attackers to circumvent protections implemented by developers. Both Upsonic (CVE-2026-30625) and Flowise (CVE-2026-40933) had hardened against command injection by allowing only certain commands like 'python,' 'npm,' and 'npx,' but researchers bypassed this by indirectly injecting commands via allowed command arguments
1
. The third vulnerability type allows zero-click prompt injection across AI integrated development environments including Windsurf (CVE-2026-30615), Claude Code, Cursor, Gemini-CLI, and GitHub Copilot, though vendors including Google, Microsoft, and Anthropic said this was either a known issue or not a valid security vulnerability1
.Related Stories
The research began in November 2025 and included more than 30 responsible disclosure processes, resulting in 10 high- and critical-severity CVE designations so far
1
. Affected projects include LiteLLM (CVE-2026-30623), Agent Zero (CVE-2026-30624), Fay Framework (CVE-2026-30618), Bisheng (CVE-2026-33224), Langchain-Chatchat (CVE-2026-30617), and DocsGPT (CVE-2026-26015), among others2
. While some vendors have issued patches, the shortcoming remains unaddressed in Anthropic's MCP reference implementation, causing developers to continue inheriting the code execution risks2
.Ox researchers also successfully poisoned nine out of 11 MCP marketplaces with a proof-of-concept that runs a command generating an empty file. The marketplaces that accepted their submission include platforms with hundreds of thousands of monthly visitors, meaning a single malicious MCP entry could be installed by thousands of developers before detection
1
. A week after the initial report to Anthropic, the AI vendor quietly released an updated security policy stating that MCP adapters, specifically STDIO ones, should be used with caution—a change the research team says 'didn't fix anything'1
.What makes this a supply chain event rather than a single CVE is that one architectural decision, made once, propagated silently into every language, every downstream library, and every project that trusted the protocol. Ox Security emphasizes that shifting responsibility to implementers does not transfer the risk—it just obscures who created it
2
. To counter the threat, experts recommend blocking public IP access to sensitive services, monitoring MCP tool invocations, running MCP-enabled services in a sandbox, treating external MCP configuration input as untrusted, and only installing MCP servers from verified sources2
. Anthropic did not respond to inquiries about the issue1
.
Source: TechRadar
Summarized by
Navi
[1]
20 Jan 2026•Technology

01 May 2025•Technology

11 Jul 2025•Technology

1
Policy and Regulation

2
Policy and Regulation

3
Technology
