4 Sources
[1]
Anthropic's Model Context Protocol includes a critical remote code execution vulnerability -- newly discovered exploit puts 200,000 AI servers at risk
A design choice in the MCP SDKs allows remote code execution across the AI supply chain. Security researchers at OX Security have exposed an architectural vulnerability in Anthropic's Model Context Protocol (MCP) that enables arbitrary remote code execution on any system running a vulnerable implementation. The flaw affects MCP's official SDKs across Python, TypeScript, Java, and Rust, and ripples through a supply chain spanning more than 150 million downloads and up to 200,000 server instances. Surprisingly, Anthropic declined to patch the protocol in response, telling researchers the behavior was "expected." MCP is the open standard Anthropic created in late 2024 to let AI models connect to external tools, databases, and APIs. It was donated to the Linux Foundation's Agentic AI Foundation last December and has since been adopted by OpenAI, Google, and most major AI coding tools. The vulnerability is in how MCP handles local process execution over its STDIO transport interface. User-controlled input can flow directly into command execution without sanitization -- a design choice baked into the reference SDKs -- meaning that every developer building on MCP inherits the exposure by default. OX Security's research team identified four families of exploitation: unauthenticated UI injection in AI frameworks, hardening bypasses in tools like Flowise that were supposed to be protected, zero-click prompt injection in AI coding IDEs, including Windsurf and Cursor, and malicious package distribution through MCP marketplaces. The researchers successfully poisoned nine out of 11 MCP registries with a test payload and confirmed command execution on six live production platforms with paying customers. The research produced at least 10 CVEs rated high or critical. LiteLLM (CVE-2026-30623) and Bisheng (CVE-2026-33224) have been patched, while Windsurf (CVE-2026-30615), which allowed zero-click local code execution, remains in a "reported" state alongside flaws in GPT Researcher, Agent Zero, LangChain-Chatchat, and DocsGPT. OX Security said it repeatedly recommended a protocol-level fix to Anthropic, such as manifest-only execution or a command allowlist in the SDKs, that would have protected downstream users immediately, but Anthropic reportedly declined and didn't object when the researchers said they intended to publish their report. Ironically, the exposure comes less than a week after Anthropic launched Claude Mythos, a frontier model it's hyping up as a tool to find security vulnerabilities in other organizations' software. That irony wasn't lost on OX's researchers, who noted that the findings were "a call to action" for Anthropic to apply that same commitment in its own infrastructure. It also follows the accidental leak of Claude Code's full source code through a public npm package at the end of March, which exposed roughly 500,000 lines of unobfuscated TypeScript before Anthropic pulled the file. MCP is now under the Linux Foundation's governance, but it's still Anthropic that's responsible for maintaining the reference SDKs where the vulnerability originates. Until its STDIO handling is changed at source, project maintainers will have to implement their own input sanitization. Follow Tom's Hardware on Google News, or add us as a preferred source, to get our latest news, analysis, & reviews in your feeds.
[2]
MCP 'design flaw' puts 200k servers at risk: Researcher
A design flaw - or expected behavior based on a bad design choice, depending on who is telling the story - baked into Anthropic's official Model Context Protocol (MCP) puts as many as 200,000 servers at risk of complete takeover, according to security researchers. The Ox research team says they "repeatedly" asked Anthropic to patch the root issue, and were repeatedly told the protocol works just fine, thank you, despite 10 (so far) high- and critical-severity CVEs issued for individual open source tools and AI agents that use MCP. A root patch, according to Ox, could have reduced risk across software packages totaling more than 150 million downloads and protected millions of downstream users. Anthropic "declined to modify the protocol's architecture, citing the behavior as 'expected,'" Ox researchers Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar said in a blog about their research, which began in November 2025 and included more than 30 responsible disclosure processes. A week after their initial report to Anthropic, the AI vendor quietly released an updated security policy - as seems to be the pattern when faced with AI bugs. The updated guidance says MCP adapters, specifically STDIO ones, should be used with caution, the team wrote in a subsequent 30-page paper [PDF]. "This change didn't fix anything," they added. Anthropic did not respond to The Register's inquiries for this story. According to the security sleuths, the root issue lies in MCP, an open source protocol originally developed by Anthropic that LLMs, AI applications, and agents use to connect to external data, systems, and one another. It works across programming languages - which means any developer using Anthropic's official MCP software development kit across any supported language, including Python, TypeScript, Java, and Rust, inherits this vulnerability. MCP uses STDIO (standard input/output) as a local transport mechanism for an AI application to spawn an MCP server as a subprocess. "But in practice it actually lets anyone run any arbitrary OS command, if the command successfully creates an STDIO server it will return the handle, but when given a different command, it returns an error after the command is executed," the Ox researchers wrote. Abusing this logic can lead to four different types of vulnerabilities. The first type of vulnerability, unauthenticated and authenticated command injection, allows an attacker to enter user-controlled commands that will run directly on the server without authentication or sanitization. This can lead to total system compromise, and any AI framework with a publicly facing UI is vulnerable, we're told. Vulnerable projects include all versions of LangFlow, IBM's open source low-code framework for building AI applications and agents, according to the researchers. They say they disclosed the issue to LangFlow on January 11, and no CVE has been issued. It also affects GPT Researcher, an open source AI agent designed for deep research, and while it doesn't yet have a patch, this one does have a CVE tracker (CVE-2025-65720). The second attack vector, unauthenticated command injection with hardening bypass, allows miscreants to bypass protections and user input sanitization implemented by developers to run commands directly on the server. Both Upsonic (CVE-2026-30625) and Flowise (GHSA-c9gw-hvqq-f33r) have hardened against command injection by allowing only certain commands to run, such as "python," "npm," and "npx." This, in theory, should have made it impossible to directly send the command through the "command" parameter. And yet? "We were able to bypass this behavior by indirectly injecting the command via the allowed command's arguments, for example -'npx -c <command>,'" the Ox team wrote. The third type of vulnerability allows zero-click prompt injection across AI integrated development environments (IDEs) and coding assistants such as Windsurf, Claude Code, Cursor, Gemini-CLI, and GitHub Copilot. However, the only issued CVE that addresses this class of vuln is for Windsurf (CVE-2026-30615). It is also the only true zero-click vuln in that the user's prompt directly influences the MCP JSON configuration with no user interaction. All of the other IDEs and vendors - including Google, Microsoft, and Anthropic - said this was a known issue, or not a valid security vulnerability because it requires explicit user permission to modify the file. Finally, the fourth vulnerability family can be delivered through MCP marketplaces, and the threat hunters say they "successfully poisoned" nine out of 11 of these marketplaces - but using a proof-of-concept MCP that runs a command generating an empty file, not malware. "The marketplaces that accepted our submission include platforms with hundreds of thousands of monthly visitors," the security shop wrote. "A single malicious MCP entry in any of these directories could be installed by thousands of developers before detection - each installation giving an attacker arbitrary command execution on the developer's machine." Ox argues that Anthropic has the ability and responsibility "to make MCP secure by default." "One architectural change at the protocol level would have protected every downstream project, every developer, and every end user who relied on MCP today," the researchers wrote. "That's what it means to own the stack." ®
[3]
Anthropic MCP Design Vulnerability Enables RCE, Threatening AI Supply Chain
Cybersecurity researchers have discovered a critical "by design" weakness in the Model Context Protocol's (MCP) architecture that could pave the way for remote code execution and have a cascading effect on the artificial intelligence (AI) supply chain. "This flaw enables Arbitrary Command Execution (RCE) on any system running a vulnerable MCP implementation, granting attackers direct access to sensitive user data, internal databases, API keys, and chat histories," OX Security researchers Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar said in an analysis published last week. The cybersecurity company said the systemic vulnerability is baked into Anthropic's official MCP software development kit (SDK) across any supported language, including Python, TypeScript, Java, and Rust. In all, it affects more than 7,000 publicly accessible servers and software packages totaling more than 150 million downloads. At issue are unsafe defaults in how MCP configuration works over the STDIO (standard input/output) transport interface, resulting in the discovery of 10 vulnerabilities spanning popular projects like LiteLLM, LangChain, LangFlow, Flowise, LettaAI, and LangBot - * CVE-2025-65720 (GPT Researcher) * CVE-2026-30623 (LiteLLM) - Patched * CVE-2026-30624 (Agent Zero) * CVE-2026-30618 (Fay Framework) * CVE-2026-33224 (Bisheng) - Patched * CVE-2026-30617 (Langchain-Chatchat) * CVE-2026-33224 (Jaaz) * CVE-2026-30625 (Upsonic) * CVE-2026-30615 (Windsurf) * CVE-2026-26015 (DocsGPT) - Patched * CVE-2026-40933 (Flowise) These vulnerabilities fall under four broad categories, effectively triggering remote command execution on the server - * Unauthenticated and authenticated command injection via MCP STDIO * Unauthenticated command injection via direct STDIO configuration with hardening bypass * Unauthenticated command injection via MCP configuration edit through zero-click prompt injection * Unauthenticated command injection through MCP marketplaces via network requests, triggering hidden STDIO configurations "Anthropic's Model Context Protocol gives a direct configuration-to-command execution via their STDIO interface on all of their implementations, regardless of programming language," the researchers explained. "As this code was meant to be used in order to start a local STDIO server, and give a handle of the STDIO back to the LLM. But in practice it actually lets anyone run any arbitrary OS command, if the command successfully creates an STDIO server it will return the handle, but when given a different command, it returns an error after the command is executed." Interestingly, vulnerabilities based on the same core issue have been reported independently over the past year. They include CVE-2025-49596 (MCP Inspector), LibreChat (CVE-2026-22252), WeKnora (CVE-2026-22688), @akoskm/create-mcp-server-stdio (CVE-2025-54994), and Cursor (CVE-2025-54136). Anthropic, however, has declined to modify the protocol's architecture, citing the behavior as "expected. While some of the vendors have issued patches, the shortcoming remains unaddressed in Anthropic's MCP reference implementation, causing developers to inherit the code execution risks. The findings highlight how AI-powered integrations can inadvertently expand the attack surface. To counter the threat, it's advised to block public IP access to sensitive services, monitor MCP tool invocations, run MCP-enabled services in a sandbox, treat external MCP configuration input as untrusted, and only install MCP servers from verified sources. "What made this a supply chain event rather than a single CVE is that one architectural decision, made once, propagated silently into every language, every downstream library, and every project that trusted the protocol to be what it appeared to be," OX Security said. "Shifting responsibility to implementers does not transfer the risk. It just obscures who created it."
[4]
'This is not a traditional coding error': Experts flag potentially critical security issues at the heart of Anthropic's MCP, exposes 150 million downloads and thousands of servers to complete takeover
* Ox researchers warn Anthropic's Model Context Protocol has systemic RCE flaw * Vulnerability baked into MCP SDKs across Python, TypeScript, Java, Rust * 200,000+ instances exposed; Anthropic says behavior is "expected" Security researchers Ox have claimed Anthropic's Model Context Protocol (MCP) contains a "critical, systemic vulnerability" which puts hundreds of thousands of instances at risk of remote code execution (RCE). Anthropic, on the other hand, allegedly said the system works as intended. MCP is a standard that lets AI tools securely connect to external data sources and apps. It is a vital component of any model because without it, it can only rely on the data it was trained on. The standard is used by both AI companies and developers building AI tools, and it is seen in both OpenAI and DeepMind products, as well as Anthropic's own Claude apps. Millions are affected In its findings, Ox researchers Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar, said that what they found in MCP was not a "traditional coding error", but an "architectural design decision baked into Anthropic's official MCP SDKs across every supported programming language, including Python, TypeScript, Java, and Rust." "Any developer building on the Anthropic MCP foundation unknowingly inherits this exposure," they warned. Ox said the flaw can be triggered in different ways, from unauthenticated UI injection, to hardening bypasses in "protected environments"; and from zero-click prompt injection in leading AI IDEs, to malicious marketplace distributions. They claim to have successfully executed commands on six live production platforms and identified critical vulnerabilities in "industry staples like LiteLLM, LangChain, and IBM's LangFlow." The researchers said more than 7,000 publicly accessible servers and up to 200,000 instances are now vulnerable. So far, they've issued 10 CVEs and helped remedy the bugs. "However, the root cause remains unaddressed at the protocol level." Ox also said it reached out to Anthropic and recommended root patches, to which the company said the MCP's behavior is "expected". Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds.
Share
Copy Link
Security researchers at OX Security discovered a critical architectural vulnerability in Anthropic's Model Context Protocol that enables remote code execution across 200,000 server instances and affects over 150 million downloads. The flaw is baked into MCP's official SDKs across Python, TypeScript, Java, and Rust. Despite repeated requests for a protocol-level fix, Anthropic declined to patch the issue, calling the behavior "expected."
Cybersecurity researchers at OX Security have exposed a critical architectural vulnerability in Anthropic's Model Context Protocol that puts up to 200,000 AI servers at risk of complete takeover through remote code execution
1
. The security vulnerability affects MCP's official SDKs across Python, TypeScript, Java, and Rust, creating a ripple effect throughout the AI supply chain that spans more than 150 million downloads2
. What makes this discovery particularly concerning is that Anthropic has declined to implement a protocol-level fix, telling researchers the behavior was "expected"3
.
Source: Hacker News
The Model Context Protocol is an open standard Anthropic created in late 2024 to enable AI models to connect to external tools, databases, and APIs. It was donated to the Linux Foundation's Agentic AI Foundation in December and has since been adopted by OpenAI, Google, and most major AI coding tools
1
. This widespread adoption means the vulnerability's impact extends far beyond Anthropic's own products.The critical design flaw lies in how MCP handles local process execution over its STDIO transport interface. User-controlled input can flow directly into command execution without input sanitization—an architectural design decision baked into the reference MCP SDKs
4
. According to OX Security researchers Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar, this means every developer building on MCP inherits the exposure by default3
.The researchers explained that while the STDIO code was intended to start a local server and return a handle to the LLM, "in practice it actually lets anyone run any arbitrary OS command." If the command successfully creates an STDIO server, it returns the handle, but when given a different command, it returns an error after the command is executed
2
.
Source: TechRadar
OX Security's research team identified four families of exploitation that demonstrate the breadth of the attack surface. The first involves unauthenticated and authenticated command injection, allowing attackers to enter user-controlled commands that run directly on the server without authentication or sanitization, potentially leading to total system compromise. Vulnerable projects include all versions of LangFlow, IBM's open-source low-code framework for building AI applications, and GPT Researcher, an open-source AI agent designed for deep research
2
.The second attack vector enables hardening bypasses in tools like Flowise and Upsonic that implemented protections against command injection. Researchers successfully bypassed these safeguards by indirectly injecting commands via allowed command arguments, such as "npx -c 2 1
The fourth exploitation method involves malicious package distribution through MCP marketplaces. The researchers successfully poisoned nine out of 11 MCP registries with a test payload and confirmed command execution on six live production platforms with paying customers
1
.Related Stories
The research produced at least 10 CVEs rated high or critical severity. LiteLLM (CVE-2026-30623) and Bisheng (CVE-2026-33224) have been patched, while Windsurf (CVE-2026-30615), which allowed zero-click local code execution, remains in a "reported" state alongside flaws in GPT Researcher, Agent Zero, LangChain-Chatchat, and DocsGPT
1
. Other affected projects include Flowise (CVE-2026-40933), Upsonic (CVE-2026-30625), and Fay Framework (CVE-2026-30618)3
.OX Security said it repeatedly recommended a protocol-level fix to Anthropic, such as manifest-only execution or a command allowlist in the MCP SDKs, that would have protected downstream users immediately
1
. However, Anthropic declined and didn't object when the researchers said they intended to publish their report. A week after the initial report to Anthropic, the AI vendor quietly released an updated security policy advising that MCP adapters, specifically STDIO ones, should be used with caution. "This change didn't fix anything," the researchers noted2
.The exposure comes less than a week after Anthropic launched Claude Mythos, a frontier model promoted as a tool to find security vulnerabilities in other organizations' software. OX researchers noted that the findings were "a call to action" for Anthropic to apply that same commitment to its own infrastructure
1
. The incident also follows the accidental leak of Claude Code's full source code through a public npm package at the end of March, which exposed roughly 500,000 lines of unobfuscated TypeScript before Anthropic pulled the file1
.
Source: Tom's Hardware
While MCP is now under the Linux Foundation's governance, Anthropic remains responsible for maintaining the reference MCP SDKs where the vulnerability originates. Until its STDIO handling is changed at source, project maintainers will have to implement their own input sanitization
1
. The researchers emphasized that "shifting responsibility to implementers does not transfer the risk. It just obscures who created it"3
. They recommend organizations block public IP access to sensitive services, monitor MCP tool invocations, run MCP-enabled services in a sandbox, treat external MCP configuration input as untrusted, and only install MCP servers from verified sources3
.Summarized by
Navi
[1]
[2]
20 Jan 2026•Technology

11 Jul 2025•Technology

01 May 2025•Technology

1
Technology

2
Policy and Regulation

3
Policy and Regulation
