4 Sources
4 Sources
[1]
Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration
Cybersecurity researchers have disclosed multiple security vulnerabilities in Anthropic's Claude Code, an artificial intelligence (AI)-powered coding assistant, that could result in remote code execution and theft of API credentials. "The vulnerabilities exploit various configuration mechanisms, including Hooks, Model Context Protocol (MCP) servers, and environment variables - executing arbitrary shell commands and exfiltrating Anthropic API keys when users clone and open untrusted repositories," Check Point Research said in a report shared with The Hacker News. The identified shortcomings fall under three broad categories - "If a user started Claude Code in an attacker-controller repository, and the repository included a settings file that set ANTHROPIC_BASE_URL to an attacker-controlled endpoint, Claude Code would issue API requests before showing the trust prompt, including potentially leaking the user's API keys," Anthropic said in an advisory for CVE-2026-21852. In other words, simply opening a crafted repository is enough to exfiltrate a developer's active API key, redirect authenticated API traffic to external infrastructure, and capture credentials. This, in turn, can permit the attacker to burrow deeper into the victim's AI infrastructure. This could potentially involve accessing shared project files, modifying/deleting cloud-stored data, uploading malicious content, and even generating unexpected API costs. Successful exploitation of the first vulnerability could trigger stealthy execution on a developer's machine without any additional interaction beyond launching the project. CVE-2025-59536 also achieves a similar goal, the main difference being that repository-defined configurations defined through .mcp.json and claude/settings.json file could be exploited by an attacker to override explicit user approval prior to interacting with external tools and services through the Model Context Protocol (MCP). This is achieved by setting the "enableAllProjectMcpServers" option to true. "As AI-powered tools gain the ability to execute commands, initialize external integrations, and initiate network communication autonomously, configuration files effectively become part of the execution layer," Check Point said. "What was once considered operational context now directly influences system behavior." "This fundamentally alters the threat model. The risk is no longer limited to running untrusted code - it now extends to opening untrusted projects. In AI-driven development environments, the supply chain begins not only with source code, but with the automation layers surrounding it."
[2]
Security experts flag multiple issues in Claude Code, warning, 'As AI integration deepens, security controls must evolve to match the new trust boundaries'
An AI assistant can quickly turn into a malicious insider, experts warn * Check Point found three vulnerabilities in Claude Code AI coding assistant * Flaws enabled RCE and API key theft * Issues exploited via malicious repositories; all patched before disclosure If you're looking at deeply integrating AI tools into your workflows, be extra careful, as some popular AI models come with severe vulnerabilities which can turn a trusted digital assistant into a malicious insider. Researchers from Check Point (CPR) have detailed three vulnerabilities in Claude Code which can be used to remotely execute malicious code (RCE), or steal sensitive data such as API credentials, from unsuspecting victims. Of the three flaws, two have been labeled: CVE-2025-59536 (8.7/10), and CVE-2026-21852 (5.3/10). The third one that hasn't been assigned a CVE yet, is a code injection vulnerability. Reassessing traditional security assumptions Claude Code is an advanced AI‑powered coding assistant that lets developers work with AI directly inside their coding environment (like their terminal or IDE). The assistant can do all sorts of things, including executing tasks across entire codebases, all based on natural language instructions. CPR says an attacker could create a malicious repository that includes specially crafted project-level configuration files, and share it with a developer (for example, via a phishing email, or a fake job assignment). If the developer clones the repository to their local machine, and opens the project directory in Claude Code, the tool will automatically load it, allowing the attacker to abuse built-in mechanisms and trigger hidden shell commands. As a result, user consent prompts are overridden, and external tools and services initialized before being given explicit approval. Simply put, the attacker can be given remote code execution capabilities or can exfiltrate Anthropic API keys before the user confirms trust in the project. "AI-powered coding tools are rapidly becoming part of enterprise development workflows. Their productivity benefits are significant, but so is the need to reassess traditional security assumptions," CPR said. "Configuration files are no longer passive settings. They can influence execution, networking, and permissions. As AI integration deepens, security controls must evolve to match the new trust boundaries." Fortunately, CPR says all issues were resolved prior to public disclosure. Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button! And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.
[3]
Check Point Researchers Uncover Critical Flaws in Claude Code
Redirect authenticated API traffic to external infrastructure In collaborative AI environments, a single compromised key can become a gateway to broader enterprise exposure. This issue was assigned CVE-2026-21852. Why the API Key Exposure Mattered Anthropic's API includes a feature called Workspaces, which allows multiple API keys to share access to project files stored in the cloud. Files are associated with the workspace itself, not a single key. With a stolen key, an attacker could potentially: In collaborative AI ecosystems, a single exposed key can scale from individual compromise to team-wide impact. A New Supply Chain Risk in AI Tools These vulnerabilities reflect a broader structural shift in how software supply chains operate. Modern development platforms increasingly rely on repository-based configuration files to automate workflows and streamline collaboration. Traditionally, these files were treated as passive metadata - not as execution logic. However, as AI-powered tools gain the ability to execute commands, initialize external integrations, and initiate network communication autonomously, configuration files effectively become part of the execution layer. What was once considered operational context now directly influences system behavior. This fundamentally alters the threat model. The risk is no longer limited to running untrusted code - it now extends to opening untrusted projects. In AI-driven development environments, the supply chain begins not only with source code, but with the automation layers surrounding it.
[4]
Check Point Researchers Expose Critical Claude Code Flaws
By Aviv Donenfeld and Oded Vanunu * Critical vulnerabilities, CVE-2025-59536 and CVE-2026-21852, in Anthropic's Claude Code enabled remote code execution and API key theft through malicious repository-level configuration files, triggered simply by cloning and opening an untrusted project * Built-in mechanisms -- including Hooks, MCP integrations, and environment variables -- could be abused to bypass trust controls, execute hidden shell commands, and redirect authenticated API traffic before user consent * Stolen Anthropic API keys posed enterprise-wide risk, particularly in shared workspaces where a single compromised key could expose, modify, or delete shared files and resources and generate unauthorized costs * The findings highlight a broader shift in the AI supply chain threat model: repository configuration files now function as part of the execution layer, requiring updated security controls to address AI-driven automation risks As organizations rapidly adopt agentic AI development tools into enterprise workflows, the trust boundaries between configuration and execution are increasingly blurred. Check Point Research identified critical vulnerabilities in Anthropic's Claude Code that enabled remote code execution and API credential theft through malicious repository-based configuration files. By abusing built-in mechanisms such as Hooks, Model Context Protocol (MCP) integrations, and environment variables, attackers could execute arbitrary shell commands and exfiltrate API keys when developers cloned and opened untrusted projects - without any additional action beyond launching the tool. In effect, configuration files intended to streamline collaboration became active execution paths, introducing a new attack vector within the AI-powered development layer now embedded in the enterprise supply chain, raising a broader question: has the enterprise threat model evolved to match this new reality? How a Single Repository File Became an Attack Vector Claude Code was designed to streamline collaboration by embedding project-level configuration files directly within repositories, automatically applying them when a developer opens Claude Code inside the project directory. Check Point Research found that these files, typically perceived as harmless operational metadata, could in fact function as an active execution layer. In certain scenarios, simply cloning and opening a malicious repository was enough to: * Trigger hidden commands on the developer's endpoint * Bypass built-in consent and trust safeguards * Expose active Anthropic API keys and turn them into an access vector * Extend the impact from an individual workstation to shared enterprise cloud workspaces * All without any visible indication that a compromise had already begun. What was intended to optimize collaboration effectively became a silent attack vector within the AI-powered development workflow How Developers Could Be Affected The risks fell into three categories. * Silent Command Execution via Claude Hooks Claude Code includes automation capabilities that allow predefined actions to run when a session begins. Check Point Research demonstrated that this mechanism could be abused to execute arbitrary shell commands automatically upon tool initialization. In practice, this means that simply opening a malicious repository could trigger hidden execution on a developer's machine - without any additional interaction beyond launching the project. * MCP User Consent Bypass Claude Code integrates with external tools via the Model Context Protocol (MCP), enabling additional services to be initialized when a project is opened. Although warning prompts were designed to require explicit user approval, researchers found that repository-controlled configuration settings could override these safeguards. As a result, execution could occur: * Before the user granted consent * Without meaningful visibility into what was being initialized * Despite built-in trust prompts intended to prevent such behavior When code runs before trust is established, the control model is inverted - shifting authority from the user to repository-defined configuration and expanding the AI-driven attack surface. This issue was assigned CVE-2025-59536. * API Key Theft Before Trust Confirmation Claude Code communicates with Anthropic's services using an API key, transmitted with each authenticated request. By manipulating a repository-controlled configuration setting, researchers demonstrated that API traffic , including the full authorization header, could be redirected to an attacker-controlled server before the user confirmed trust in the project directory. This meant that simply opening a malicious repository could: * Exfiltrate a developer's active API key * Redirect authenticated API traffic to external infrastructure * Capture credentials before any trust decision was made In collaborative AI environments, a single compromised key can become a gateway to broader enterprise exposure. This issue was assigned CVE-2026-21852. Why the API Key Exposure Mattered Anthropic's API includes a feature called Workspaces, which allows multiple API keys to share access to project files stored in the cloud. Files are associated with the workspace itself, not a single key. With a stolen key, an attacker could potentially: * Access shared project files * Modify or delete cloud-stored data * Upload malicious content * Generate unexpected API costs In collaborative AI ecosystems, a single exposed key can scale from individual compromise to team-wide impact. A New Supply Chain Risk in AI Tools These vulnerabilities reflect a broader structural shift in how software supply chains operate. Modern development platforms increasingly rely on repository-based configuration files to automate workflows and streamline collaboration. Traditionally, these files were treated as passive metadata - not as execution logic. However, as AI-powered tools gain the ability to execute commands, initialize external integrations, and initiate network communication autonomously, configuration files effectively become part of the execution layer. What was once considered operational context now directly influences system behavior. This fundamentally alters the threat model. The risk is no longer limited to running untrusted code - it now extends to opening untrusted projects. In AI-driven development environments, the supply chain begins not only with source code, but with the automation layers surrounding it. Remediation and Disclosure Check Point Research worked closely with Anthropic throughout the disclosure process. Anthropic implemented fixes that: * Strengthened user trust prompts * Prevented external tool execution before explicit approval * Blocked API communications until after trust confirmation All reported issues have been resolved prior to public disclosure. Why This Matters AI-powered coding tools are rapidly becoming part of enterprise development workflows. Their productivity benefits are significant, but so is the need to reassess traditional security assumptions. Configuration files are no longer passive settings. They can influence execution, networking, and permissions. As AI integration deepens, security controls must evolve to match the new trust boundaries.
Share
Share
Copy Link
Check Point Research disclosed critical security vulnerabilities in Anthropic's Claude Code that enable remote code execution and API key exfiltration. The flaws exploit configuration mechanisms including Hooks, Model Context Protocol servers, and environment variables, executing arbitrary commands when developers clone untrusted repositories. All issues were patched before public disclosure.
Check Point Research has disclosed multiple security vulnerabilities in Anthropic's Claude Code, an AI coding assistant that integrates directly into developer environments. The flaws enable remote code execution and API key theft through malicious repository-based configuration files, fundamentally challenging how developers assess trust in AI-powered development tools
1
. Two vulnerabilities received formal designations: CVE-2025-59536, rated 8.7 out of 10, and CVE-2026-21852, rated 5.3 out of 102
. A third code injection vulnerability was identified but has not yet been assigned a CVE designation2
.
Source: CXOToday
The vulnerabilities exploit various configuration mechanisms embedded within Claude Code, including Hooks, Model Context Protocol (MCP) servers, and environment variables. Simply cloning and opening an untrusted project triggers these exploits without requiring additional user interaction beyond launching the tool
1
. Attackers can craft malicious repositories containing specially designed project-level configuration files and distribute them through phishing emails or fake job assignments2
. When developers open these projects in Claude Code, the tool automatically loads the configuration, allowing attackers to abuse built-in mechanisms and trigger hidden shell commands before user consent prompts appear4
.
Source: DT
CVE-2026-21852 represents a particularly concerning vulnerability involving API key theft before trust confirmation. If a developer opens Claude Code in an attacker-controlled repository containing a settings file that sets ANTHROPIC_BASE_URL to an attacker-controlled endpoint, Claude Code issues API requests before displaying the trust prompt, potentially leaking the user's API keys
1
. This authenticated API traffic redirection allows attackers to capture credentials and burrow deeper into the victim's AI infrastructure1
. The implications extend beyond individual compromise, as Anthropic's API includes Workspaces that allow multiple API keys to share access to project files stored in the cloud3
. With a stolen key, attackers could potentially access shared project files, modify or delete cloud-stored data, upload malicious content, and generate unexpected API costs1
.
Source: Hacker News
Related Stories
CVE-2025-59536 achieves similar exploitation goals through a different mechanism. Repository-defined configurations in .mcp.json and claude/settings.json files can be exploited to override explicit user approval prior to interacting with external tools and services through the Model Context Protocol (MCP)
1
. This bypass occurs by setting the "enableAllProjectMcpServers" option to true, allowing execution before the user grants consent and without meaningful visibility into what is being initialized4
. Additionally, Claude Code includes automation capabilities through Hooks that allow predefined actions to run when a session begins. Check Point Research demonstrated that this mechanism could be abused to execute arbitrary shell commands automatically upon tool initialization, triggering hidden execution on a developer's machine without any additional interaction4
.These untrusted project vulnerabilities reflect a fundamental shift in how software supply chains operate. As AI-powered tools gain the ability to execute commands, initialize external integrations, and initiate network communication autonomously, configuration files effectively become part of the execution layer
1
. What was once considered operational context now directly influences system behavior, fundamentally altering the threat model1
. The risk is no longer limited to running untrusted code—it now extends to opening untrusted projects. In AI-driven development environments, the supply chain begins not only with source code but with the automation layers surrounding it1
. Check Point Research emphasized that configuration files are no longer passive settings but can influence execution, networking, and permissions, requiring organizations to reassess traditional security assumptions as AI integration deepens2
. Security controls must evolve to match the new trust boundaries created by AI-powered development tools2
. Fortunately, all issues were resolved by Anthropic prior to public disclosure2
.Summarized by
Navi
07 Dec 2025•Technology

18 Mar 2026•Technology
07 Aug 2025•Technology

1
Technology

2
Policy and Regulation

3
Business and Economy
