Claude Code security vulnerabilities expose developers to API key theft and remote code execution

4 Sources

Share

Check Point Research disclosed critical security vulnerabilities in Anthropic's Claude Code that enable remote code execution and API key exfiltration. The flaws exploit configuration mechanisms including Hooks, Model Context Protocol servers, and environment variables, executing arbitrary commands when developers clone untrusted repositories. All issues were patched before public disclosure.

Critical Security Vulnerabilities Discovered in Claude Code

Check Point Research has disclosed multiple security vulnerabilities in Anthropic's Claude Code, an AI coding assistant that integrates directly into developer environments. The flaws enable remote code execution and API key theft through malicious repository-based configuration files, fundamentally challenging how developers assess trust in AI-powered development tools

1

. Two vulnerabilities received formal designations: CVE-2025-59536, rated 8.7 out of 10, and CVE-2026-21852, rated 5.3 out of 10

2

. A third code injection vulnerability was identified but has not yet been assigned a CVE designation

2

.

Source: CXOToday

Source: CXOToday

How Malicious Repositories Become Attack Vectors

The vulnerabilities exploit various configuration mechanisms embedded within Claude Code, including Hooks, Model Context Protocol (MCP) servers, and environment variables. Simply cloning and opening an untrusted project triggers these exploits without requiring additional user interaction beyond launching the tool

1

. Attackers can craft malicious repositories containing specially designed project-level configuration files and distribute them through phishing emails or fake job assignments

2

. When developers open these projects in Claude Code, the tool automatically loads the configuration, allowing attackers to abuse built-in mechanisms and trigger hidden shell commands before user consent prompts appear

4

.

Source: DT

Source: DT

API Key Exfiltration Through Configuration File Exploits

CVE-2026-21852 represents a particularly concerning vulnerability involving API key theft before trust confirmation. If a developer opens Claude Code in an attacker-controlled repository containing a settings file that sets ANTHROPIC_BASE_URL to an attacker-controlled endpoint, Claude Code issues API requests before displaying the trust prompt, potentially leaking the user's API keys

1

. This authenticated API traffic redirection allows attackers to capture credentials and burrow deeper into the victim's AI infrastructure

1

. The implications extend beyond individual compromise, as Anthropic's API includes Workspaces that allow multiple API keys to share access to project files stored in the cloud

3

. With a stolen key, attackers could potentially access shared project files, modify or delete cloud-stored data, upload malicious content, and generate unexpected API costs

1

.

Source: Hacker News

Source: Hacker News

MCP User Consent Bypass and Silent Command Execution

CVE-2025-59536 achieves similar exploitation goals through a different mechanism. Repository-defined configurations in .mcp.json and claude/settings.json files can be exploited to override explicit user approval prior to interacting with external tools and services through the Model Context Protocol (MCP)

1

. This bypass occurs by setting the "enableAllProjectMcpServers" option to true, allowing execution before the user grants consent and without meaningful visibility into what is being initialized

4

. Additionally, Claude Code includes automation capabilities through Hooks that allow predefined actions to run when a session begins. Check Point Research demonstrated that this mechanism could be abused to execute arbitrary shell commands automatically upon tool initialization, triggering hidden execution on a developer's machine without any additional interaction

4

.

AI Supply Chain Risk and the Evolving Threat Model

These untrusted project vulnerabilities reflect a fundamental shift in how software supply chains operate. As AI-powered tools gain the ability to execute commands, initialize external integrations, and initiate network communication autonomously, configuration files effectively become part of the execution layer

1

. What was once considered operational context now directly influences system behavior, fundamentally altering the threat model

1

. The risk is no longer limited to running untrusted code—it now extends to opening untrusted projects. In AI-driven development environments, the supply chain begins not only with source code but with the automation layers surrounding it

1

. Check Point Research emphasized that configuration files are no longer passive settings but can influence execution, networking, and permissions, requiring organizations to reassess traditional security assumptions as AI integration deepens

2

. Security controls must evolve to match the new trust boundaries created by AI-powered development tools

2

. Fortunately, all issues were resolved by Anthropic prior to public disclosure

2

.

Today's Top Stories

TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2026 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo