Anthropic patches critical Git MCP server flaws that enabled remote code execution via AI tools


Anthropic fixed three security vulnerabilities in its official Git MCP server that researchers say could be chained with other tools to execute malicious code or overwrite files through prompt injection. The flaws, discovered by Cyata, were patched in December 2025 after responsible disclosure in June, with no evidence of active exploitation in the wild.

Anthropic Addresses Critical Security Vulnerabilities in Git MCP Server

Anthropic has quietly patched three security vulnerabilities in its official Git MCP server that could enable attackers to execute malicious code or manipulate files through prompt injection attacks [1]. The mcp-server-git package, which connects AI tools such as Copilot, Claude, and Cursor to Git repositories and the GitHub platform, contained flaws that researchers at agentic AI security startup Cyata discovered could be weaponized when combined with other MCP tools [2].

Source: TechRadar

The security vulnerabilities include a path validation bypass flaw (CVE-2025-68145), an unrestricted git_init issue (CVE-2025-68143), and an argument injection in git_diff (CVE-2025-68144) [3]. Cyata reported these issues to Anthropic in June 2025, and the company addressed them in December, with fixes rolled out in versions 2025.9.25 and 2025.12.18 [2]. Users running default deployments of mcp-server-git prior to version 2025.12.18 need to update immediately, though there is no indication that attackers exploited the bugs in the wild [1].

How Tool Chaining Amplifies Risk in Agentic AI Systems

The research reveals a troubling reality about AI security: individual components may appear secure in isolation, but combining them creates unexpected attack vectors. "Agentic systems break in unexpected ways when multiple components interact. Each MCP server might look safe in isolation, but combine two of them, Git and Filesystem in this case, and you get a toxic combination," Cyata security researcher Yarden Porat explained [1]. The researchers demonstrated that by chaining the Git MCP server with the Filesystem MCP server, they could achieve remote code execution through a multi-step process that abuses Git's smudge and clean filters [1].
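The primitive behind that chain is a standard Git feature: a smudge filter runs a configured shell command whenever a matching file is checked out. The benign sketch below illustrates the mechanism only; Cyata's actual chain used the Filesystem MCP server to plant the filter configuration and the Git MCP server to trigger the checkout, and all names and paths here are illustrative.

```python
import os
import pathlib
import subprocess
import tempfile

repo = tempfile.mkdtemp()

def git(*args):
    subprocess.run(["git", *args], cwd=repo, check=True,
                   capture_output=True, text=True)

git("init", "-q")
git("config", "user.email", "demo@example.com")
git("config", "user.name", "demo")

# Step 1: a .gitattributes file assigns every path to a filter.
pathlib.Path(repo, ".gitattributes").write_text("* filter=demo\n")
# Step 2: the filter's smudge command is attacker-chosen; here it
# just drops a marker file instead of doing anything malicious.
git("config", "filter.demo.smudge", f"touch {repo}/FILTER_RAN && cat")

pathlib.Path(repo, "readme.txt").write_text("hello\n")
git("add", ".")
git("commit", "-qm", "seed")

# Step 3: checking the file out re-materialises it through the
# filter, which executes the configured command.
os.remove(os.path.join(repo, "readme.txt"))
git("checkout", "--", "readme.txt")
print(os.path.exists(os.path.join(repo, "FILTER_RAN")))
```

Because the filter fires on ordinary checkout operations, an agent that can write two small files and then run any checkout-like Git command has everything it needs for code execution.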

Source: Hacker News

Understanding the Path Traversal Vulnerability and Argument Injection Flaws

The CVE-2025-68145 path traversal vulnerability stemmed from inadequate validation of the --repository flag, which was supposed to restrict the MCP server to a specific repository path. However, the server failed to validate that repo_path arguments in subsequent tool calls remained within that configured path, allowing attackers to bypass security boundaries and access any Git repository on the system [1]. The CVE-2025-68143 flaw involved the git_init tool accepting arbitrary filesystem paths without validation, enabling any directory to be converted into a Git repository eligible for subsequent operations [2]. Anthropic's fix removed the git_init tool from the server entirely [1].
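The missing check is the classic containment test for path traversal: resolve the user-supplied path before comparing it to the configured root, so that `..` segments and symlinks cannot slip past a naive prefix match. A minimal sketch of such a guard (function name and paths are hypothetical, not the patched server's actual code):

```python
from pathlib import Path

def validate_repo_path(repo_path: str, allowed_root: str) -> Path:
    """Reject repo_path values that escape the configured root."""
    # Resolve symlinks and '..' segments first, so that a value like
    # '/srv/repos/../../etc' cannot pass a simple startswith() check.
    root = Path(allowed_root).resolve()
    candidate = Path(repo_path).resolve()
    if candidate != root and root not in candidate.parents:
        raise ValueError(f"{repo_path!r} escapes configured root {allowed_root!r}")
    return candidate
```

With this check, `validate_repo_path("/opt/repos/project", "/opt/repos")` passes, while `validate_repo_path("/opt/repos/../../etc", "/opt/repos")` raises, because the resolved path no longer lies under the root.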

The argument injection vulnerability (CVE-2025-68144) affected the git_diff and git_checkout functions, which passed user-controlled arguments directly to the GitPython library without sanitization. By injecting '--output=/path/to/file' into the 'target' field, an attacker could overwrite any file with an empty diff or delete files [1].
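The underlying issue is that GitPython forwards these strings to the git binary, which happily parses a "target" beginning with a dash as an option. A common mitigation, sketched here with a hypothetical helper rather than the server's actual fix, is to reject option-like values before they ever reach the git call:

```python
def reject_option_injection(*user_args: str) -> list[str]:
    """Hypothetical guard for user values bound to positional git arguments.

    A diff target or branch name must never start with '-', or git will
    parse it as an option such as '--output=/path/to/file'.
    """
    for arg in user_args:
        if arg.startswith("-"):
            raise ValueError(f"option-like argument rejected: {arg!r}")
    return list(user_args)
```

For path arguments, passing an explicit `--` separator to git gives the same guarantee, since git stops option parsing at that point.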

Prompt Injection Remains a Persistent Threat to LLMs

The attack vector relies on indirect prompt injection, where AI systems can be manipulated into following unintended instructions embedded in content they process. "Your IDE reads something malicious, a README file, a webpage, a GitHub issue, somewhere the attacker has planted instructions," Porat described [1]. This means an attacker who can influence what an AI assistant reads, through a malicious README, a poisoned issue description, or a compromised webpage, can weaponize these security vulnerabilities without any direct access to the victim's system [2].

The Model Context Protocol, introduced by Anthropic in 2024, serves as an open standard enabling LLMs to interact with external systems including filesystems, databases, APIs, messaging platforms, and development tools like Git [1]. MCP servers act as bridges between models and external sources, providing AI with access to necessary data or tools.
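Concretely, an MCP client invokes a server tool with a JSON-RPC 2.0 message. The fragment below shows roughly what a call to the Git server's diff tool looks like; the tool and argument names mirror those discussed above, while the repository path is an invented example and the exact schema is defined by the MCP specification:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "git_diff",
    "arguments": {
      "repo_path": "/srv/repos/project",
      "target": "main"
    }
  }
}
```

Every string in `arguments` is ultimately model-controlled, which is why unvalidated `repo_path` and `target` values were exploitable.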

Implications for AI Security as Agentic Systems Expand

"This is the canonical Git MCP server, the one developers are expected to copy," said Shahar Tal, CEO and co-founder of Cyata. "If security boundaries break down even in the reference implementation, it's a signal that the entire MCP ecosystem needs deeper scrutiny. These are not edge cases or exotic configurations, they work out of the box" [2]. The discovery highlights how the attack surface expands as organizations adopt more complex agentic systems with multiple tools and integrations [1].

Porat emphasized that security teams cannot evaluate each MCP server in a vacuum: they need to assess the effective permissions of the entire agentic system, understand which tools can be chained together, and implement appropriate controls. Trust shouldn't be assumed; it needs to be verified and controlled [1]. This incident follows previous security concerns, including a November 2025 cyber espionage campaign that manipulated the Claude Code tool in attempts to infiltrate roughly 30 global targets, primarily large tech companies, government agencies, and financial institutions [3].
