3 Sources
[1]
Anthropic quietly fixed flaws in its Git MCP server
Anthropic has fixed three bugs in its official Git MCP server that researchers say can be chained with other MCP tools to remotely execute malicious code or overwrite files via prompt injection. The Git MCP server, mcp-server-git, connects AI tools such as Copilot, Claude, and Cursor to Git repositories and the GitHub platform, allowing them to read repositories and code files, and automate workflows, all using natural language interactions. Agentic AI security startup Cyata found a way to exploit the vulnerabilities - a path validation bypass flaw (CVE-2025-68145), an unrestricted git_init issue (CVE-2025-68143), and an argument injection in git_diff (CVE-2025-68144) - and chain the Git MCP server with the Filesystem MCP server to achieve code execution. "Agentic systems break in unexpected ways when multiple components interact. Each MCP server might look safe in isolation, but combine two of them, Git and Filesystem in this case, and you get a toxic combination," Cyata security researcher Yarden Porat told The Register, adding that there's no indication that attackers exploited the bugs in the wild. "As organizations adopt more complex agentic systems with multiple tools and integrations, these combinations will multiply," Porat said. Cyata reported the three vulnerabilities to Anthropic in June, and the AI company fixed them in December. The flaws affect default deployments of mcp-server-git prior to 2025.12.18 - so make sure you're using the updated version. The Register reached out to Anthropic for this story, but the company did not respond to our inquiries. In a Tuesday report shared with The Register ahead of publication, Cyata says the issues stem from the way AI systems connect to external data sources. In 2024, Anthropic introduced the Model Context Protocol (MCP), an open standard that enables LLMs to interact with these other systems - filesystems, databases, APIs, messaging platforms, and development tools like Git. 
MCP servers act as the bridge between the model and external sources, providing the AI with access to the data or tools it needs.

As we've seen repeatedly over the past year, LLMs can be manipulated into doing things they're not supposed to do via prompt injection, which happens when attacker-controlled input causes an AI system to follow unintended instructions. It's a problem that's not going away anytime soon - and may never. There are two types: direct and indirect. Direct prompt injection happens when someone directly submits malicious input, while indirect injection happens when content contains hidden commands that the AI then follows as if the user had entered them.

This attack abuses the three now-fixed vulnerabilities:

CVE-2025-68145: The --repository flag is supposed to restrict the MCP server to a specific repository path. However, the server didn't validate that repo_path arguments in subsequent tool calls stayed within that configured path, allowing an attacker to bypass security boundaries and access any repository on the system.

CVE-2025-68143: The git_init tool accepted arbitrary filesystem paths and created Git repositories without any validation, allowing any directory to be turned into a Git repository and made eligible for subsequent git operations through the MCP server. To fix this, Anthropic removed the git_init tool from the server.

CVE-2025-68144: The git_diff and git_checkout functions passed user-controlled arguments directly to the GitPython library without sanitization. "By injecting '--output=/path/to/file' into the 'target' field, an attacker could overwrite any file with an empty diff," and delete files, Cyata explained in the report.

As Porat explained to us, the attack uses indirect prompt injection: "Your IDE reads something malicious, a README file, a webpage, a GitHub issue, somewhere the attacker has planted instructions," he said.
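The standard defense against this class of path bug is containment: resolve the requested path and verify it stays inside the configured repository. A minimal sketch, using a hypothetical helper name (`resolve_within`) rather than the package's actual code:

```python
import os

def resolve_within(base_repo: str, repo_path: str) -> str:
    """Resolve repo_path and ensure it cannot escape the configured repo.

    Hypothetical helper illustrating the containment check that
    CVE-2025-68145 showed was missing; not taken from mcp-server-git.
    """
    base = os.path.realpath(base_repo)
    # realpath collapses ".." segments and resolves symlinks
    target = os.path.realpath(os.path.join(base, repo_path))
    # commonpath also rejects sibling-prefix tricks like /srv/repo-evil
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"repo_path escapes configured repository: {repo_path!r}")
    return target
```

With --repository pointing at `base`, a tool call whose repo_path resolves elsewhere (for example via a chain of `../` segments) is rejected instead of silently operated on.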
The vulnerabilities, when combined with the Filesystem MCP server, abuse Git's smudge and clean filters - which execute shell commands defined in repository configuration files - and enable remote code execution. According to Porat, it's a four-step process.

This attack illustrates how, as more AI agents move into production, security has to keep pace. "Security teams can't evaluate each MCP server in a vacuum," Porat said. "They need to assess the effective permissions of the entire agentic system, understand what tools can be chained together, and put controls in place. MCPs expand what agents can do, but they also expand the attack surface. Trust shouldn't be assumed, it needs to be verified and controlled." ®
[2]
Three Flaws in Anthropic MCP Git Server Enable File Access and Code Execution
A set of three security vulnerabilities has been disclosed in mcp-server-git, the official Git Model Context Protocol (MCP) server maintained by Anthropic, that could be exploited to read or delete arbitrary files and execute code under certain conditions.

"These flaws can be exploited through prompt injection, meaning an attacker who can influence what an AI assistant reads (a malicious README, a poisoned issue description, a compromised webpage) can weaponize these vulnerabilities without any direct access to the victim's system," Cyata researcher Yarden Porat said in a report shared with The Hacker News.

Mcp-server-git is a Python package and an MCP server that provides a set of built-in tools to read, search, and manipulate Git repositories programmatically via large language models (LLMs). The security issues, which have been addressed in versions 2025.9.25 and 2025.12.18 following responsible disclosure in June 2025, are listed below -

* CVE-2025-68143 (CVSS score: 8.8 [v3] / 6.5 [v4]) - A path traversal vulnerability arising as a result of the git_init tool accepting arbitrary file system paths during repository creation without validation (Fixed in version 2025.9.25)
* CVE-2025-68144 (CVSS score: 8.1 [v3] / 6.4 [v4]) - An argument injection vulnerability arising as a result of git_diff and git_checkout functions passing user-controlled arguments directly to git CLI commands without sanitization (Fixed in version 2025.12.18)
* CVE-2025-68145 (CVSS score: 7.1 [v3] / 6.3 [v4]) - A path traversal vulnerability arising as a result of a missing path validation when using the --repository flag to limit operations to a specific repository path (Fixed in version 2025.12.18)

Successful exploitation of the above vulnerabilities could allow an attacker to turn any directory on the system into a Git repository, overwrite any file with an empty diff, and access any repository on the server.
In an attack scenario documented by Cyata, the three vulnerabilities could be chained with the Filesystem MCP server to write to a ".git/config" file (typically located within the hidden .git directory) and achieve remote code execution by triggering a call to git_init by means of a prompt injection:

* Use git_init to create a repo in a writable directory
* Use the Filesystem MCP server to write a malicious .git/config with a clean filter
* Write a .gitattributes file to apply the filter to certain files
* Write a shell script with the payload
* Write a file that triggers the filter
* Call git_add, which executes the clean filter, running the payload

In response to the findings, the git_init tool has been removed from the package, and extra validation has been added to prevent path traversal primitives. Users of the Python package are recommended to update to the latest version for optimal protection.

"This is the canonical Git MCP server, the one developers are expected to copy," Shahar Tal, CEO and co-founder of Agentic AI security company Cyata, said. "If security boundaries break down even in the reference implementation, it's a signal that the entire MCP ecosystem needs deeper scrutiny. These are not edge cases or exotic configurations, they work out of the box."
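The filter mechanism at the heart of that chain is easy to reproduce locally. The sketch below drives a throwaway repository through plain `subprocess` calls (no MCP server involved) to show why write access to `.git/config` amounts to code execution: staging a file routed through a `clean` filter runs whatever command the config names. The benign `touch` stands in for the attacker's payload script.

```python
import os
import subprocess
import tempfile

repo = tempfile.mkdtemp()
subprocess.run(["git", "init", "-q", repo], check=True)

# Attacker-controlled .git/config: the "clean" filter is an arbitrary
# shell command; the trailing "cat" passes file content through unchanged.
marker = os.path.join(repo, "filter-ran")
subprocess.run(
    ["git", "-C", repo, "config", "filter.demo.clean", f"touch '{marker}'; cat"],
    check=True,
)

# .gitattributes routes *.txt files through the filter.
with open(os.path.join(repo, ".gitattributes"), "w") as f:
    f.write("*.txt filter=demo\n")

# Staging a matching file triggers the filter command.
with open(os.path.join(repo, "trigger.txt"), "w") as f:
    f.write("hello\n")
subprocess.run(["git", "-C", repo, "add", "."], check=True)

filter_executed = os.path.exists(marker)  # the command ran during git add
```

Nothing here requires a commit or a push; `git add` alone is enough, which is why the chain ends at the git_add tool call.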
[3]
Anthropic's official Git MCP server had some worrying security flaws - this is what happened next
* Anthropic patched Git MCP flaws enabling remote code execution via tool chaining
* Cyata discovered the CVEs; fixed in version 2025.12.18, no exploitation reported yet
* Claude previously manipulated in a cyber espionage campaign targeting major global organizations

Anthropic, the company behind the popular AI model Claude, has fixed multiple bugs in its Git MCP server which, researchers claim, can be chained with other MCP tools to enable remote code execution (RCE) or file tampering through prompt injection.

The Git MCP server is Anthropic's Model Context Protocol service that lets AI tools read and interact with Git repositories. It's important because it allows the AI to understand real codebases, or answer coding questions, without unsafe or unrestricted access.

The bugs were found by agentic AI security startup Cyata, and are as follows:

* Path validation bypass flaw (CVE-2025-68145)
* Unrestricted git_init issue (CVE-2025-68143)
* Argument injection in git_diff (CVE-2025-68144)

Fixed in December

The researchers said that by chaining the Git MCP server with the Filesystem MCP server, they were able to execute arbitrary code, remotely. "Agentic systems break in unexpected ways when multiple components interact. Each MCP server might look safe in isolation, but combine two of them, Git and Filesystem in this case, and you get a toxic combination," Cyata told The Register. "As organizations adopt more complex agentic systems with multiple tools and integrations, these combinations will multiply."

Cyata reported the flaws in June, and Anthropic fixed them in December 2025, The Register says. Users should make sure they're running version 2025.12.18. So far, there is no evidence that the bugs were exploited in the wild.

Artificial intelligence is promising major disruptions across industries. As such, businesses scramble to implement it, leaving all sorts of vulnerabilities that cybercriminals can exploit.
In mid-November 2025, Anthropic said Claude was being used in an agentic capacity, not just as an advisor but also in executing a cyberattack itself. The company said a highly sophisticated cyber espionage campaign manipulated Anthropic's Claude Code tool in attempts to infiltrate roughly 30 global targets - primarily large tech companies, government agencies, and financial institutions.
Anthropic fixed three security vulnerabilities in its official Git MCP server that researchers say could be chained with other tools to execute malicious code or overwrite files through prompt injection. The flaws, discovered by Cyata, were patched in December 2025 after responsible disclosure in June, with no evidence of active exploitation in the wild.
Anthropic has quietly patched three security vulnerabilities in its official Git MCP server that could enable attackers to execute malicious code or manipulate files through prompt injection attacks [1]. The mcp-server-git package, which connects AI tools such as Copilot, Claude, and Cursor to Git repositories and the GitHub platform, contained flaws that researchers at agentic AI security startup Cyata discovered could be weaponized when combined with other MCP tools [2].
Source: TechRadar
The security vulnerabilities include a path validation bypass flaw (CVE-2025-68145), an unrestricted git_init issue (CVE-2025-68143), and an argument injection in git_diff (CVE-2025-68144) [3]. Cyata reported these issues to Anthropic in June 2025, and the company addressed them in December, with fixes rolled out in versions 2025.9.25 and 2025.12.18 [2]. Users running default deployments of mcp-server-git prior to version 2025.12.18 need to update immediately, though there's no indication that attackers exploited the bugs in the wild [1].

The research reveals a troubling reality about AI security: individual components may appear secure in isolation, but combining them creates unexpected attack vectors. "Agentic systems break in unexpected ways when multiple components interact. Each MCP server might look safe in isolation, but combine two of them, Git and Filesystem in this case, and you get a toxic combination," Cyata security researcher Yarden Porat explained [1]. The researchers demonstrated that by chaining the Git MCP server with the Filesystem MCP server, they could achieve remote code execution through a multi-step process that abuses Git's smudge and clean filters [1].
Source: Hacker News
The CVE-2025-68145 path traversal vulnerability stemmed from inadequate validation of the --repository flag, which was supposed to restrict the MCP server to a specific repository path. However, the server failed to validate that repo_path arguments in subsequent tool calls remained within that configured path, allowing attackers to bypass security boundaries and access any Git repository on the system [1]. The CVE-2025-68143 flaw involved the git_init tool accepting arbitrary filesystem paths without validation, enabling any directory to be converted into a Git repository eligible for subsequent operations [2]. Anthropic's fix involved removing the git_init tool entirely from the server [1].

The argument injection vulnerability (CVE-2025-68144) affected the git_diff and git_checkout functions, which passed user-controlled arguments directly to the GitPython library without sanitization. By injecting '--output=/path/to/file' into the 'target' field, an attacker could overwrite any file with an empty diff or delete files [1].
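The generic defense against this class of argument injection is to refuse option-like values before user input reaches the git command line. A minimal sketch with a hypothetical guard function, not the package's actual fix:

```python
def reject_option_like(values):
    """Raise if any user-supplied value would be parsed as a git option.

    Hypothetical guard illustrating the CVE-2025-68144 class of fix:
    a 'target' of "--output=/path/to/file" must be treated as data,
    never as a flag.
    """
    for value in values:
        if value.startswith("-"):
            raise ValueError(f"option-like argument rejected: {value!r}")
    return list(values)

# Belt and braces: even validated revision arguments can be preceded by
# git's --end-of-options marker so the CLI stops parsing flags, e.g.
# ["git", "diff", "--end-of-options", *reject_option_like(targets)].
```

Either measure alone would have turned the report's '--output' payload into a harmless error instead of a file overwrite.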
The attack vector relies on indirect prompt injection, where AI systems can be manipulated into following unintended instructions embedded in content they process. "Your IDE reads something malicious, a README file, a webpage, a GitHub issue, somewhere the attacker has planted instructions," Porat described [1]. This means an attacker who can influence what an AI assistant reads (a malicious README, a poisoned issue description, a compromised webpage) can weaponize these security vulnerabilities without any direct access to the victim's system [2].
The Model Context Protocol, introduced by Anthropic in 2024, serves as an open standard enabling LLMs to interact with external systems including filesystems, databases, APIs, messaging platforms, and development tools like Git [1]. MCP servers act as bridges between models and external sources, providing AI with access to necessary data or tools.

"This is the canonical Git MCP server, the one developers are expected to copy," said Shahar Tal, CEO and co-founder of Cyata. "If security boundaries break down even in the reference implementation, it's a signal that the entire MCP ecosystem needs deeper scrutiny. These are not edge cases or exotic configurations, they work out of the box" [2]. The discovery highlights how the attack surface expands as organizations adopt more complex agentic systems with multiple tools and integrations [1].
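Concretely, an MCP tool invocation travels as a JSON-RPC 2.0 `tools/call` request, so an injected value arrives looking like ordinary argument data. The sketch below shows that envelope; the git_diff argument names are assumptions based on the tools described above, not the package's exact schema:

```python
import json

# Hypothetical request an assistant might emit after reading poisoned
# content. "target" carries the report's example payload: the field
# CVE-2025-68144 let through to the git CLI unsanitized.
call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "git_diff",
        "arguments": {
            "repo_path": "/home/dev/project",
            "target": "--output=/path/to/file",
        },
    },
}
wire = json.dumps(call)
```

Nothing in the envelope distinguishes this from a benign diff request, which is why validation has to happen inside the server rather than at the transport layer.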
Porat emphasized that security teams cannot evaluate each MCP server in a vacuum: they need to assess the effective permissions of the entire agentic system, understand which tools can be chained together, and implement appropriate controls. Trust shouldn't be assumed; it needs to be verified and controlled [1]. This incident follows previous security concerns, including a November 2025 cyber espionage campaign that manipulated the Claude Code tool in attempts to infiltrate roughly 30 global targets, primarily large tech companies, government agencies, and financial institutions [3].