IDEsaster research reveals 30+ critical flaws in AI coding tools enabling data theft and RCE

Reviewed by Nidhi Govil


Security researchers have uncovered over 30 vulnerabilities across every major AI-powered IDE tested, including GitHub Copilot, Cursor, and Visual Studio Code. The IDEsaster findings reveal how AI agents can be manipulated through prompt injection to leak sensitive data or execute malicious code, with 24 CVEs assigned and 100% of tested tools affected.

Critical Security Vulnerabilities Affect Every AI-Powered IDE Tested

A six-month investigation into AI coding tools has exposed a sweeping security crisis affecting every major development platform. Security researcher Ari Marzouk uncovered over 30 security vulnerabilities that enable data exfiltration and remote code execution across AI-assisted development tools, with 100% of tested platforms found vulnerable [1][2]. The research, dubbed IDEsaster, affects widely used AI-powered Integrated Development Environments including GitHub Copilot, Cursor, Windsurf, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude Code. At least 24 CVEs have been assigned, with additional advisories issued by AWS [1].

Source: Hacker News

How AI Agents Transform Legacy IDE Features Into Attack Surfaces

The core problem stems from a fundamental mismatch between traditional IDE architecture and autonomous AI capabilities. Visual Studio Code, JetBrains products, Zed, and other platforms were designed decades before AI agents existed, never anticipating components capable of autonomously reading, editing, and generating files. "All AI IDEs effectively ignore the base software in their threat model. They treat their features as inherently safe because they've been there for years. However, once you add AI agents that can act autonomously, the same features can be weaponized into data exfiltration and RCE primitives," Marzouk told The Hacker News [1]. This disconnect creates an IDE-agnostic attack chain that works across platforms sharing similar base software layers.

Universal Attack Chain Exploits Prompt Injection and Context Hijacking

The IDEsaster attack chain follows a three-stage pattern common to AI coding assistants. It begins with context hijacking via prompt injection, where hidden instructions are planted in configuration files, READMEs, file names, or outputs from malicious Model Context Protocol servers [2]. These prompt injection primitives can include user-added context references with invisible characters that humans cannot see but large language models parse readily. Once an AI agent processes that poisoned context, its tools can be directed to perform seemingly legitimate actions that trigger unsafe behaviors in the underlying IDE. The final stage abuses built-in features to extract sensitive information or execute attacker-controlled code [1].
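To make the "invisible characters" vector concrete, the short Python sketch below scans a workspace for zero-width and bidirectional-control code points in common text files. It is an illustrative heuristic under assumed file types and code points, not tooling from the IDEsaster research.

```python
# Hypothetical heuristic sketch (not part of the IDEsaster research):
# flag text files containing invisible Unicode characters that could
# smuggle prompt-injection payloads past a human reviewer.
import pathlib
import sys

# Zero-width and bidirectional-control code points often abused to hide
# text from human readers while remaining readable to an LLM.
SUSPICIOUS = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE appearing mid-file
    "\u202e",  # RIGHT-TO-LEFT OVERRIDE
    "\u2066", "\u2067", "\u2068", "\u2069",  # bidi isolate controls
}

TEXT_SUFFIXES = {".md", ".json", ".txt", ".yaml", ".yml"}  # assumed scope

def scan(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in TEXT_SUFFIXES:
            continue
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        hits = [i for i, ch in enumerate(text) if ch in SUSPICIOUS]
        if hits:
            print(f"{path}: {len(hits)} invisible character(s), first at offset {hits[0]}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```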

Real-World Exploits Demonstrate Data Theft Through Legitimate Features

One documented exploit involves writing a JSON file that references a remote schema. The IDE automatically fetches that schema, inadvertently leaking parameters embedded by the AI agent, including sensitive data collected earlier in the attack chain. Visual Studio Code, JetBrains IDEs, and Zed all exhibited this behavior, with even developer safeguards like diff previews failing to suppress the outbound request [1]. Another case study demonstrates full remote code execution through manipulated IDE settings. By editing an executable file already present in the workspace and modifying configuration fields such as php.validate.executablePath, attackers can cause the IDE to immediately run arbitrary code when a related file type is opened or created [1].
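Both exploits leave recognizable traces in workspace files: a JSON document whose "$schema" points at a remote host, or a settings key ending in "executablePath" redirected to a file inside the project. The Python sketch below is a hypothetical audit pass over those two patterns; the .vscode/settings.json location and the key matching are assumptions for illustration, not tooling from the research.

```python
# Hypothetical audit sketch based on the two exploit patterns described above;
# the settings location and key patterns are assumptions, not vendor guidance.
import json
import pathlib

def audit(root: str) -> None:
    root_path = pathlib.Path(root).resolve()

    # 1) JSON files whose "$schema" is fetched from a remote host: the IDE's
    #    automatic schema download can leak whatever the agent embedded in the URL.
    for path in root_path.rglob("*.json"):
        try:
            data = json.loads(path.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue
        schema = data.get("$schema", "") if isinstance(data, dict) else ""
        if isinstance(schema, str) and schema.startswith(("http://", "https://")):
            print(f"[schema] {path} fetches remote schema: {schema}")

    # 2) Settings that point an "...executablePath"-style key at a file inside
    #    the workspace, which the IDE may execute when a matching file is opened.
    settings = root_path / ".vscode" / "settings.json"
    if settings.is_file():
        try:
            conf = json.loads(settings.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            conf = {}
        for key, value in conf.items():
            if key.lower().endswith("executablepath") and isinstance(value, str):
                target = pathlib.Path(value)
                resolved = target if target.is_absolute() else (root_path / target)
                if str(resolved.resolve()).startswith(str(root_path)):
                    print(f"[settings] {key} points inside the workspace: {value}")

if __name__ == "__main__":
    audit(".")
```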

Source: Tom's Hardware

Enterprise Environments Face Expanded Attack Surface From Agentic AI

As agentic AI tools gain traction in enterprise development workflows, these findings reveal how AI components fundamentally expand the attack surface of development machines. The vulnerabilities exploit an LLM's inability to distinguish between legitimate user instructions and malicious content ingested from external sources. "Any repository using AI for issue triage, PR labeling, code suggestions, or automated replies is at risk of prompt injection, command injection, secret exfiltration, repository compromise and upstream supply chain compromise," warned Aikido researcher Rein Daelman

2

. The auto-approve behavior for in-workspace file writes, enabled by default in many AI coding assistants, allows arbitrary code execution without user interaction or workspace reopening

2

.
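The risk here is that an agent's in-workspace writes happen with no human in the loop. As a minimal sketch of the opposite policy, the hypothetical Python gate below requires explicit confirmation before an agent-side tool layer writes to configuration-like paths; the path list and prompt flow are assumptions, not any vendor's API.

```python
# Minimal, hypothetical approval gate for agent file writes; the sensitive-path
# list and confirmation flow are illustrative assumptions only.
import pathlib

# Paths whose modification can change IDE behavior and therefore should
# never be auto-approved (illustrative list).
SENSITIVE = (".vscode/settings.json", ".idea", ".cursorrules", "mcp.json")

def guarded_write(workspace: str, relative_path: str, content: str) -> bool:
    root = pathlib.Path(workspace).resolve()
    target = (root / relative_path).resolve()

    # Reject path traversal out of the workspace outright.
    if not str(target).startswith(str(root)):
        print(f"denied: {relative_path} escapes the workspace")
        return False

    # Require explicit confirmation for configuration-like paths instead of
    # the default auto-approve behavior criticized in the article.
    if any(marker in str(target) for marker in SENSITIVE):
        answer = input(f"Agent wants to write {relative_path}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            print("denied by user")
            return False

    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content, encoding="utf-8")
    return True
```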

Secure for AI Principle Emerges as Long-Term Solution

Marzouk emphasizes that short-term fixes cannot eliminate this vulnerability class because current IDEs were not built under what he calls the "Secure for AI" principle. This new paradigm ensures products are conceived with AI component abuse in mind from the start, going beyond secure-by-default and secure-by-design approaches [1][2]. Recommended mitigations include applying the principle of least privilege to LLM tools, minimizing prompt injection vectors, hardening system prompts, using sandboxing to run commands, and performing security testing for path traversal, information leakage, and command injection [2]. The long-term fix requires fundamentally redesigning how IDEs allow AI agents to read, write, and act inside projects, addressing the threat model gap that currently treats legacy features as inherently safe despite their new autonomous context [1].
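As one concrete reading of "least privilege" and "sandboxing" for LLM tools, the following Python sketch (hypothetical, not taken from the article's sources) runs agent-requested commands only from a small allowlist, without shell interpretation, and with a bounded runtime.

```python
# Hypothetical least-privilege command runner for an agent tool layer:
# allowlisted binaries only, no shell interpretation, bounded runtime.
import shlex
import subprocess

ALLOWED_BINARIES = {"git", "ls", "cat", "pytest"}  # illustrative allowlist

def run_tool_command(command: str, workdir: str, timeout: int = 30) -> str:
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_BINARIES:
        raise PermissionError(f"binary not allowlisted: {argv[:1]}")

    # shell=False keeps the command string from being reinterpreted by a shell,
    # closing off classic command-injection tricks such as appended "; curl ...".
    result = subprocess.run(
        argv,
        cwd=workdir,
        capture_output=True,
        text=True,
        timeout=timeout,
        shell=False,
    )
    return result.stdout
```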
