4 Sources
[1]
Vibe coding tool Cursor allows persistent code execution
Check Point researchers uncovered a remote code execution bug in popular vibe-coding AI tool Cursor that could allow an attacker to poison developer environments by secretly modifying a previously approved Model Context Protocol (MCP) configuration, silently swapping it for a malicious command without any user prompt. The good news: Cursor released an update (version 1.3) on July 29 that fixes the issue and requires user approval every time an MCP Server entry is modified. So if you use the AI-powered code editor, update to run the latest version and ensure you're not giving miscreants complete access to your machine every time you open Cursor. While Cursor addressed the flaw, Check Point thinks the vulnerability highlights a major AI supply chain risk. "The flaw exposes a critical weakness in the trust model behind AI-assisted development environments, raising the stakes for teams integrating LLMs and automation into their workflows," the security shop's research team wrote in a Tuesday blog. MCP is an open-source protocol that Anthropic introduced in November 2024 to allow AI-based systems, like agents and large language models (LLMs), to connect to external data sources and interact with each other. While MCP does make those processes easier, it also opens the door to a whole new attack surface and related security threats, which researchers have had fun poking holes in since its rollout. Cursor is an AI integrated development environment (IDE) that uses LLMs to help write and debug code - and it also requires a certain level of trust, especially in multi-user environments using shared code, configuration files and AI-based plugins. "We set out to evaluate whether the trust and validation model for MCP execution in Cursor properly accounted for changes over time, especially in cases where a previously approved configuration is later modified," Check Point researchers Andrey Charikov, Roman Zaikin and Oded Vanunu said in a technical write-up also published Tuesday. "In collaborative development scenarios, such changes are common - and any gaps in validation could lead to command injection, code execution, or persistent compromise," the trio added. And as you can probably guess, the researchers did find such a validation gap and showed how it could be abused by altering an already-approved MCP server configuration to trigger malicious code execution every time a project is opened in Cursor. The team dubbed the vuln "MCPoison", and it essentially boils down to Cursor's one-time approval for MCP configurations. Once Cursor approves an initial configuration, it trusts all future modifications without requiring any new validation. An attacker could easily exploit this trust by adding a benign MCP configuration with a harmless command to a shared repository, waiting for someone to approve it, and then later changing the same entry so it executes a malicious command, which will then be executed silently on the victim's machine every time Cursor is reopened. The Check Point team also published a proof-of-concept demonstrating this type of persistent remote code execution by first getting a non-malicious MCP command approved and then replacing it with a reverse-shell payload, thus gaining access to the victim's machine every time they open the Cursor project. This vulnerability disclosure is just the first in a series of flaws that Check Point researchers uncovered in developer-focused AI platforms, we're told. "As AI-assisted coding tools and LLM-integrated environments continue to shape modern software workflows, CPR will publish further findings that highlight overlooked risks and help raise the security bar across this emerging ecosystem," the trio wrote. So stay tuned for more fun with AI tools coming soon. ®
[2]
Cursor AI Code Editor Vulnerability Enables RCE via Malicious MCP File Swaps Post Approval
Cybersecurity researchers have disclosed a high-severity security flaw in the artificial intelligence (AI)-powered code editor Cursor that could result in remote code execution. The vulnerability, tracked as CVE-2025-54136 (CVSS score: 7.2), has been codenamed MCPoison by Check Point Research, owing to the fact that it exploits a quirk in the way the software handles modifications to Model Context Protocol (MCP) server configurations. "A vulnerability in Cursor AI allows an attacker to achieve remote and persistent code execution by modifying an already trusted MCP configuration file inside a shared GitHub repository or editing the file locally on the target's machine," Cursor said in an advisory released last week. "Once a collaborator accepts a harmless MCP, the attacker can silently swap it for a malicious command (e.g., calc.exe) without triggering any warning or re-prompt." MCP is an open-standard developed by Anthropic that allows large language models (LLMs) to interact with external tools, data, and services in a standardized manner. It was introduced by the AI company in November 2024. CVE-2025-54136, per Check Point, has to do with how it's possible for an attacker to alter the behavior of an MCP configuration after a user has approved it within Cursor. Specifically, it unfolds as follows - * Add a benign-looking MCP configuration (".cursor/rules/mcp.json") to a shared repository * Wait for the victim to pull the code and approve it once in Cursor * Replace the MCP configuration with a malicious payload, e.g., launch a script or run a backdoor * Achieve persistent code execution every time the victim opens the Cursor The fundamental problem here is that once a configuration is approved, it's trusted by Cursor indefinitely for future runs, even if it has been changed. Successful exploitation of the vulnerability not only exposes organizations to supply chain risks, but also opens the door to data and intellectual property theft without their knowledge. Following responsible disclosure on July 16, 2025, the issue has been addressed by Cursor in version 1.3 released late July 2025 by requiring user approval every time an entry in the MCP configuration file is modified. "The flaw exposes a critical weakness in the trust model behind AI-assisted development environments, raising the stakes for teams integrating LLMs and automation into their workflows," Check Point said. The development comes days after Aim Labs, Backslash Security, and HiddenLayer exposed multiple weaknesses in the AI tool that could have been abused to obtain remote code execution and bypass its denylist-based protections. They have also been patched in version 1.3. The findings also coincide with the growing adoption of AI in business workflows, including using LLMs for code generation, broadening the attack surface to various emerging risks like AI supply chain attacks, unsafe code, model poisoning, prompt injection, hallucinations, inappropriate responses, and data leakage - * A test of over 100 LLMs for their ability to write Java, Python, C#, and JavaScript code has found that 45% of the generated code samples failed security tests and introduced OWASP Top 10 security vulnerabilities. Java led with a 72% security failure rate, followed by C# (45%), JavaScript (43%), and Python (38%). * An attack called LegalPwn has revealed that it's possible to leverage legal disclaimers, terms of service, or privacy policies as a novel prompt injection vector, highlighting how malicious instructions can be embedded within legitimate, but often overlooked, textual components to trigger unintended behavior in LLMs, such as misclassifying malicious code as safe and offering unsafe code suggestions that can execute a reverse shell on the developer's system. * An attack called man-in-the-prompt that employs a rogue browser extension with no special permissions to open a new browser tab in the background, launch an AI chatbot, and inject them with malicious prompts to covertly extract data and compromise model integrity. This takes advantage of the fact that any browser add-on with scripting access to the Document Object Model (DOM) can read from, or write to, the AI prompt directly. * A jailbreak technique called Fallacy Failure that manipulates an LLM into accepting logically invalid premises and causes it to produce otherwise restricted outputs, thereby deceiving the model into breaking its own rules. * An attack called MAS hijacking that manipulates the control flow of a multi-agent system (MAS) to execute arbitrary malicious code across domains, mediums, and topologies by weaponizing the agentic nature of AI systems. * A technique called Poisoned GPT-Generated Unified Format (GGUF) Templates that targets the AI model inference pipeline by embedding malicious instructions within the chat template files that execute during the inference phase to compromise outputs. By positioning the attack between input validation and model output, the approach is both sneaky and bypasses AI guardrails. With GGUF files distributed via services like Hugging Face, the attack exploits the supply chain trust model to trigger the attack. * An attacker can target the machine learning (ML) training environments like MLFlow, Amazon SageMaker, and Azure ML to compromise the confidentiality, integrity and availability of the models, ultimately leading to lateral movement, privilege escalation, as well as training data and model theft and poisoning. * A study by Anthropic has uncovered that LLMs can learn hidden characteristics during distillation, a phenomenon called subliminal learning, that causes models to transmit behavioral traits through generated data that appears completely unrelated to those traits, potentially leading to misalignment and harmful behavior. "As Large Language Models become deeply embedded in agent workflows, enterprise copilots, and developer tools, the risk posed by these jailbreaks escalates significantly," Pillar Security's Dor Sarig said. "Modern jailbreaks can propagate through contextual chains, infecting one AI component and leading to cascading logic failures across interconnected systems." "These attacks highlight that AI security requires a new paradigm, as they bypass traditional safeguards without relying on architectural flaws or CVEs. The vulnerability lies in the very language and reasoning the model is designed to emulate."
[3]
AI-powered Cursor IDE vulnerable to prompt-injection attacks
A vulnerability that researchers call CurXecute is present in almost all versions of the AI-powered code editor Cursor, and can be exploited to execute remote code with developer privileges. The security issue is now identified as CVE-2025-54135 and can be leveraged by feeding the AI agent a malicious prompt to trigger attacker-control commands. The Cursor integrated development environment (IDE) relies on AI agents to help developers code faster and more efficiently, allowing them to connect with external resources and systems using the Model Context Protocol (MCP). According to the researchers, a hacker successfully exploiting the CurXecute vulnerability could open the door to ransomware and data theft incidents. CurXecute is similar to the EchoLeak vulnerability in Microsoft 365 CoPilot that could be used to steal sensitive data without any user interaction. After discovering and understanding EchoLeak, the researchers at Aim Security, an AI cybersecurity company, learned that even local AI agent could be influenced by an external factor for malicious actions. Cursor IDE has support for the MCP open-standard framework, which extends an agent's capabilities and context by allowing it to connect to external data sources and tools. However, the researchers warn that this can compromise the agent as it is exposed to external, untrusted data that can affect its control flow. A hacker could leverage this to hijack the agents session and privileges to act on behalf of the user. By using an externally-hosted prompt injection, an attacker could rewrite the ~/.cursor/mcp.json file in the project directory to enable remote execution of arbitrary commands. The researchers explain that Cursor does not require confirmation for executing new entries to the ~/.cursor/mcp.json file and that suggested edits to are live and trigger the execution of the command even if the user rejects them. In a report shared with BleepingComputer, Aim Security says that adding to Cursor a standard MCP server, such as Slack, could expose the agent to untrusted data. An attacker could post to a public channel a malicious prompt with an injection payload for the mcp.json configuration file. When the victim opens the new chat and instructs the agent to summarize the messages, the payload, which could be a shell, lands on the disk immediately without the user's approval. Aim Security researchers say that a CurXecute attack may lead to ransomware and data theft incidents, or even AI manipulation through hallucination that can ruin the project, or enable slopsquatting attacks. The researchers reported CurXecute privately to Cursor on July 7 and the next day the vendor merged a patch into the main branch. On July 29, Cursor version 1.3 was released with multiple improvements and a fix for CurXecute. Cursor also published a security advisory for CVE-2025-54135, which received a medium-severity score of 8.6. Users are recommended to download and install the latest version of Cursor to avoid known security risks.
[4]
Cursor AI Code Editor Fixed Flaw Allowing Attackers to Run Commands via Prompt Injection
Cybersecurity researchers have disclosed a now-patched, high-severity security flaw in Cursor, a popular artificial intelligence (AI) code editor, that could result in remote code execution. The vulnerability, tracked as CVE-2025-54135 (CVSS score: 8.6), has been addressed in version 1.3 released on July 29, 2025. It has been codenamed CurXecute by Aim Labs, which previously disclosed EchoLeak. "Cursor runs with developer‑level privileges, and when paired with an MCP server that fetches untrusted external data, that data can redirect the agent's control flow and exploit those privileges," the Aim Labs Team said in a report shared with The Hacker News. "By feeding poisoned data to the agent via MCP, an attacker can gain full remote code execution under the user privileges, and achieve any number of things, including opportunities for ransomware, data theft, AI manipulation and hallucinations, etc." In other words, the remote code execution triggered by a single externally‑hosted prompt‑injection that silently rewrites the "~/.cursor/mcp.json" file and runs attacker‑controlled commands. The vulnerability is similar to EchoLeak in that the tools, which are exposed by Model Control Protocol (MCP) servers for use by AI models and facilitate interaction with external systems, such as querying databases or invoking APIs, could fetch untrusted data that can poison the agent's expected behavior. Specifically, Aim Security found that the mcp.json file used to configure custom MCP servers in Cursor can trigger the execution of any new entry (e.g., adding a Slack MCP server) without requiring any confirmation. This auto-run mode is particularly dangerous because it can lead to the automatic execution of a malicious payload that's injected by the attacker via a Slack message. The attack sequence proceeds as follows - * User adds Slack MCP server via Cursor UI * Attacker posts message in a public Slack channel with the command injection payload * Victim opens a new chat and asks Cursor's agent to use the newly configured Slack MCP server to summarize their messages in a prompt: "Use Slack tools to summarize my messages" * The agent encounters a specially crafted message designed to inject malicious commands to its context "The core cause of the flaw is that new entries to the global MCP JSON file are starting automatically," Aim Security said. "Even if the edit is rejected, the code execution had already happened." The entire attack is noteworthy for its simplicity. But it also highlights how AI-assisted tools can open up new attack surfaces when processing external content, in this case, any third-party MCP server. "As AI agents keep bridging external, internal, and interactive worlds, security models must assume external context may affect the agent runtime - and monitor every hop," the company added. Version 1.3 of Cursor also addresses another issue with auto-run mode that can easily circumvent the platform's denylist-based protections using methods like Base64-encoding, shell scripts, and enclosing shell commands within quotes (e.g., "e"cho bypass) to execute unsafe commands. Following responsible disclosure by the BackSlash Research Team, Cursor has taken the step of altogether deprecating the denylist feature for auto-run in favor of an allowlist. "Don't expect the built-in security solutions provided by vibe coding platforms to be comprehensive or foolproof," researchers Mustafa Naamneh and Micah Gold said. "The onus is on end-user organizations to ensure agentic systems are equipped with proper guardrails." The disclosure comes as HiddenLayer also found that Cursor's ineffective denylist approach can be weaponized by embedding hidden malicious instructions with a GitHub README.md file, allowing an attacker to steal API keys, SSH credentials, and even run blocked system commands. "When the victim viewed the project on GitHub, the prompt injection was not visible, and they asked Cursor to git clone the project and help them set it up, a common occurrence for an IDE-based agentic system," researchers Kasimir Schulz, Kenneth Yeung, and Tom Bonner noted. "However, after cloning the project and reviewing the readme to see the instructions to set up the project, the prompt injection took over the AI model and forced it to use the grep tool to find any keys in the user's workspace before exfiltrating the keys with curl." HiddenLayer said it also found additional weaknesses that could be abused to leak Cursor's system prompt by overriding the base URL provided for OpenAI API requests to a proxied model, as well as exfiltrate a user's private SSH keys by leveraging two benign tools, read_file and create_diagram, in what's called a tool combination attack. This essentially involves inserting a prompt injection command within a GitHub README.md file that's parsed by Cursor when the victim user asks the code editor to summarize the file, resulting in the execution of the command. The hidden instruction, for its part, uses the read_file tool to read private SSH keys belonging to the user and then utilizes the create_diagram tool to exfiltrate the keys to an attacker-controlled webhook.site URL. All the identified shortcomings have been remediated by Cursor in version 1.3. News of various vulnerabilities in Cursor comes as Tracebit devised an attack targeting Google's Gemini CLI, an open-source command-line tool fine-tuned for coding tasks, that exploited a default configuration of the tool to surreptitiously exfiltrate sensitive data to an attacker-controlled server using curl. Like observed in the case of Cursor, the attack requires the victim to (1) instruct Gemini CLI to interact with an attacker-created GitHub codebase containing a nefarious indirect prompt injection in the GEMINI.md context file and (2) add a benign command to an allowlist (e.g., grep). "Prompt injection targeting these elements, together with significant validation and display issues within Gemini CLI could cause undetectable arbitrary code execution," Tracebit founder and CTO Sam Cox said. To mitigate the risk posed by the attack, Gemini CLI users are advised to upgrade their installations to version 0.1.14 shipped on July 25, 2025.
Share
Copy Link
Multiple security flaws discovered in the AI-powered code editor Cursor, including a high-severity vulnerability that could lead to remote code execution, highlighting potential risks in AI-assisted development tools.
Cybersecurity researchers have uncovered a series of high-severity vulnerabilities in Cursor, a popular AI-powered code editor. The most critical flaw, dubbed "MCPoison" (CVE-2025-54136), could allow attackers to achieve remote code execution by exploiting the way Cursor handles Model Context Protocol (MCP) server configurations 1.
Source: The Register
The MCPoison vulnerability stems from Cursor's one-time approval process for MCP configurations. Once an initial configuration is approved, Cursor trusts all future modifications without requiring new validation. This trust model can be exploited by attackers to silently swap a benign MCP command with a malicious payload, potentially gaining persistent access to a victim's machine 2.
Source: The Hacker News
Researchers also identified another vulnerability called "CurXecute" (CVE-2025-54135), which allows attackers to execute remote code with developer privileges by feeding the AI agent a malicious prompt. This flaw could potentially lead to ransomware attacks, data theft, and AI manipulation 3.
Attackers could exploit these vulnerabilities through various methods:
Source: Bleeping Computer
These vulnerabilities highlight the potential risks associated with AI-powered development tools. As AI agents bridge external, internal, and interactive worlds, security models must account for how external context can affect agent runtime 4.
Cursor has addressed these vulnerabilities in version 1.3, released on July 29, 2025. Key improvements include:
The discovery of these vulnerabilities has raised concerns about the security of AI-assisted coding tools. Check Point Research warns that this is just the first in a series of flaws they've uncovered in developer-focused AI platforms, suggesting that more security issues may come to light in the near future 1.
As AI continues to shape modern software workflows, cybersecurity researchers emphasize the need for robust security measures and thorough vetting of AI-powered development tools to mitigate potential risks and protect sensitive data and intellectual property.
NVIDIA announces significant upgrades to its GeForce NOW cloud gaming service, including RTX 5080-class performance, improved streaming quality, and an expanded game library, set to launch in September 2025.
9 Sources
Technology
2 hrs ago
9 Sources
Technology
2 hrs ago
As nations compete for dominance in space, the risk of satellite hijacking and space-based weapons escalates, transforming outer space into a potential battlefield with far-reaching consequences for global security and economy.
7 Sources
Technology
18 hrs ago
7 Sources
Technology
18 hrs ago
OpenAI updates GPT-5 to make it more approachable following user feedback, sparking debate about AI personality and user preferences.
6 Sources
Technology
10 hrs ago
6 Sources
Technology
10 hrs ago
A pro-Russian propaganda group, Storm-1679, is using AI-generated content and impersonating legitimate news outlets to spread disinformation, raising concerns about the growing threat of AI-powered fake news.
2 Sources
Technology
18 hrs ago
2 Sources
Technology
18 hrs ago
A study reveals patients' increasing reliance on AI for medical advice, often trusting it over doctors. This trend is reshaping doctor-patient dynamics and raising concerns about AI's limitations in healthcare.
3 Sources
Health
10 hrs ago
3 Sources
Health
10 hrs ago