9 Sources
[1]
Zero-click AI data leak flaw uncovered in Microsoft 365 Copilot
A new attack dubbed 'EchoLeak' is the first known zero-click AI vulnerability that enables attackers to exfiltrate sensitive data from Microsoft 365 Copilot from a user's context without interaction. The attack was devised by Aim Labs researchers in January 2025, who reported their findings to Microsoft. The tech giant assigned the CVE-2025-32711 identifier to the information disclosure flaw, rating it critical, and fixed it server-side in May, so no user action is required. Also, Microsoft noted that there's no evidence of any real-world exploitation, so this flaw impacted no customers. Microsoft 365 Copilot is an AI assistant built into Office apps like Word, Excel, Outlook, and Teams that uses OpenAI's GPT models and Microsoft Graph to help users generate content, analyze data, and answer questions based on their organization's internal files, emails, and chats. Though fixed and never maliciously exploited, EchoLeak holds significance for demonstrating a new class of vulnerabilities called 'LLM Scope Violation,' which causes a large language model (LLM) to leak privileged internal data without user intent or interaction. As the attack requires no interaction with the victim, it can be automated to perform silent data exfiltration in enterprise environments, highlighting how dangerous these flaws can be when deployed against AI-integrated systems. The attack begins with a malicious email sent to the target, containing text unrelated to Copilot and formatted to look like a typical business document. The email embeds a hidden prompt injection crafted to instruct the LLM to extract and exfiltrate sensitive internal data. Because the prompt is phrased like a normal message to a human, it bypasses Microsoft's XPIA (cross-prompt injection attack) classifier protections. Later, when the user asks Copilot a related business question, the email is retrieved into the LLM's prompt context by the Retrieval-Augmented Generation (RAG) engine due to its formatting and apparent relevance. The malicious injection, now reaching the LLM, "tricks" it into pulling sensitive internal data and inserting it into a crafted link or image. Aim Labs found that some markdown image formats cause the browser to request the image, which sends the URL automatically, including the embedded data, to the attacker's server. Microsoft CSP blocks most external domains, but Microsoft Teams and SharePoint URLs are trusted, so these can be abused to exfiltrate data without problem. EchoLeak may have been fixed, but the increasing complexity and deeper integration of LLM applications into business workflows are already overwhelming traditional defenses. The same trend is bound to create new weaponizable flaws adversaries can stealthily exploit for high-impact attacks. It is important for enterprises to strengthen their prompt injection filters, implement granular input scoping, and apply post-processing filters on LLM output to block responses that contain external links or structured data. Moreover, RAG engines can be configured to exclude external communications to avoid retrieving malicious prompts in the first place.
[2]
Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction
A novel attack technique named EchoLeak has been characterized as a "zero-click" artificial intelligence (AI) vulnerability that allows bad actors to exfiltrate sensitive data from Microsoft 365 Copilot's context sans any user interaction. The critical-rated vulnerability has been assigned the CVE identifier CVE-2025-32711 (CVSS score: 9.3). It requires no customer action and has been already addressed by Microsoft. There is no evidence that the shortcoming was exploited maliciously in the wild. "AI command injection in M365 Copilot allows an unauthorized attacker to disclose information over a network," the company said in an advisory released Wednesday. It has since been added to Microsoft's Patch Tuesday list for June 2025, taking the total number of fixed flaws to 68. Aim Security, which discovered and reported the issue, said it's an instance of large language model (LLM) Scope Violation that paves the way for indirect prompt injection, leading to unintended behavior. LLM Scope Violation occurs when an attacker's instructions embedded in untrusted content, e.g., an email sent from outside an organization, successfully tricks the AI system into accessing and processing privileged internal data without explicit user intent or interaction. "The chains allow attackers to automatically exfiltrate sensitive and proprietary information from M365 Copilot context, without the user's awareness, or relying on any specific victim behavior," the Israeli cybersecurity company said. "The result is achieved despite M365 Copilot's interface being open only to organization employees." The attack sequence unfolds as follows - "As a zero-click AI vulnerability, EchoLeak opens up extensive opportunities for data exfiltration and extortion attacks for motivated threat actors," Aim Security said. "In an ever-evolving agentic world, it showcases the potential risks that are inherent in the design of agents and chatbots." "The attack results in allowing the attacker to exfiltrate the most sensitive data from the current LLM context - and the LLM is being used against itself in making sure that the MOST sensitive data from the LLM context is being leaked, does not rely on specific user behavior, and can be executed both in single-turn conversations and multi-turn conversations." MCP and Advanced Tool Poisoning The disclosure comes as CyberArk disclosed a tool poisoning attack (TPA) that affects the Model Context Protocol (MCP) standard and goes beyond the tool description to extend it across the entire tool schema. The attack technique has been codenamed Full-Schema Poisoning (FSP). "While most of the attention around tool poisoning attacks has focused on the description field, this vastly underestimates the other potential attack surface," security researcher Simcha Kosman said. "Every part of the tool schema is a potential injection point, not just the description." The cybersecurity company said the problem is rooted in MCP's "fundamentally optimistic trust model" that equates syntactic correctness to semantic safety and assumes that LLMs only reason over explicitly documented behaviors. What's more, TPA and FSP could be weaponized to stage advanced tool poisoning attacks (ATPA), wherein the attacker designs a tool with a benign description but displays a fake error message that tricks the LLM into accessing sensitive data (e.g., SSH keys) in order to address the purported issue. "As LLM agents become more capable and autonomous, their interaction with external tools through protocols like MCP will define how safely and reliably they operate," Kosman said. "Tool poisoning attacks -- especially advanced forms like ATPA -- expose critical blind spots in current implementations." That's not all. Given that MCP enables AI agents (or assistants) to interact with various tools, services, and data sources in a consistent manner, any vulnerability in the MCP client-server architecture could pose serious security risks, including manipulating an agent into leaking data or executing malicious code. This is evidenced in a recently disclosed critical security flaw in the popular GitHub MCP integration, which, if successfully exploited, could allow an attacker to hijack a user's agent via a malicious GitHub issue, and coerce it into leaking data from private repositories when the user prompts the model to "take a look at the issues." "The issue contains a payload that will be executed by the agent as soon as it queries the public repository's list of issues," Invariant Labs researchers Marco Milanta and Luca Beurer-Kellner said, categorizing it as a case of a toxic agent flow. That said, the vulnerability cannot be addressed by GitHub alone through server-side patches, as it's more of a "fundamental architectural issue," necessitating that users implement granular permission controls to ensure that the agent has access to only those repositories it needs to interact with and continuously audit interactions between agents and MCP systems. Make Way for the MCP Rebinding Attack The rapid ascent of MCP as the "connective tissue for enterprise automation and agentic applications" has also opened up new attack avenues, such as Domain Name System (DNS) rebinding, to access sensitive data by exploiting Server-Sent Events (SSE), a protocol used by MCP servers for real-time streaming communication to the MCP clients. DNS rebinding attacks entail tricking a victim's browser into treating an external domain as if it belongs to the internal network (i.e., localhost). These attacks, which are engineered to circumvent same-origin policy (SOP) restrictions, are triggered when a user visits a malicious site set up by the attacker via phishing or social engineering. "There is a disconnect between the browser security mechanism and networking protocols," GitHub's Jaroslav Lobacevski said in an explainer on DNS rebinding published this week. "If the resolved IP address of the web page host changes, the browser doesn't take it into account and treats the webpage as if its origin didn't change. This can be abused by attackers" This behavior essentially allows client-side JavaScript from a malicious site to bypass security controls and target other devices on the victim's private network that are not exposed to the public internet. The MCP rebinding attack takes advantage of an adversary-controlled website's ability to access internal resources on the victim's local network so as to interact with the MCP server running on localhost over SSE and ultimately exfiltrate confidential data. "By abusing SSE's long-lived connections, attackers can pivot from an external phishing domain to target internal MCP servers," the Straiker AI Research (STAR) team said in an analysis published last month. It's worth noting that SSE has been deprecated as of November 2024 in favor of Streamable HTTP owing to the risks posed by DNS rebinding attacks. To mitigate the threat of such attacks, it's advised to enforce authentication on MCP Servers and validate the "Origin" header on all incoming connections to the MCP server to ensure that the requests are coming from trusted sources.
[3]
Microsoft fixes first known zero-click attack on an AI agent
TL;DR: Microsoft has patched a critical zero-click vulnerability in Copilot that allowed remote attackers to automatically exfiltrate sensitive user data simply by sending an email. Dubbed "EchoLeak," the security flaw is being described by cybersecurity researchers as the first known zero-click attack targeting an AI assistant. EchoLeak affected Microsoft 365 Copilot, the AI assistant integrated across several Office applications, including Word, Excel, Outlook, PowerPoint, and Teams. According to researchers at Aim Security, who discovered the vulnerability, the exploit allowed attackers to access sensitive information from apps and data sources connected to Copilot without any user interaction. Alarmingly, the malicious email did not contain any phishing links or malware attachments. Instead, the attack leveraged a novel technique known as LLM Scope Violation, which manipulates the internal logic of large language models to turn the AI agent against itself. Researchers warn that this approach could be used to compromise other Retrieval-Augmented Generation chatbots and AI agents in the future. Because it targets fundamental design flaws in how these systems manage context and data access, even advanced platforms such as Anthropic's Model Context Protocol and Salesforce's Agentforce could be vulnerable. Aim Security discovered the flaw in January and promptly reported it to the Microsoft Security Response Center. However, the company took nearly five months to resolve the issue, a timeline that co-founder and CTO Adir Gruss described as on the "very high side of something like this." Microsoft reportedly had a hotfix ready by April, but the patch was delayed after engineers uncovered additional vulnerabilities in May. The company initially attempted to contain EchoLeak by blocking its pathways across affected apps, but those efforts failed due to the unpredictable behavior of AI and the vast attack surface it presents. Following the final update, Microsoft issued a statement thanking Aim Security for responsibly disclosing the issue and confirmed that it had been fully mitigated. The fix was automatically applied to all impacted products and requires no action from end users. Although there are no known cases of EchoLeak being exploited in the wild, many Fortune 500 companies are reportedly "super afraid" and now re-evaluating their strategies for deploying AI agents across enterprise environments. According to Gruss, the industry needs to implement robust guardrails to prevent similar incidents in the future. In the meantime, Aim Security is providing interim mitigations to clients using AI agents potentially vulnerable to the same class of attack. But Gruss believes a long-term solution will require a fundamental redesign of how AI agents are built and deployed.
[4]
Microsoft Copilot targeted in first "zero-click" attack on an AI agent - what you need to know
Microsoft says it has fixed the issue server-side, but users should be on guard Microsoft has fixed a dangerous zero-click attack in its Generative Artificial Intelligence (GenAI) model which could have allowed threat actors to silently exfiltrate sensitive corporate data without (almost) any user interaction. Cybersecurity researchers Aim Labs, who found the flaw, known as an "LLM Scope Violation", and dubbed it EchoLeak. Here is how it works: A threat actor sends a seemingly innocuous email message to the target, which contains a hidden prompt that instructs Copilot to exfiltrate sensitive data to an attacker-controlled server. Since Copilot is integrated into Microsoft 365, that data can include anything from intellectual property files, to business contracts and legal documents, or from internal communications, to financial data. The researchers note the prompt needs to be phrased like speaking to a human, so that it bypasses Microsoft's XPIA (cross-prompt injection attack) defenses. Later, when the victim interacts with Copilot and asks a business-related question, the LLM will pull all of the relevant data (including the attackers' email message) and will end up executing it. The files are stored in a crafted link or an image. The bug was assigned the CVE-2025-32711 identifier, and was given a severity score of 9.3/10 (critical). It was fixed server-side in May, meaning users don't need to do anything. Microsoft also said that there is no evidence that the flaw had been exploited in the past, and none of its customers were impacted. Microsoft 365 is one of the most popular cloud-based communications and online collaboration tools, combining office apps (Word, Excel, and others), cloud storage (OneDrive and SharePoint), email and calendar (Outlook, Exchange), and communications tools (Teams). Recently, Microsoft integrated its Generative AI model, Copilot, into Microsoft 365, allowing users to draft and summarize emails, generate and edit documents, create data visualizations and analyze trends, and more.
[5]
Aim Security details first known AI zero-click exploit targeting Microsoft 365 Copilot - SiliconANGLE
Aim Security details first known AI zero-click exploit targeting Microsoft 365 Copilot A new report out today from Aim Security Ltd. has revealed the first known zero-click artificial intelligence vulnerability that could have allowed attackers to exfiltrate sensitive internal data without any user interaction. The vulnerability, dubbed "EchoLeak," was found in Microsoft Corp.'s 365 Copilot generative AI tool in January and reported to Microsoft at the time. Aim has only come forward with the details now that the vulnerability has been addressed. The vulnerability involved what Aim describes as an "LLM Scope Violation," referring to scenarios where a large language model can be manipulated into leaking information beyond its intended context. In the case of the EchoLeak vulnerability, it involved crafting a malicious email containing specific markdown syntax that could slip past Microsoft's Cross-Prompt Injection Attack defenses. The markdown in the malicious email utilizes reference-style image and link formats to bypass Copilot's sanitization filters, ensuring the payload is preserved when the AI assistant retrieves and processes the email. From there, the exploit could then make use of Microsoft's own trusted domains, including SharePoint and Teams, which are whitelisted under Copilot's content security policies. The domains can be used to embed external links or images that, when rendered by Copilot, automatically issue outbound requests. The attackers can redirect the content to a server they control by crafting these references to include sensitive data retrieved from Copilot's context. Notably and critically, according to Aim's researchers, all of this happens behind the scenes. Users themselves don't have to open the email or click on anything with Copilot's automated processing being enough to trigger the entire chain, hence the zero-click designation for EchoLeak. Aim released a working proof-of-concept showing that data such as internal memos, strategic documents, or even personal identifiers could be leaked without any visual indication to the user or system administrators. Microsoft, in response, has acknowledged the issue but did note that it has found no evidence of the vulnerability being exploited in the wild. While it's positive that the vulnerability wasn't exploited in the wild, the fact that AI services can be vulnerable to zero-click attacks opens a Pandora's Box of future risk, be it that some cybersecurity experts are not surprised by the methodology's emergence. "If you didn't expect something like this to happen, you haven't been paying attention," Tim Erlin, security strategist at application programming interface security firm Wallarm Inc., told SiliconANGLE via email. "While the specific technique might not have been predictable, the idea that researchers wouldn't find some kind of meaningful, novel exploit for the ever-expanding AI attack surface is ridiculous," explains Erlin. "It was bound to happen. Microsoft and the researchers appear to have handled this one well, with responsible disclosure and a fix." Ensar Seker, chief information security officer at extended threat intelligence company SOCRadar Cyber Threat Intelligence Inc., warns that the disclosure has "serious implications for NATO, government, defense, healthcare and anyone using enterprise AI assistants: attackers no longer need to compromise user credentials or rely on phishing. They can manipulate a trusted AI interface directly." "What stands out especially is that this isn't limited to Copilot. As Aim Labs warns, any RAG-based agent that processes untrusted inputs alongside internal data is vulnerable to scope violations," added Seker. "This signals a broader architectural flaw across the AI assistant space - one that demands runtime guardrails, stricter input scoping and inflexible separation between trusted and untrusted content."
[6]
Researchers Just Found a Big Security Flaw in Microsoft's AI. Here's Why Businesses Should Worry
The type of security flaw found in Copilot is particularly dangerous because it means a user doesn't have to make a deliberate action to trigger the flaw to allow a hacker into a system. Most people are familiar with simpler types of hacks that rely on someone clicking on a phishing email, or following a link from a malicious text message. These are actions you can train workers to avoid. A zero-click hack doesn't need anyone to make a mistake to make their company computers vulnerable. The flaw was uncovered by researchers at Aim Security, a small Tel Aviv-based Ai security platform. It's especially dangerous because of how Copilot works: since it's built directly into useful widely used apps in the workplace, users trust the AI with access to their emails, documents and other information. In a corporate setting this could mean Copilot can "see" sensitive legal, financial or business planning documents. The flaw in Copilot that Aim uncovered would have allowed hackers to get past protections Microsoft has built into the AI, potentially allowing user information to leak out. And it's particularly devious, since it would rely on the intelligence of Copilot itself to dig through, say, a user's emails to identify and extract sensitive data. But before you panic, and block your staff from using some of the useful Copilot systems -- like helping them write complex business emails -- you should know that Microsoft has already been informed of the vulnerability, and has fixed it. "We have already updated our products to mitigate this issue and no customer action is required. We are also implementing additional defense-in-depth measures to further strengthen our security posture," the company said in a statement.
[7]
Microsoft 365 Copilot Could Be Hacked Without Any User Input: Research
This is said to be the first zero-click exploit on a major AI chatbot Microsoft 365 Copilot, the enterprise-focused artificial intelligence (AI) chatbot that works across Office apps, was reportedly vulnerable to a zero-click vulnerability. As per a cybersecurity firm, a flaw existed in the chatbot that could be triggered via a simple text email to hack into it. Once the chatbot was hacked, it could then be made to retrieve sensitive information from the user's device and share it with the attacker. Notably, the Redmond-based tech giant said that it has fixed the vulnerability, and that no users were affected by it. In a blog post, AI security startup Aim Security detailed the zero-click exploit and how the researchers were able to execute it. Notably, a zero-click attack refers to hacking attempts where the victim does not have to download a file or click on a URL for the attack to be triggered. A simple act such as opening an email can initiate the hacking attempt. The findings by the cybersecurity firm highlights the risks that AI chatbots pose, especially if they have agentic capability, which refers to the ability of an AI chatbot to access tools to execute actions. For example, Copilot being able to connect to OneDrive and retrieving data from a file stored there to answer a user query would be considered an agentic action. As per the researchers, the attack was initiated using cross-prompt injection attack (XPIA) classifiers. These is a form of prompt injection, where an attacker manipulates the input across multiple prompts, sessions, or messages to influence or control the behaviour of an AI system. The malicious message is often added via attached files, hidden or invisible text, or embedded instructions. The researchers shared the XPIA bypass via email. However, they also showed the same could be done via an image (embedding the malicious instruction in the alt text), and even via Microsoft Team by excuting a GET request for a malicious URL. While the first two methods still require the user to ask a query about the email or the image, the latter does not require users to take any particular action for the hacking attempt to begin. "The attack results in allowing the attacker to exfiltrate the most sensitive data from the current LLM context - and the LLM is being used against itself in making sure that the MOST sensitive data from the LLM context is being leaked, does not rely on specific user behavior, and can be executed both in single-turn conversations and multi-turn conversations," the post added. Notably, a Microsoft spokesperson acknowledged the vulnerability and thanked Aim for identifying and reporting the issue, according to a Fortune report. The issue has now been fixed, and no users were affected by it, the spokesperson told the publication.
[8]
Hackers Could Steal Data From Microsoft 365 Copilot Without Phishing Or Malware, Says AI Startup -- 'EchoLeak' Flaw Took 5 Months To Fix - Alphabet (NASDAQ:GOOG), Alphabet (NASDAQ:GOOGL)
A critical security flaw was discovered in Microsoft MSFT 365 Copilot, an AI tool integrated into various Microsoft Office applications. This vulnerability could potentially lead to attacks on sensitive data. What Happened: The security flaw in Microsoft 365 Copilot was identified by AI security startup Aim Security. This flaw, named "EchoLeak," could be exploited by hackers to access sensitive information without the need for user interaction. With Microsoft 365 Copilot, an attacker could launch an attack merely by sending an email to a user -- no phishing or malware required. This could potentially expose confidential and proprietary data. Adir Gruss, co-founder and CTO of Aim Security, told Fortune that the EchoLeak flaw is not just a regular security bug. It can have broader implications that extend beyond Copilot, rooted in a fundamental design flaw inherent to LLM-based AI agents. Gruss stated that if he led a company working with AI agents, he would be 'terrified'. He also attributed security flaws as the reason behind fewer companies adopting AI agents. "They're just experimenting, and they're super afraid." Microsoft told Fortune that it has resolved the issue upon notification, and no customers were affected. However, Gruss said that the Satya Nadella-led company took five months to address the issue. Gruss stated that a lasting solution will necessitate a complete rethinking of how AI agents are designed. SEE ALSO: Mark Cuban Predicts AI Video Will Trigger 'An Explosion' In Face-To-Face Engagement, Events And Jobs. Calls It 'The Milli Vanilli Effect' Why It Matters: The discovery of this security flaw raises concerns about the potential risks associated with AI agents. This incident highlights the need for robust security measures to protect sensitive data from such vulnerabilities. Today's Best Finance Deals Earlier this year, Microsoft CEO Satya Nadella had introduced a new tool that allows anyone to create AI agents that can perform tasks on desktop and web applications. This recent security flaw in the Copilot AI tool underscores the importance of ensuring the security of AI agents. Meanwhile, tech giant Google GOOGL GOOG has been deploying its on-device AI model to detect and block fraudulent websites in real-time, significantly expanding its security capabilities. This proactive approach to AI security could serve as a model for other companies looking to enhance the security of their AI systems. According to Benzinga Edge Stock Rankings, Microsoft has a growth score of 90.75% and a quality rating of 63.33%. Click here to see how it compares to other leading tech companies. On a year-to-date basis, Microsoft stock surged 12.9%. READ MORE: Sundar Pichai Says He Gets Frustrated, Angry, But Realized Losing His Temper Doesn't Lead To Productivity Because Sometimes 'Silence Can Deliver' More: Here's What He Learned Because Of Soccer Image via Shutterstock Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors. GOOGAlphabet Inc$176.75-1.14%Stock Score Locked: Want to See it? Benzinga Rankings give you vital metrics on any stock - anytime. Reveal Full ScoreEdge RankingsMomentum45.78Growth88.43Quality85.77Value51.30Price TrendShortMediumLongOverviewGOOGLAlphabet Inc$175.42-1.09%MSFTMicrosoft Corp$475.180.54%Market News and Data brought to you by Benzinga APIs
[9]
Hackers successfully attacked an AI agent, Microsoft fixed the flaw: Here's why it's scary
Fortune's report on EchoLeak reveals how Microsoft's Copilot could be tricked into exposing internal data. It didn't start with a ransom note, there were no system crashes, no screens held hostage. Just an AI assistant, Microsoft Copilot, doing exactly what it was designed to do: be helpful. And according to an exclusive report by Fortune, that's exactly what made it so terrifying. In May 2025, Microsoft quietly patched a critical vulnerability in Copilot, its flagship AI tool embedded across Windows, Office, Teams, and more. Labeled CVE-2025-32711, the fix addressed an issue that, in Microsoft's words, had not been exploited in the wild. Customers were "not affected." But, as the report by Fortune suggests, the vulnerability had a name, EchoLeak, and behind it, a sobering truth: hackers had figured out how to manipulate an AI assistant into leaking private data without ever breaching a system. No malware. No phishing. Just clever words. Also read: What is Gentle Singularity: Sam Altman's vision for the future of AI? Imagine whispering a question into a room where someone else is speaking. Now imagine the assistant in that room repeating their words back to you by accident. That's, according to the report, what EchoLeak is in essence. Microsoft Copilot draws from both public and private sources to generate context-aware answers. If you ask it, "What's the latest on Project Zephyr?" it might scan your company's internal documents, emails, and calendar invites to provide a tailored summary. That's the magic. But, as Fortune highlights, that's also the danger. Researchers discovered that by embedding certain cues into a document or webpage, a hacker could trick Copilot into treating external content as a request to surface internal data. The AI, oblivious to the intent, obliges, echoing out information that was never meant to leave the company walls. This wasn't theoretical. It worked. To Microsoft's credit, the response was swift. The vulnerability was patched server-side with no action needed from users. The company said it had seen no evidence of active exploitation, and it began implementing deeper security checks across the Copilot infrastructure. But the alarm, as the report implies, wasn't about what happened. It was about what could have. Security researchers, in the Fortune story, describe EchoLeak as the first clear instance of a "scope violation" in a live AI agent: a breakdown in how the AI distinguishes between trusted internal context and untrusted external input. Also read: Microsoft rolls out AI voice assistant for Windows 11 insiders: Here's how it works That's not a bug, experts in the report say that's a fundamental design flaw. We're teaching these systems to help us, but, as Fortune's findings suggest, we haven't taught them when to say no. EchoLeak shows how easily a helpful assistant becomes a liability. What's most unsettling about EchoLeak isn't the technical jargon, it's the everyday familiarity of the scenario. A junior employee opens a shared document. An executive glances at a browser window. A Teams meeting references a link pasted in chat. Copilot is running in the background, silently helpful. And then, without any malice or even awareness, it blurts out the wrong thing to the wrong person. There's no evil genius behind the keyboard. Just a clever prompt in the wrong place, and an AI that doesn't know any better. That's what makes this scary: there's no breach, no alert, no trace. Just a soft, almost invisible betrayal. Microsoft has spent years positioning Copilot as the future of work, an intelligent partner that can write emails, summarize meetings, generate code, and crunch data. It's a vision that's rapidly becoming reality. But EchoLeak, as detailed by Fortune, shows that trusting an AI with context is not the same as controlling it. The line between helpful and harmful isn't always drawn in code, it's drawn in judgment. And large language models, no matter how sophisticated, don't have that judgment. They don't know when they're crossing a line. They don't know what not to say. EchoLeak didn't break Microsoft. It didn't even shake the cloud, but it shook the foundations of how we think about AI in the workplace. This is a blind spot shared by every company racing to embed AI into their platforms. Microsoft just happened to be the first to look up and realize the wall had a crack. Only this time, the assistant didn't need to be hacked. It just needed to be asked the wrong question at the right time.
Share
Copy Link
Researchers uncover a critical zero-click AI vulnerability in Microsoft 365 Copilot, allowing attackers to exfiltrate sensitive data without user interaction. The flaw, dubbed "EchoLeak," highlights new security risks in AI-integrated systems.
In a groundbreaking discovery, researchers at Aim Labs have uncovered the first known zero-click artificial intelligence (AI) vulnerability, dubbed "EchoLeak." This critical flaw, identified in January 2025, affects Microsoft 365 Copilot, an AI assistant integrated into various Office applications 1.
EchoLeak is classified as an "LLM Scope Violation," a new class of vulnerabilities that can cause large language models (LLMs) to leak privileged internal data without user intent or interaction 2. The attack exploits the Retrieval-Augmented Generation (RAG) engine used by Copilot, allowing attackers to exfiltrate sensitive information from a user's context silently.
Source: Bleeping Computer
The attack begins with a malicious email containing a hidden prompt injection, crafted to instruct the LLM to extract and exfiltrate sensitive internal data. This email, formatted to look like a typical business document, bypasses Microsoft's XPIA (cross-prompt injection attack) classifier protections 1.
When a user later interacts with Copilot, the RAG engine retrieves the malicious email due to its apparent relevance. The injected prompt then "tricks" the LLM into pulling sensitive data and inserting it into a crafted link or image 3.
Source: The Hacker News
Aim Labs discovered that certain markdown image formats cause the browser to automatically request the image, sending the URL (including embedded data) to the attacker's server. While Microsoft's Content Security Policy (CSP) blocks most external domains, Microsoft Teams and SharePoint URLs are trusted and can be abused to exfiltrate data without issue 1.
Microsoft assigned the vulnerability the identifier CVE-2025-32711, rating it critical with a CVSS score of 9.3 out of 10 4. The company addressed the issue server-side in May 2025, requiring no action from users. Microsoft stated that there is no evidence of real-world exploitation, and no customers were impacted 2.
The discovery of EchoLeak has significant implications for AI security, particularly for NATO, government, defense, healthcare, and enterprises using AI assistants. Ensar Seker, CISO at SOCRadar, warns that "attackers no longer need to compromise user credentials or rely on phishing. They can manipulate a trusted AI interface directly" 5.
Source: Benzinga
As AI integration deepens in business workflows, experts warn that traditional defenses may be overwhelmed. Tim Erlin, a security strategist at Wallarm, noted that such vulnerabilities were "bound to happen" given the expanding AI attack surface 5.
To mitigate similar risks, enterprises are advised to:
The EchoLeak vulnerability serves as a wake-up call for the AI industry, highlighting the need for robust security measures in AI-integrated systems. As AI assistants become more prevalent, addressing these vulnerabilities will be crucial to maintain trust and security in AI technologies.
Ilya Sutskever, co-founder of Safe Superintelligence (SSI), assumes the role of CEO following the departure of Daniel Gross to Meta. The move highlights the intensifying competition for top AI talent among tech giants.
6 Sources
Business and Economy
1 hr ago
6 Sources
Business and Economy
1 hr ago
Google's advanced AI video generation tool, Veo 3, is now available worldwide to Gemini app 'Pro' subscribers, including in India. The tool can create 8-second videos with audio, dialogue, and realistic lip-syncing.
7 Sources
Technology
17 hrs ago
7 Sources
Technology
17 hrs ago
A federal court has upheld an order requiring OpenAI to indefinitely retain all ChatGPT logs, including deleted chats, as part of a copyright infringement lawsuit by The New York Times and other news organizations. This decision raises significant privacy concerns and sets a precedent in AI-related litigation.
3 Sources
Policy and Regulation
9 hrs ago
3 Sources
Policy and Regulation
9 hrs ago
Microsoft's Xbox division faces massive layoffs and game cancellations amid record profits, with AI integration suspected as a key factor in the restructuring.
4 Sources
Business and Economy
9 hrs ago
4 Sources
Business and Economy
9 hrs ago
Google's AI video generation tool, Veo 3, has been linked to a surge of racist and antisemitic content on TikTok, raising concerns about AI safety and content moderation on social media platforms.
5 Sources
Technology
17 hrs ago
5 Sources
Technology
17 hrs ago