4 Sources
[1]
Microsoft says Office bug exposed customers' confidential emails to Copilot AI | TechCrunch
Microsoft has confirmed that a bug allowed its Copilot AI to summarize customers' confidential emails for weeks without permission. The bug, first reported by Bleeping Computer, allowed Copilot Chat to read and outline the contents of emails since January, even if customers had data loss prevention policies to prevent ingesting their sensitive information into Microsoft's large language model. Copilot Chat allows paying Microsoft 365 customers to use the AI-powered chat feature in its Office software products, including Word, Excel, and PowerPoint. Microsoft said the bug, trackable by admins as CW1226324, means that draft and sent email messages "with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat." The tech giant said it began rolling out a fix for the bug earlier in February. A spokesperson for Microsoft did not respond to a request for comment, including a question about how many customers are affected by the bug. Earlier this week, the European Parliament's IT department told lawmakers that it blocked the built-in AI features on their work-issued devices, citing concerns that the AI tools could upload potentially confidential correspondence to the cloud.
[2]
Microsoft says bug causes Copilot to summarize confidential emails
Microsoft says a Microsoft 365 Copilot bug has been causing the AI assistant to summarize confidential emails since late January, bypassing data loss prevention (DLP) policies that organizations rely on to protect sensitive information. According to a service alert seen by BleepingComputer, this bug (tracked under CW1226324 and first detected on January 21) affects the Copilot "work tab" chat feature, which incorrectly reads and summarizes emails stored in users' Sent Items and Drafts folders, including messages that carry confidentiality labels explicitly designed to restrict access by automated tools. Copilot Chat (short for Microsoft 365 Copilot Chat) is the company's AI-powered, content-aware chat that lets users interact with AI agents. Microsoft began rolling out Copilot Chat to Word, Excel, PowerPoint, Outlook, and OneNote for paying Microsoft 365 business customers in September 2025. "Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," Microsoft said when it confirmed this issue. "The Microsoft 365 Copilot 'work tab' Chat is summarizing email messages even though these email messages have a sensitivity label applied and a DLP policy is configured." Microsoft has since confirmed that an unspecified code error is responsible and said it began rolling out a fix in early February. As of Wednesday, the company said it was continuing to monitor the deployment and is reaching out to a subset of affected users to verify that the fix is working. "A code issue is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place," Microsoft added. Microsoft has not provided a final timeline for full remediation and has not disclosed how many users or organizations were affected, saying only that the scope of impact may change as the investigation continues. However, this ongoing incident has been tagged as an advisory, a flag commonly used to describe service issues typically involving limited scope or impact.
[3]
Copilot bug allows 'AI' to read confidential Outlook emails
Microsoft is rolling out a fix, but the timeline remains unclear, raising significant concerns about AI reliability and data privacy protection. For all its supposed intelligence, "AI" seems to make a lot of stupid mistakes -- for example, scanning and summarizing emails marked "confidential" in Microsoft Outlook. That's the latest issue with Microsoft's Copilot assistant, according to a bug report from Microsoft itself. Copilot Chat in Microsoft 365 accounts is able to read and summarize emails in the Sent and Drafts folders of Outlook, even if they're marked confidential... a mark that's specifically designed to keep automated tools out. BleepingComputer summarizes the issue labeled "CW1226324" and says that a fix is being rolled out to affected accounts. There's no timeline for when the fix will be available for all users. (Unfortunately, the full report isn't available for viewing by the general public -- you need Microsoft 365 admin privileges just to see it.) The problem is, as you might guess, alarming. The confidential feature in Outlook is often used for things like business contracts, legal correspondence, government or police investigations, and personal medical information. It's the kind of stuff you absolutely do not want scanned by a large language model, and definitely not sucked up into its training data, as is so often the case. Microsoft isn't saying how many users are affected, but it is saying that "the scope of impact may change" as it investigates the problem. How comforting. That'll really get people to start using Copilot, right?
[4]
A Microsoft Copilot Bug Has Been Exposing Confidential Emails -- Are You Affected?
A bug has been causing Microsoft Copilot to read and summarize users' confidential emails. The issue has been ongoing since late January, Microsoft said, due to a bug that bypasses data loss prevention (DLP) policies meant to protect sensitive information. "Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," the company said, according to BleepingComputer. Copilot Chat is Microsoft's AI-driven chatbot that allows users to communicate with AI agents. The company launched the feature in September to Microsoft 365 business customers using Word, Excel, PowerPoint, Outlook, and OneNote. The bug targets the Copilot "work tab" feature, which summarizes emails in users' Sent Items and Drafts folders without users' consent. These emails carry confidentiality labels explicitly applied to prevent automated tools from accessing them, according to a service alert viewed by BleepingComputer.
Microsoft has confirmed that a bug allowed its Copilot AI to read and summarize confidential emails since late January, bypassing data loss prevention policies designed to protect sensitive information. The issue affects Microsoft 365 business customers using Copilot Chat; the company is now rolling out a fix but has declined to specify how many users were impacted.
Microsoft has acknowledged that a Microsoft Copilot bug allowed its AI assistant to access confidential emails for several weeks, creating significant AI data privacy concerns for business customers. The issue, tracked as CW1226324 and first detected on January 21, affects the Copilot "work tab" chat feature within Microsoft 365 [2]. The bug enabled the AI-driven chatbot to read and summarize confidential emails stored in users' Sent and Drafts folders, even when those messages carried confidentiality labels specifically designed to prevent automated tools from processing them [1].
The vulnerability fundamentally undermined the data loss prevention policies that organizations depend on to safeguard sensitive information. Microsoft stated that "users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," confirming that the system handled these messages despite DLP configurations being in place [2]. This means the AI could summarize confidential emails containing business contracts, legal correspondence, government investigations, and personal medical information, which is precisely the type of content users mark as confidential to keep out of large language models and training data [3].
According to Microsoft's service alert, a code error is responsible for allowing items in the Sent Items and Drafts folders to be processed by Copilot even when confidential labels are in place [2]. Copilot Chat, which Microsoft launched in September 2025 to paying Microsoft 365 business customers across Word, Excel, PowerPoint, Outlook, and OneNote, was designed to let users interact with AI agents while respecting organizational security boundaries [4]. However, this bug demonstrates how the AI assistant failed to honor those boundaries, raising questions about the reliability of automated security controls when AI systems are involved.
The company began rolling out a fix in early February and is continuing to monitor the deployment while reaching out to a subset of affected users to verify its effectiveness [2]. Yet Microsoft has not provided a final timeline for full remediation and, critically, has declined to disclose how many users or organizations were affected by the bug [1]. The company noted only that "the scope of impact may change" as investigations continue, a statement that offers little reassurance to businesses concerned about what sensitive data may have been exposed [3].
This incident arrives at a sensitive moment for AI adoption in corporate environments. Earlier this week, the European Parliament's IT department blocked built-in AI features on lawmakers' work-issued devices, citing concerns that AI tools could upload potentially confidential correspondence to the cloud [1]. The timing underscores growing institutional wariness about integrating AI systems into workflows that handle sensitive information.
For organizations evaluating whether to deploy Copilot across their workforce, this bug highlights the tension between AI convenience and data protection. Admins can track the issue using the CW1226324 identifier, but the incident raises fundamental questions about whether current AI safeguards are sufficient for regulated industries or organizations handling classified information [1]. Microsoft has tagged this incident as an advisory, typically indicating limited scope, but without transparency about affected user counts, businesses lack the information needed to assess their exposure [2]. As enterprises watch for updates on the fix's completion, the incident serves as a reminder that AI systems require rigorous testing of security boundaries before deployment at scale.
Summarized by Navi