Bug exposed confidential emails to Copilot AI, bypassing Microsoft's data protection policies

Reviewed by Nidhi Govil


Microsoft confirmed a coding bug allowed Copilot AI to access and summarize confidential emails for nearly a month, despite data loss prevention policies designed to shield sensitive information. The flaw, tracked as CW1226324, affected Microsoft 365 Copilot Chat users since January, raising concerns about AI tool security as organizations struggle to keep pace with rapid feature rollouts.

Copilot AI Bug Exposed Confidential Emails Despite Security Measures

Microsoft has acknowledged a significant bug that allowed its Copilot AI to access and summarize confidential emails without permission for nearly a month. The flaw, first reported by Bleeping Computer [1], enabled Microsoft 365 Copilot Chat to read and outline email contents since January, even when customers had implemented data loss prevention (DLP) policies specifically designed to prevent their sensitive information from being ingested into Microsoft's large language model [2].

Source: CXOToday

The issue, trackable by administrators as CW1226324, was first discovered on January 21, 2026, when customers reported that draft and sent email messages with a confidential label applied were being incorrectly processed by the AI tool [3]. Microsoft 365 Copilot Chat allows paying customers to use the AI-powered chat feature across Office software products, including Word, Excel, and PowerPoint [1].

Data Loss Prevention Policies Failed to Protect Sensitive Information

The bug represents a serious breach of trust for organizations relying on DLP controls to safeguard their sensitive communications. Microsoft's own documentation explains that sensitivity labels can be applied manually or automatically to files as a way to comply with organizational information protection policies [3]. However, the code issue allowed items in the Sent Items and Drafts folders to be picked up by Copilot even though confidential labels were in place, effectively bypassing the data loss prevention policies that should have blocked access [2].

Source: The Register

The Work tab of Copilot Chat, which began rolling out to Microsoft 365 business users in September, was specifically affected by this vulnerability [2]. The flaw raised immediate data security concerns across multiple sectors, with reports suggesting the UK's National Health Service (NHS) was among the affected organizations [2]. The NHS confirmed that while draft or sent emails were processed, patient information was not exposed to the AI tool [5].

Microsoft Rolls Out Fix But Questions Remain

Microsoft began rolling out a fix for the bug earlier in February and stated it is monitoring the effectiveness of the remediation while reaching out to affected customers [1]. A Microsoft spokesperson told BBC News that the company "identified and addressed an issue" and emphasized that "our access controls and data protection policies remained intact," though acknowledged "this behaviour did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access" [5].

The tech giant clarified that the bug provided access to summarized information only to those who were already authorized to see it [4]. However, Microsoft did not disclose how many customers were affected by the data leakage incident [1].

Broader Implications for AI Tool Adoption in Enterprise Environments

The incident highlights growing privacy concerns surrounding Microsoft's integration of AI tools into workplace environments. Earlier this week, the European Parliament's IT department blocked built-in AI features on lawmakers' work-issued devices, citing concerns that AI tools could upload potentially confidential correspondence to the cloud [1]. This sentiment reflects a broader trend, with 72 percent of S&P 500 companies citing AI as a material risk in regulatory filings [3].

Source: Digit

Nader Henein, data protection and AI governance analyst at Gartner, warned that "this sort of fumble is unavoidable" given the frequency of new AI capabilities being released. He noted that organizations using these AI products often lack the tools needed to protect themselves and manage each new feature, adding that "the amount of pressure caused by the torrent of unsubstantiated AI hype makes that near-impossible" [5].

Microsoft Expands Data Controls Across All Storage Locations

In response to customer feedback and the recent security lapse, Microsoft announced it is expanding data controls to block Copilot from processing confidential Word, Excel, and PowerPoint documents regardless of their location [4]. Currently, Microsoft Purview DLP policies apply only to files stored in SharePoint or OneDrive, not to those stored on local devices. The change will be deployed through the Augmentation Loop (AugLoop) Office component between late March and late April 2026 [4].

Cyber-security expert Professor Alan Woodward of the University of Surrey emphasized the importance of making such tools private-by-default and opt-in only. "There will inevitably be bugs in these tools, not least as they advance at break-neck speed, so even though data leakage may not be intentional it will happen," he told BBC News [5]. Organizations deploying Microsoft 365 should monitor the rollout of enhanced DLP enforcement and consider whether their current sensitivity labels adequately protect critical business communications from unauthorized AI processing.
