Microsoft Copilot bug exposed confidential emails to AI for weeks despite security labels

Reviewed by Nidhi Govil


Microsoft has confirmed that a bug allowed its Copilot AI to read and summarize confidential emails beginning in late January, bypassing data loss prevention policies designed to protect sensitive information. The issue affects Microsoft 365 business customers using Copilot Chat; the company is now rolling out a fix but has declined to specify how many users were impacted.

Microsoft Copilot Bug Bypassed Critical Security Protections

Microsoft has acknowledged that a Microsoft Copilot bug allowed its AI assistant to access confidential emails for several weeks, creating significant AI data privacy concerns for business customers. The issue, tracked as CW1226324 and first detected on January 21, affects the Copilot "work tab" chat feature within Microsoft 365 [2]. The bug enabled the AI-driven chatbot to read and summarize confidential emails stored in users' Sent and Drafts folders, even when those messages carried confidentiality labels specifically designed to prevent automated tools from processing them [1].

Source: Inc.

The vulnerability fundamentally undermined the data loss prevention policies that organizations depend on to safeguard sensitive information. Microsoft stated that "users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," confirming that the system handled those messages despite DLP configurations [2]. This means the AI could summarize confidential emails containing business contracts, legal correspondence, government investigations, and personal medical information, precisely the type of content users mark as confidential to keep out of large language models and training data [3].
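To make the intended behavior concrete, here is a minimal Python sketch of how a sensitivity-label gate is supposed to work: the label is checked against the DLP policy before any content reaches the model, and excluded labels stop processing outright. The names here (Message, DLP_EXCLUDED_LABELS, summarize_for_copilot) are hypothetical illustrations, not Microsoft's implementation.

```python
# Minimal sketch (not Microsoft's code): how a sensitivity-label gate is
# *supposed* to work before an AI assistant ingests a message. All names
# here are hypothetical.

from dataclasses import dataclass

# Labels that, per a hypothetical DLP policy, must never reach the model.
DLP_EXCLUDED_LABELS = {"Confidential", "Highly Confidential"}

@dataclass
class Message:
    subject: str
    body: str
    sensitivity_label: str | None  # None means unlabeled

def allowed_for_ai(msg: Message) -> bool:
    """Return True only if the DLP policy permits AI processing."""
    return msg.sensitivity_label not in DLP_EXCLUDED_LABELS

def summarize_for_copilot(msg: Message) -> str:
    if not allowed_for_ai(msg):
        # Correct behavior: refuse before any content leaves the mailbox.
        return "[blocked by sensitivity label]"
    return f"Summary of '{msg.subject}': {msg.body[:80]}..."

draft = Message("Q3 legal strategy", "Privileged details ...", "Confidential")
print(summarize_for_copilot(draft))  # -> [blocked by sensitivity label]
```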

How the Code Error Bypassed Data Loss Prevention

According to Microsoft's service alert, a code error allowed items in the Sent Items and Drafts folders to be processed by Copilot even when confidential labels were in place [2]. Copilot Chat, which Microsoft launched in September 2025 for paying Microsoft 365 business customers across Word, Excel, PowerPoint, Outlook, and OneNote, was designed to let users interact with AI agents while respecting organizational security boundaries [4]. This bug, however, shows how the AI assistant failed to honor those boundaries, raising questions about the reliability of automated security controls when AI systems are involved.
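Microsoft has not published the faulty code, but the failure mode it describes, label enforcement that Sent Items and Drafts fall outside of, can be illustrated with a hypothetical sketch. Everything below (the folder strings, ENFORCED_FOLDERS, buggy_allowed_for_ai) is invented for illustration only.

```python
# Hypothetical illustration of the *class* of bug described: label
# enforcement keyed to folders, with Sent Items and Drafts accidentally
# left out of the enforced set. This is not Microsoft's actual code.

ENFORCED_FOLDERS = {"Inbox", "Archive"}  # bug: "Sent Items", "Drafts" missing

def buggy_allowed_for_ai(folder: str, label: str | None) -> bool:
    if folder in ENFORCED_FOLDERS:
        return label not in {"Confidential", "Highly Confidential"}
    # Items outside the enforced set skip the label check entirely,
    # so a Confidential draft slips through to the model.
    return True

print(buggy_allowed_for_ai("Inbox", "Confidential"))   # False (correct)
print(buggy_allowed_for_ai("Drafts", "Confidential"))  # True  (the bug)
```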

Source: PCWorld

The company began rolling out a fix in early February and is continuing to monitor the deployment while reaching out to a subset of affected users to verify its effectiveness [2]. Yet Microsoft has not provided a timeline for full remediation and, critically, has declined to disclose how many users or organizations were affected [1]. The company noted only that "the scope of impact may change" as investigations continue, a statement that offers little reassurance to businesses concerned about what sensitive data may have been exposed [3].

Broader Implications for AI Security in Enterprise Software

This incident arrives at a sensitive moment for AI adoption in corporate environments. Earlier this week, the European Parliament's IT department blocked built-in AI features on lawmakers' work-issued devices, citing concerns that AI tools could upload potentially confidential correspondence to the cloud [1]. The timing underscores growing institutional wariness about integrating AI systems into workflows that handle sensitive information.

For organizations evaluating whether to deploy Copilot across their workforce, this bug highlights the tension between AI convenience and data protection. Admins can track the issue using the CW1226324 identifier, but the incident raises fundamental questions about whether current AI safeguards are sufficient for regulated industries or organizations handling classified information [1]. Microsoft has tagged this as an advisory, typically indicating limited scope, but without transparency about affected user counts, businesses lack the information needed to assess their exposure [2]. As enterprises watch for updates on the fix's completion, the incident serves as a reminder that AI systems require rigorous testing of security boundaries before deployment at scale.
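For admins who script their service-health monitoring, a lookup along the following lines may work. The serviceAnnouncement issues endpoint is standard Microsoft Graph v1.0, but whether this particular advisory is exposed there under the CW1226324 ID is an assumption, and token acquisition is left as a placeholder.

```python
# Hedged sketch: query the Microsoft Graph service-health API for the
# CW1226324 advisory. Assumes the advisory surfaces through the standard
# /admin/serviceAnnouncement/issues endpoint (an assumption) and that you
# hold an app token with the ServiceHealth.Read.All permission.

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # obtain via your usual OAuth client-credentials flow

resp = requests.get(
    f"{GRAPH}/admin/serviceAnnouncement/issues/CW1226324",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
if resp.ok:
    issue = resp.json()
    print(issue.get("title"), "-", issue.get("status"))
else:
    print("Advisory not found or insufficient permissions:", resp.status_code)
```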

Source: TechCrunch
