16 Sources
[1]
Microsoft says Office bug exposed customers' confidential emails to Copilot AI | TechCrunch
Microsoft has confirmed that a bug allowed its Copilot AI to summarize customers' confidential emails for weeks without permission. The bug, first reported by Bleeping Computer, allowed Copilot Chat to read and outline the contents of emails since January, even if customers had data loss prevention policies to prevent ingesting their sensitive information into Microsoft's large language model. Copilot Chat allows paying Microsoft 365 customers to use the AI-powered chat feature in its Office software products, including Word, Excel, and PowerPoint. Microsoft said the bug, trackable by admins as CW1226324, means that draft and sent email messages "with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat." The tech giant said it began rolling out a fix for the bug earlier in February. A spokesperson for Microsoft did not respond to a request for comment, including a question about how many customers are affected by the bug. Earlier this week, the European Parliament's IT department told lawmakers that it blocked the built-in AI features on their work-issued devices, citing concerns that the AI tools could upload potentially confidential correspondence to the cloud.
[2]
Microsoft Bug Let Copilot Access Confidential Emails Without Consent
Microsoft has admitted that a coding bug accidentally allowed Copilot Chat to access and summarize confidential emails. As Bleeping Computer reports, the flaw bypasses data loss prevention (DLP) policies enabled by customers who wish to keep their data shielded from Microsoft's AI. The issue, first reported on Jan. 21, affects the Work tab of Copilot Chat, a feature that began rolling out to Microsoft 365 business users via Word, Excel, PowerPoint, Outlook, and OneNote in September. Microsoft has traced the issue to a coding bug in Copilot. "A code issue is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place," the company tells BleepingComputer. Users can mark their files and emails as sensitive, or let Microsoft 365 do it automatically. Once the label is applied, Microsoft is supposed to keep the data "compliant with your organization's information protection policies." A fix for the ongoing issue began rolling out earlier this month, though the company hasn't clarified when it will be fully resolved. It is still monitoring the fix and reaching out to affected users to check if it is working. The number of organizations affected by the bug is unclear, but it appears the UK's National Health Service (NHS) is among them. Microsoft's integration of AI features into its products has been anything but smooth. Features like Windows Recall and Copilot Vision have raised privacy concerns, and the company is also reportedly planning to scale back Copilot across Windows 11 apps.
[3]
Copilot Chat bug bypasses DLP on 'Confidential' email
The bot couldn't keep its prying eyes away. Microsoft 365 Copilot Chat has been summarizing emails labeled "confidential" even when data loss prevention policies were configured to prevent it. Though there are data sensitivity labels and data loss prevention policies in place for email, Copilot has been ignoring those and talking about secret stuff in the Copilot Chat tab. It's just this sort of scenario that has led 72 percent of S&P 500 companies to cite AI as a material risk in regulatory filings. Redmond, earlier this month, acknowledged the problem in a notice to Office admins that's tracked as CW1226324, as reposted by the UK's National Health Service support portal. Customers are said to have reported the problem on January 21, 2026. "Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," the notice says. "The Microsoft 365 Copilot 'work tab' Chat is summarizing email messages even though these email messages have a sensitivity label applied and a DLP policy is configured." Microsoft explains that sensitivity labels can be applied manually or automatically to files as a way to comply with organizational information security policies. These labels may function differently in different applications, the company says. The software giant's documentation makes clear that these labels do not function in a consistent way. "Although content with the configured sensitivity label will be excluded from Microsoft 365 Copilot in the named Office apps, the content remains available to Microsoft 365 Copilot for other scenarios," the documentation explains. "For example, in Teams, and in Microsoft 365 Copilot Chat." DLP, implemented through applications like Microsoft Purview, is supposed to provide policy support to prevent data loss. "DLP monitors and protects against oversharing in enterprise apps and on devices," Microsoft explains. "It targets Microsoft 365 locations, like Exchange and SharePoint, and locations you add, like on-premises file shares, endpoint devices, and non-Microsoft cloud apps." In theory, DLP policies should be able to affect Microsoft 365 Copilot and Copilot Chat. But that hasn't been happening in this instance. The root cause is said to be "a code issue [that] is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place." Microsoft did not immediately respond to a request for comment. The notice says the company is in the process of remediating the issue and is contacting affected customers to check on the effectiveness of the fix. A remediation timeline is planned at some point. ®
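The label mechanics described above can be made concrete with a short sketch. The Python snippet below checks an exported email (.eml) for a Microsoft Information Protection sensitivity label. It is a minimal illustration only: the "msip_labels" internet header name and the "MSIP_Label_<guid>_Enabled" key format are assumptions about how labeled mail is commonly stamped, not a documented contract, and the sample filename is hypothetical.

```python
# Minimal sketch: detect a Microsoft Information Protection (MIP)
# sensitivity label on an exported email (.eml). The "msip_labels"
# header name and the "MSIP_Label_<guid>_Enabled" key format are
# assumptions for illustration, not a documented contract.
from email import policy
from email.parser import BytesParser
from pathlib import Path

def read_label_header(eml_path: str) -> dict:
    """Return the key=value pairs from the msip_labels header, if any."""
    with Path(eml_path).open("rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)
    raw = msg.get("msip_labels", "")
    pairs = {}
    for part in str(raw).split(";"):
        if "=" in part:
            key, _, value = part.strip().partition("=")
            pairs[key] = value
    return pairs

if __name__ == "__main__":
    label = read_label_header("sample_confidential.eml")  # hypothetical file
    enabled = [k for k, v in label.items()
               if k.endswith("_Enabled") and v.lower() == "true"]
    print("Labeled:" if enabled else "No sensitivity label found.", enabled)
```

The point of the sketch is that the label travels with the message itself, which is why an AI pipeline that skips the label check can surface content the rest of the stack treats as restricted.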
[4]
Microsoft adds Copilot data controls to all storage locations
Microsoft is expanding data loss prevention (DLP) controls to block the Microsoft 365 Copilot AI assistant from processing confidential Word, Excel, and PowerPoint documents, regardless of their location. Currently, Microsoft Purview DLP policies apply only to files stored in SharePoint or OneDrive, but not to those stored on local devices. This change will be deployed through the Augmentation Loop (AugLoop) Office component between late March and late April 2026 to ensure that DLP controls apply to all Office documents, whether they are stored locally, in SharePoint, or OneDrive. "This enhancement responds to customer feedback requesting more consistent protection coverage across local and cloud-based file locations," Microsoft said in a message center update. Once the change is deployed, Copilot will not be able to read or process Word, Excel, or PowerPoint documents that are labeled as restricted by DLP controls. Microsoft also stated that the changes will be automatically enabled for organizations with DLP policies configured to block Copilot from processing sensitivity-labeled content, without requiring any administrative action or changes. "This update does not modify Copilot capabilities. Instead, Office clients and AugLoop have been enhanced so AugLoop can read a file's sensitivity label directly from the client," Microsoft added. "Today, AugLoop retrieves the label by calling Microsoft Graph using the file's SharePoint or OneDrive URL, which limits DLP enforcement to files stored in OneDrive and SharePoint. By enabling the client to provide the label, DLP enforcement now applies uniformly across all storage locations, including local files." This comes on the heels of a software bug (described by Microsoft as a "code issue") that allowed Microsoft 365 Copilot Chat (the company's AI-powered, content-aware chat that lets users interact with AI agents) to read and summarize confidential emails in users' Sent Items and Drafts folders for nearly a month despite the emails being protected by active data loss prevention policies and labeled as confidential. The bug, which was first discovered on January 21, affected the Copilot "work tab" chat functionality, which mistakenly accessed and summarized emails stored in users' Sent Items and Drafts folders, including those labeled confidential and intended to be protected from automated tools by explicit confidentiality labels. In a statement to BleepingComputer, Microsoft explained that the bug provided access to the summarized information only to those who were already authorized to see it, but that the "behavior did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access."
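To make the enforcement gap Microsoft describes easier to see, here is a small, self-contained Python model of the two label-lookup paths: the old URL-keyed lookup that only knows about cloud-hosted files, and the new client-supplied label that covers local files too. All names here are illustrative; this is a sketch of the described architecture, not the actual AugLoop or Graph interface.

```python
# Toy model of the enforcement gap described above: a label lookup keyed
# on a cloud URL misses local files, while a client-supplied label covers
# every storage location. All names are illustrative, not the real
# AugLoop/Graph interface.
from dataclasses import dataclass
from typing import Callable, Optional

# Pretend Graph-side index: labels are known only for cloud-hosted files.
CLOUD_LABELS = {"https://contoso.sharepoint.com/doc1.docx": "Confidential"}

@dataclass
class OfficeFile:
    path: str                    # local path or SharePoint/OneDrive URL
    client_label: Optional[str]  # label as read by the Office client

def label_via_url(f: OfficeFile) -> Optional[str]:
    """Old path: resolve the label from the cloud URL only."""
    return CLOUD_LABELS.get(f.path)

def label_via_client(f: OfficeFile) -> Optional[str]:
    """New path: trust the label the client read from the file itself."""
    return f.client_label

def copilot_may_process(f: OfficeFile,
                        lookup: Callable[[OfficeFile], Optional[str]]) -> bool:
    """Block processing whenever the resolved label is Confidential."""
    return lookup(f) != "Confidential"

local = OfficeFile(r"C:\docs\doc2.docx", client_label="Confidential")
print(copilot_may_process(local, label_via_url))     # True: local file slips through
print(copilot_may_process(local, label_via_client))  # False: blocked everywhere
```

In this simplified model, the URL-keyed lookup returns nothing for the local file, so the block never triggers; reading the label from the client closes that path, which is the behavior Microsoft says the AugLoop change delivers.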
[5]
Microsoft Copilot Chat error sees confidential emails exposed to AI tool
Microsoft has acknowledged an error causing its AI work assistant to access and summarise some users' confidential emails by mistake. The tech giant has pushed Microsoft 365 Copilot Chat as a secure way for workplaces and their staff to use its generative AI chatbot. But it said a recent issue caused the tool to surface information to some enterprise users from messages stored in their drafts and sent email folders - including those marked as confidential. Microsoft says it has rolled out an update to fix the issue, and that it "did not provide anyone access to information they weren't already authorised to see". However, some experts warned the speed at which companies compete to add new AI features meant these kinds of mistakes were inevitable. Copilot Chat can be used within Microsoft programs such as Outlook and Teams, used for emails and chat functions, to get answers to questions or summarise messages. "We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop," a Microsoft spokesperson told BBC News. "While our access controls and data protection policies remained intact, this behaviour did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access," they added. "A configuration update has been deployed worldwide for enterprise customers." The blunder was first reported by tech news outlet Bleeping Computer, which said it had seen a service alert confirming the issue. It cited a Microsoft notice saying "users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat". The notice added that a work tab within Copilot Chat had summarised email messages stored in a user's drafts and sent folders, even when they had a sensitivity label and a data loss prevention policy configured to prevent unauthorised data sharing. Reports suggest Microsoft first became aware of the error in January. Its notice about the bug was also shared on a support dashboard for NHS workers in England - where the root cause is attributed to a "code issue". A section of the notice on the NHS IT support site implies it has been affected. But it told BBC News the contents of any draft or sent emails processed by Copilot Chat would remain with their creators, and patient information has not been exposed. Enterprise AI tools such as Microsoft 365 Copilot Chat - available to organisations with a Microsoft 365 subscription - often have stricter controls and security protections in place to prevent sharing of sensitive corporate information. But for some experts, the issue still highlights risks of adopting generative AI tools in certain work environments. Nader Henein, data protection and AI governance analyst at Gartner, said "this sort of fumble is unavoidable", given the frequency of "new and novel AI capabilities" being released. He told BBC News organisations using these AI products often lack tools needed to protect themselves and manage each new feature. "Under normal circumstances, organisations would simply switch off the feature and wait till governance caught up," Henein said. "Unfortunately the amount of pressure caused by the torrent of unsubstantiated AI hype makes that near-impossible," he added. Cyber-security expert Professor Alan Woodward of the University of Surrey said it showed the importance of making such tools private-by-default and opt-in only. 
"There will inevitably be bugs in these tools, not least as they advance at break-neck speed, so even though data leakage may not be intentional it will happen," he told BBC News. Sign up for our Tech Decoded newsletter to follow the world's top tech stories and trends. Outside the UK? Sign up here.
[6]
Microsoft says bug causes Copilot to summarize confidential emails
Microsoft says a Microsoft 365 Copilot bug has been causing the AI assistant to summarize confidential emails since late January, bypassing data loss prevention (DLP) policies that organizations rely on to protect sensitive information. According to a service alert seen by BleepingComputer, this bug (tracked under CW1226324 and first detected on January 21) affects the Copilot "work tab" chat feature, which incorrectly reads and summarizes emails stored in users' Sent Items and Drafts folders, including messages that carry confidentiality labels explicitly designed to restrict access by automated tools. Copilot Chat (short for Microsoft 365 Copilot Chat) is the company's AI-powered, content-aware chat that lets users interact with AI agents. Microsoft began rolling out Copilot Chat to Word, Excel, PowerPoint, Outlook, and OneNote for paying Microsoft 365 business customers in September 2025. "Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," Microsoft said when it confirmed this issue. "The Microsoft 365 Copilot 'work tab' Chat is summarizing email messages even though these email messages have a sensitivity label applied and a DLP policy is configured." Microsoft has since confirmed that an unspecified code error is responsible and said it began rolling out a fix in early February. As of Wednesday, the company said it was continuing to monitor the deployment and is reaching out to a subset of affected users to verify that the fix is working. "A code issue is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place," Microsoft added. Microsoft has not provided a final timeline for full remediation and has not disclosed how many users or organizations were affected, saying only that the scope of impact may change as the investigation continues. However, this ongoing incident has been tagged as an advisory, a flag commonly used to describe service issues typically involving limited scope or impact.
[7]
Copilot bug allows 'AI' to read confidential Outlook emails
Microsoft is rolling out a fix, but the timeline remains unclear, raising significant concerns about AI reliability and data privacy protection. For all its supposed intelligence, "AI" seems to make a lot of stupid mistakes -- for example, scanning and summarizing emails marked "confidential" in Microsoft Outlook. That's the latest issue with Microsoft's Copilot assistant, according to a bug report from Microsoft itself. Copilot Chat in Microsoft 365 accounts is able to read and summarize emails in the Sent and Drafts folders of Outlook, even if they're marked confidential... a mark that's specifically designed to keep automated tools out. BleepingComputer summarizes the issue labeled "CW1226324" and says that a fix is being rolled out to affected accounts. There's no timeline for when the fix will be available for all users. (Unfortunately, the full report isn't available for viewing by the general public -- you need Microsoft 365 admin privileges just to see it.) The problem is, as you might guess, alarming. The confidential feature in Outlook is often used for things like business contracts, legal correspondence, government or police investigations, and personal medical information. It's the kind of stuff you absolutely do not want scanned by a large language model, and definitely not sucked up into its training data, as is so often the case. Microsoft isn't saying how many users are affected, but it is saying that "the scope of impact may change" as it investigates the problem. How comforting. That'll really get people to start using Copilot, right?
[8]
Microsoft confirms Copilot bug let its AI read sensitive and confidential emails
Microsoft confirmed that a Copilot security bug was allowing the AI assistant to read and summarize emails that were labeled as confidential. According to a report from Bleeping Computer, the bug bypassed Microsoft's data loss prevention policies, which are meant to protect sensitive information. The bug was discovered in late January (tracked as CW1226324) and specifically affects Copilot Chat and the "work tab" feature. The bug let Copilot read and summarize emails in the sent and drafts folders, including messages that were explicitly labeled as confidential, which should have had restricted access. Copilot Chat is Microsoft's version of Google Gemini or ChatGPT. It's meant to be content-aware and can interact with 365 apps like Word, Excel, PowerPoint, and Outlook. The company began rolling it out to Microsoft 365 business customers in September 2025. "Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," Microsoft confirmed. The company said that an unspecified code error was responsible for the issue. A fix began rolling out in early February, with Microsoft saying that it is continuing to monitor it. A final timeline for the rollout has not been revealed, nor has Microsoft stated how many organizations or individuals were affected. That said, the issue has been tagged as "advisory," which usually means that the incident was limited in scope or impact. "We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop. This did not provide anyone access to information they weren't already authorized to see," a spokesperson told Bleeping Computer.
[9]
Microsoft admits an Office bug exposed confidential user emails to Copilot
* Copilot Chat was reading Sent and Draft emails, but the Inbox folder appears to have been protected
* The bug (CW1226324) was identified in January, a fix followed in February
* Though the fix is rolling out, this is still an ongoing issue

Microsoft has confirmed that a bug in M365 Copilot Chat allowed the AI chatbot to summarise confidential emails without users' permission, bypassing data loss prevention (DLP) policies and sensitivity/confidentiality labels designed to block Copilot from accessing the emails in the first place. Though inboxes were unaffected, Copilot Chat was getting access to Sent and Draft folders, and presumably entire threads within those, which also include incoming emails. Tracked internally as CW1226324, the bug was first identified on January 21, 2026, but the company has already deployed a fix and continues to monitor the situation.

M365 Copilot Chat was reading your sensitive emails

"Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," the advisory reads. Microsoft says a code issue caused the problem, allowing those labelled emails to be picked up: "A code issue is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place." The company started rolling out a fix in early February which it continues to monitor, but it seems this is an ongoing issue that has not yet been fully resolved. Microsoft is also believed to be contacting affected users as the patch rolls out and continues to verify the fix. The timing of Microsoft's blunder is also unfortunate, with the European Parliament recently banning all AI tools on worker devices on the basis that many systems were sharing data with the cloud, even though they could in theory be processing it locally.
[10]
Microsoft Copilot read confidential emails without permission
A bug in Microsoft 365 and Copilot has been causing the AI assistant to summarize emails that were explicitly labeled as confidential, according to a report from Bleeping Computer. The Copilot security bug reportedly bypassed organizations' data loss prevention (DLP) policies, which are used to protect sensitive information. The bug specifically affected Copilot Chat. According to Microsoft's documentation, it caused emails with a confidential label to be "incorrectly processed by Microsoft 365 Copilot chat." For context, Copilot Chat, which rolled out to Microsoft 365 apps like Word, Excel, Outlook, and PowerPoint for enterprise customers last fall, is pitched as a content-aware AI assistant. Tech companies like Microsoft are integrating AI assistants into virtually all of their products, creating new types of cybersecurity risks in the process. Businesses using AI assistants could be at risk from prompt injection and data compliance violations, for instance. The Copilot Chat issue, tracked internally as CW1226324, was first detected on Jan. 21 and impacts Copilot's "work tab" chat feature, Bleeping Computer reports. Per Microsoft's advisory, Copilot Chat was incorrectly pulling in and summarizing emails from users' Sent Items and Drafts folders -- even when those messages had sensitivity labels designed to block automated access. In other words, emails and sensitive information that were supposed to be off-limits weren't. Microsoft confirmed to Bleeping Computer that a code issue was responsible and said it began rolling out a fix in early February. The company is continuing to monitor deployment and reach out to some affected users to verify the patch is working. Additionally, Microsoft hasn't disclosed how many organizations were impacted, noting that the scope may change as the investigation continues.
[11]
Microsoft 365 Copilot summarized confidential emails, DLP controls bypassed
Microsoft has confirmed that 365 Copilot allowed the AI assistant to summarize email content that should have been excluded by sensitivity labels and Data Loss Prevention controls. The problem is tracked as CW1226324 and was initially raised through customer complaints starting on January 21, 2026. According to the reports, Copilot's email summarization logic could pick up messages marked "Confidential" or labeled "Private," even in environments where administrators expected policy enforcement to block AI processing of sensitive mail. The scope described so far points mainly to Outlook content in the Sent Items and Drafts folders. That detail matters, because these two locations often contain the kind of material organizations most want to keep tightly controlled: outbound communications that include sensitive negotiations, pricing, legal language, or security details, as well as drafts that may contain incomplete or unapproved information. If Copilot can summarize those items despite being labeled for restricted handling, the result is a practical confidentiality risk, even if the original emails never appear directly in another user's mailbox. Microsoft's response is that this was not intended behavior and was caused by a code defect rather than a deliberate policy choice. In other words, the issue appears to be an enforcement gap in the path Copilot uses to gather context for summaries, where sensitivity labeling and DLP restrictions were not applied as expected. That distinction is important, but it doesn't eliminate the operational concern for admins: governance only works when every access and retrieval pipeline applies the same rules consistently. AI features add new paths for content retrieval and transformation, and those paths need the same level of policy coverage as traditional email sharing, forwarding, or search. Microsoft says it is rolling out a fix and monitoring remediation as it deploys across affected tenants. The company has not publicly provided a complete breakdown of how many organizations were impacted, but the incident has already fueled broader questions about how well AI assistants respect enterprise controls under edge cases. For organizations running Copilot, the practical next steps are predictable: validate enforcement rather than assume it. Test sensitivity labels and DLP rules against Copilot-specific actions such as summarization and "work content" chat queries, and treat Sent Items and Drafts as high-risk data sources during policy verification. Even if the bug is resolved quickly, the broader takeaway is that AI features are not just another UI layer. They are content transformation systems that can expose sensitive details through summaries, snippets, and generated text. That makes end-to-end policy enforcement and auditing essential, especially in regulated environments.
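As a starting point for the "validate, don't assume" advice above, the following Python sketch uses Microsoft Graph to list messages in the Drafts and Sent Items folders and flag those carrying a sensitivity-label header. It assumes you already hold an OAuth access token with the Mail.Read scope (for example via MSAL), and it assumes labeled mail is stamped with an "msip_labels" internet header; both are assumptions for illustration, and draft messages may not yet carry internet headers at all.

```python
# Audit sketch for the "validate, don't assume" advice above: list
# messages in the Drafts and Sent Items folders via Microsoft Graph and
# flag those carrying a sensitivity-label header. Assumes an OAuth token
# with Mail.Read scope (e.g. acquired via MSAL) and assumes labeled mail
# is stamped with an "msip_labels" internet header; drafts may not carry
# internet headers at all, in which case nothing is flagged.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Mail.Read>"  # placeholder, acquire separately
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def labeled_messages(folder: str) -> list:
    """Return subjects of messages in `folder` that carry an MIP label header."""
    flagged = []
    listing = requests.get(
        f"{GRAPH}/me/mailFolders/{folder}/messages?$select=id,subject&$top=50",
        headers=HEADERS, timeout=30)
    listing.raise_for_status()
    for item in listing.json().get("value", []):
        # internetMessageHeaders is returned on a single-message GET.
        detail = requests.get(
            f"{GRAPH}/me/messages/{item['id']}?$select=subject,internetMessageHeaders",
            headers=HEADERS, timeout=30)
        detail.raise_for_status()
        headers = detail.json().get("internetMessageHeaders") or []
        if any(h["name"].lower() == "msip_labels" for h in headers):
            flagged.append(item.get("subject", "(no subject)"))
    return flagged

for folder in ("drafts", "sentitems"):  # Graph well-known folder names
    for subject in labeled_messages(folder):
        print(f"{folder}: labeled message -> {subject}")
```

An inventory like this tells an admin which items in the two high-risk folders were in scope; whether Copilot actually honors the labels on them still has to be verified against the DLP policy itself.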
[12]
Microsoft bug allowed Copilot to summarize confidential emails
Microsoft confirmed a bug allowed its Copilot AI to summarize customers' confidential emails without permission. The issue, tracked as CW1226324, has been active since January. It bypassed data loss prevention policies designed to protect sensitive information from ingestion into Microsoft's large language model. The bug specifically allowed Copilot Chat to read and outline draft and sent emails marked with confidential labels within Microsoft 365. Microsoft began rolling out a fix in February. The vulnerability was first reported by Bleeping Computer. Copilot Chat enables paying Microsoft 365 customers to use an AI-powered chat feature across Office software products, including Word, Excel, and PowerPoint. Microsoft stated that draft and sent email messages "with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat." A spokesperson for Microsoft did not respond to a request for comment, which included a question about how many customers were affected; the company has not disclosed a count.
[13]
A Microsoft Copilot Bug Has Been Exposing Confidential Emails -- Are You Affected?
A bug has been causing Microsoft Copilot to read and summarize users' confidential emails. The issue has been ongoing since late January, Microsoft said, due to a bug that bypasses data loss prevention (DLP) policies meant to protect sensitive information. "Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," the company said, according to BleepingComputer. Copilot Chat is Microsoft's AI-driven chatbot that allows users to communicate with AI agents. The company launched the feature in September to Microsoft 365 business customers using Word, Excel, PowerPoint, Outlook, and OneNote. The bug affects the Copilot "work tab" feature, which summarizes emails in users' Sent Items and Drafts folders without consent. Those messages are explicitly labeled confidential to prevent automated tools from accessing them, according to a service alert viewed by BleepingComputer.
[14]
Microsoft Bug Allowed Copilot to Access Confidential Emails: Report
* Microsoft bug exposed confidential emails to Copilot AI
* The bug bypassed Data Loss Prevention (DLP) policies
* Microsoft reportedly acknowledged the issue

Microsoft added Copilot Chat across its Word, Excel, PowerPoint, and Outlook platforms last year. Now, a recently discovered bug appears to have caused trouble. A Microsoft 365 Copilot bug reportedly allowed the AI assistant to summarise confidential emails for several weeks without permission, bypassing the Data Loss Prevention (DLP) policies that organisations use to protect sensitive data. Microsoft is said to have issued a fix for the bug, but the situation has raised serious privacy concerns. Outlook's confidential labelling is generally used for communicating sensitive data.

How Microsoft's Copilot AI Accessed Confidential Outlook Emails

The company confirmed to Bleeping Computer that a Microsoft 365 Copilot bug allowed the AI assistant to summarise confidential emails since late January. The bug, identified as CW1226324, was first detected on January 21, and it reportedly affects the Copilot "work tab" chat feature. The flaw causes Copilot to read and summarise emails stored in users' Sent Items and Drafts folders, including messages protected with confidentiality labels designed to restrict access by automated processing. The bug reportedly bypassed Data Loss Prevention (DLP) policies, which organisations rely on to safeguard sensitive information. Microsoft reportedly acknowledged the issue, saying, "Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat". The company reportedly said that the 'work tab Chat' feature was summarising such emails despite sensitivity labels and DLP policies being in place. The company reportedly attributed the issue to an unspecified code error and said it began rolling out a fix in early February. The report further states that as of Wednesday, the company said it continues to monitor the deployment and is contacting a subset of affected users to confirm the fix is functioning. Microsoft has not disclosed how many users are affected by this bug. The latest bug comes as Microsoft expands AI capabilities across Outlook, Word, Excel, and PowerPoint. The company launched new AI-powered shopping features for Copilot in the Edge browser late last year.
[15]
Copilot AI Reads User's Confidential Emails via MS Office: When EU Fears Came True
Microsoft's Copilot bug comes right at the moment when the EU Parliament's tech staff blocked all AI features on their hardware. A bug in Microsoft 365 Copilot allowed the AI assistant to summarise confidential emails on users' systems. This began in late January, when the system bypassed the data loss prevention policies that enterprises rely on to protect sensitive information, the company has confirmed. A service alert spotted by Bleeping Computer said the bug (CW1226324) was first detected on January 21 and found to affect the Copilot "work tab" chat feature, which read and summarised emails stored in users' Sent Items and Drafts folders. These included messages that carried explicit "Confidential" labels meant to restrict access. That the issue surfaced barely 24 hours after reports of the European Parliament's IT department blocking built-in AI features on workplace computers is tough luck for Microsoft. The department had blocked all AI tools over fears that they could potentially upload confidential correspondence to the cloud. A report published by Politico said the European Parliament's IT department emailed staff that it could not guarantee the security of data uploaded to the servers of AI companies, and that the full extent of what data is shared with them is still being assessed. "Some of these features use cloud services to carry out tasks that could be handled locally, sending data off the device," the IT support desk said in the email. "As these features continue to evolve and become available on more devices, the full extent of data shared with service providers is still being assessed. Until this is fully clarified, it is considered safer to keep such features disabled."

What exactly went wrong, and how did the bug affect Copilot?

Copilot Chat, Microsoft's AI-powered chat that lets users interact with AI agents, was rolled out to the entire Office 365 suite, including Word, Excel, PowerPoint, Outlook, and OneNote, last September. The concern is that Copilot Chat was reading and summarising these emails despite customers having data loss prevention policies in place to prevent their sensitive information from being ingested into Microsoft's large language model. Microsoft said the bug resulted in Copilot incorrectly processing draft and sent email messages "with a confidential label applied" to them. The company said a fix for the bug was rolled out earlier this month. According to Bleeping Computer, the company was continuing to monitor the rollout and is also reaching out to some of the affected users to verify that the fix is working. "A code issue is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place," Microsoft said. So far, the company has not indicated a timeline for a full fix. It has also declined to say how many companies were affected, noting that the scope of the impact could change as it investigates further. Security experts have long stressed the value of enterprise guardrails on data access when granting access to AI agents. In recent times, the European Union has led the way in bolstering data security policies, especially around technology sold by foreign vendors.

Some members of the European Parliament had already sought to remove Microsoft software as a preferred solution, suggesting instead that a European alternative be found. "With its thousands of employees and vast resources, the European Parliament is best positioned to galvanise the push for tech sovereignty," the lawmakers said in a letter to Parliament. "When even old friends can turn into foes and their companies into a political tool, we cannot afford this level of dependence on foreign tech, let alone continue funneling billions of taxpayers' money abroad," said the letter, drafted last November by 38 members across parties at the European Parliament. This isn't the first time the EU has pushed back against the technology hegemony of a handful of US tech giants led by Microsoft, Google, Dell, HP, and others. Back in 2023, it also banned the social media app TikTok on staff devices and recommended that members delete the app from their phones.

Of Trump, Big Tech, subpoenas, and falling trust in Europe

The latest move by the Parliament in Brussels switches off all AI tools amid concerns over their built-in features, such as writing and summarising assistants, enhanced virtual assistants, and webpage summarisers on tablets and phones. Uploading data to AI chatbots like Claude, Copilot, and ChatGPT means that US authorities can demand the companies turn over information about their users. It is also worth mentioning that AI chatbots rely on user data, provided as part of their routine functioning or uploaded to their own cloud space. Such data is critical for AI models to improve themselves, which also raises the potential for mischief: one person's data being shared with another, or worse still with non-state actors or others who have no business reading one's emails. Given the cold relations between President Trump and much of Europe, it isn't surprising that critics came out strongly against the European Commission, the body that oversees the 27 member states, when it brought new legislative proposals relaxing its data protection rules in ways that could make it easier for Big Tech to train their AI models. The European Parliament's move to curtail all AI tools appears justified in light of what Microsoft has shared about the bug, which read user emails in spite of the right levels of security being in place within the system. This could potentially prompt lawmakers to revive their attempts to reevaluate their association with US tech giants. More so since, in recent weeks, the US Department of Homeland Security has sent several hundred subpoenas to US tech and social media giants asking for information about people who have been publicly critical of the Trump administration. Google and Meta's compliance with such requests led employees of those companies to join hands in questioning such policies and pressing their top brass to push back.
[16]
Copilot AI was reading your private emails, confirms Microsoft: Are you safe?
The company started rolling out a fix earlier in February to address the issue. Microsoft has acknowledged an issue in its Copilot AI service that allowed the system to process and summarise users' confidential emails without proper permission. The problem lasted for several weeks and raised concerns about how sensitive information was handled by the company's AI tools. Copilot Chat had been able to access and generate summaries of emails as far back as January, reports TechCrunch. This happened even in cases where organisations had set up strict data loss prevention policies to stop sensitive content from being shared with Microsoft's large language models. Copilot Chat enables Microsoft 365 subscribers to access AI-powered chat capabilities within Office applications such as Word, Excel, and PowerPoint. The feature is meant to help users quickly analyse information, draft content, and summarise documents or conversations. However, due to the bug, it also processed emails marked as confidential. Microsoft said the bug, trackable by admins as CW1226324, means that draft and sent email messages 'with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat.' In simple terms, emails that should have remained restricted were mistakenly treated as content that Copilot could read and summarise. The company stated that it started rolling out a fix earlier in February to address the issue, though it has not shared details about how many customers may have been affected. The incident has added to growing concerns worldwide about how AI systems handle sensitive workplace data. Earlier this week, the IT department of the European Parliament reportedly blocked built-in AI features on official staff devices. The move was taken as a precaution amid fears that such tools might upload confidential communications to cloud-based systems.
Microsoft confirmed a coding bug allowed Copilot AI to access and summarize confidential emails for nearly a month, despite data loss prevention policies designed to shield sensitive information. The flaw, tracked as CW1226324, affected Microsoft 365 Copilot Chat users since January, raising concerns about AI tool security as organizations struggle to keep pace with rapid feature rollouts.
Microsoft has acknowledged a significant bug that allowed its Copilot AI to access and summarize confidential emails without permission for nearly a month. The flaw, first reported by Bleeping Computer [1], enabled Microsoft 365 Copilot Chat to read and outline email contents since January, even when customers had implemented data loss prevention policies specifically designed to prevent their sensitive information from being ingested into Microsoft's large language model [2].
The issue, trackable by administrators as CW1226324, was first discovered on January 21, 2026, when customers reported that draft and sent email messages with a confidential label applied were being incorrectly processed by the AI tool [3]. Microsoft 365 Copilot Chat allows paying customers to use the AI-powered chat feature across Office software products, including Word, Excel, and PowerPoint [1].

The bug represents a serious breach of trust for organizations relying on DLP controls to safeguard their sensitive communications. Microsoft's own documentation explains that sensitivity labels can be applied manually or automatically to files as a way to comply with organizational information protection policies [3]. However, the code issue allowed items in the sent items and draft folders to be picked up by Copilot even though confidential labels were set in place, effectively bypassing data loss prevention policies that should have blocked access [2].
The Work tab of Copilot Chat, which began rolling out to Microsoft 365 business users in September, was specifically affected by this vulnerability [2]. The flaw raised immediate data security concerns across multiple sectors, with reports suggesting the UK's National Health Service (NHS) was among the affected organizations [2]. The NHS confirmed that while draft or sent emails were processed, patient information was not exposed to the AI tool [5].

Microsoft began rolling out a fix for the bug earlier in February and stated it is monitoring the effectiveness of the remediation while reaching out to affected customers [1]. A Microsoft spokesperson told BBC News that the company "identified and addressed an issue" and emphasized that "our access controls and data protection policies remained intact," though acknowledged "this behaviour did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access" [5].

The tech giant clarified that the bug provided access to summarized information only to those who were already authorized to see it [4]. However, Microsoft did not disclose how many customers were affected by the data leakage incident [1].
The incident highlights growing privacy concerns surrounding Microsoft's integration of AI tools into workplace environments. Earlier this week, the European Parliament's IT department blocked built-in AI features on lawmakers' work-issued devices, citing concerns that AI tools could upload potentially confidential correspondence to the cloud [1]. This sentiment reflects a broader trend, with 72 percent of S&P 500 companies citing AI as a material risk in regulatory filings [3].
Nader Henein, data protection and AI governance analyst at Gartner, warned that "this sort of fumble is unavoidable" given the frequency of new AI capabilities being released. He noted that organizations using these AI products often lack tools needed to protect themselves and manage each new feature, adding that "the amount of pressure caused by the torrent of unsubstantiated AI hype makes that near-impossible" [5].

In response to customer feedback and the recent security lapse, Microsoft announced it is expanding data controls to block Copilot from processing confidential Word, Excel, and PowerPoint documents regardless of their location [4]. Currently, Microsoft Purview DLP policies apply only to files stored in SharePoint or OneDrive, but not to those stored on local devices. This change will be deployed through the Augmentation Loop (AugLoop) Office component between late March and late April 2026 [4].

Cyber-security expert Professor Alan Woodward of the University of Surrey emphasized the importance of making such tools private-by-default and opt-in only. "There will inevitably be bugs in these tools, not least as they advance at break-neck speed, so even though data leakage may not be intentional it will happen," he told BBC News [5]. Organizations deploying Microsoft 365 should monitor the rollout of enhanced DLP enforcement and consider whether their current sensitivity labels adequately protect critical business communications from unauthorized AI processing.