Microsoft Copilot Flaw Bypasses Privacy Filters to Read Confidential Emails

2026-02-19 · companies

Redmond, Wednesday, 18 February 2026.
A software bug active since late January has allowed Microsoft’s AI assistant to summarize confidential emails, bypassing the data loss prevention policies designed to secure sensitive corporate and personal information.

Breach of Data Loss Prevention Protocols

Microsoft (MSFT) has officially acknowledged a critical vulnerability within its Microsoft 365 Copilot ecosystem that compromised the integrity of sensitive corporate communications. On Wednesday, February 18, 2026, the tech giant confirmed that a bug allowed its AI-powered chat assistant to access and summarize emails marked as confidential, effectively bypassing the Data Loss Prevention (DLP) policies that enterprises rely upon to shield proprietary data [1][2]. This failure is significant because these DLP protocols are the primary defense mechanism preventing large language models from ingesting and processing restricted internal information, a key requirement for corporate adoption of AI tools [1].
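To make the failure concrete, the gate that DLP policies are supposed to impose can be pictured as a simple pre-processing check: before any message text reaches the model, its sensitivity label is inspected and restricted content is withheld. The Python sketch below is purely illustrative; the class, function, and label names are hypothetical and this is not Microsoft’s implementation, but it shows the kind of check the bug effectively skipped.

```python
# Minimal illustrative sketch of a DLP-style gate in front of an AI
# summarizer. All names here are hypothetical; this is not Microsoft's code.
from dataclasses import dataclass

# Labels an enterprise policy might mark as off-limits to the model.
RESTRICTED_LABELS = {"confidential", "highly-confidential"}

@dataclass
class Email:
    subject: str
    body: str
    sensitivity_label: str | None  # e.g. a label applied via Microsoft Purview

def gate_for_summarization(email: Email) -> str:
    """Return the text the assistant may see, or refuse outright."""
    label = (email.sensitivity_label or "").lower()
    if label in RESTRICTED_LABELS:
        # The reported bug behaves as if this branch were skipped.
        raise PermissionError(f"DLP policy blocks label '{label}'")
    return email.body

if __name__ == "__main__":
    msg = Email("Q1 forecast", "draft figures...", sensitivity_label="Confidential")
    try:
        gate_for_summarization(msg)
    except PermissionError as err:
        print("Blocked:", err)  # expected outcome when the gate works
```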

Timeline and Technical Scope

The issue, identified by administrators under the tracking code CW1226324, has been active for nearly a month, having first been detected on January 21, 2026 [2]. According to technical reports, the flaw specifically impacts the Copilot “work tab” chat feature, granting the AI unintended access to scan and summarize the contents of a user’s Sent Items and Drafts folders [2][3]. This occurred even when “confidential” sensitivity labels were applied to the messages, a malfunction Microsoft attributes to a code error in the processing logic [2]. The bug affects the suite of Office applications where Copilot is integrated, including Word, Excel, and PowerPoint [1].
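For administrators trying to gauge their own exposure, one possible approach is to enumerate messages in the affected folders and look for the header that Microsoft Purview stamps on labeled mail. The sketch below uses real Microsoft Graph endpoints for well-known mail folders; the assumption that applied sensitivity labels surface in an “msip_labels” internet message header reflects common Purview behavior but should be verified per tenant, and the access token is a placeholder.

```python
# Hedged sketch: list Sent Items via Microsoft Graph and flag messages whose
# headers indicate a Purview sensitivity label. Token acquisition is omitted.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Mail.Read>"  # placeholder, not a real token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def labeled_sent_messages() -> list[str]:
    """Return subjects of sent messages whose headers show a Purview label."""
    url = f"{GRAPH}/me/mailFolders/SentItems/messages?$select=id,subject&$top=50"
    flagged = []
    while url:
        page = requests.get(url, headers=HEADERS, timeout=30).json()
        for msg in page.get("value", []):
            # internetMessageHeaders is only returned when fetching a single
            # message with $select, so each hit needs a follow-up request.
            detail = requests.get(
                f"{GRAPH}/me/messages/{msg['id']}"
                "?$select=subject,internetMessageHeaders",
                headers=HEADERS, timeout=30,
            ).json()
            headers = detail.get("internetMessageHeaders") or []
            if any(h["name"].lower() == "msip_labels" for h in headers):
                flagged.append(detail["subject"])
        url = page.get("@odata.nextLink")  # follow pagination, if present
    return flagged
```

The same loop could be pointed at the Drafts well-known folder, the other location the flaw reportedly exposed.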

Operational Risks and Remediation

For the corporate sector, the implications of this breach extend beyond a mere software glitch. The confidential tagging feature in Outlook is routinely used to secure high-stakes documentation, including legal contracts, government investigation details, and personal medical records [3]. By failing to respect these tags, the system potentially exposed highly sensitive data to AI summarization, although Microsoft has stated that the “scope of impact may change” as investigations continue [3]. In response, Microsoft began deploying a fix in early February 2026, though the company has not given a definitive timeline for when the remediation will be effective across all tenants [2][3]. As of February 12, the company reported that it was still monitoring the rollout of the fix and contacting affected users directly [2]. Notably, Microsoft has declined to disclose how many customers or organizations had data mishandled during the weeks the bug was active [1].

Compounding Security Challenges

This incident arrives amid a broader reevaluation of AI security in sensitive environments. Earlier this week, the European Parliament’s IT department moved to block built-in AI features on work devices, citing preemptive concerns that such tools could upload confidential correspondence to the cloud [1]. Concurrently, cybersecurity researchers revealed on February 17 that AI assistants, including Copilot, could be exploited as proxies for command-and-control (C2) malware operations [4]. This separate vulnerability allows threat actors to use AI agents to retrieve attacker-controlled URLs and facilitate bidirectional communication without requiring an API key, further complicating the security landscape for AI deployment [4].

Summary

As businesses increasingly integrate AI for productivity, the reliability of data governance frameworks remains a critical friction point. While the immediate flaw in email summarization is being patched, the ability of AI agents to override established security controls highlights the ongoing tension between operational efficiency and information security. Microsoft continues to work on resolving the issue, but the breach serves as a stark reminder of the risks of automated data processing in enterprise environments [1].

Sources


Tags: data breach, AI security