Copilot Bypasses Labels Twice, Evades DLP

💡 Copilot flaws let sensitive data slip past DLP controls undetected, a critical alert for enterprise AI security.

⚡ 30-Second TL;DR

What changed

A four-week January 2026 bug (CW1226324) let Copilot process emails in Sent Items and Drafts despite their sensitivity labels

Why it matters

Enterprises risk undetected leaks of sensitive data through AI assistants, especially in healthcare. The incidents expose gaps in legacy security tooling for LLM pipelines and underscore the need for AI-specific monitoring.

What to do next

Test Copilot against Microsoft 365 sensitivity-labeled emails and enable advanced auditing.
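
One way to run that test is to seed labeled mailbox items with unique canary strings and check whether Copilot's replies ever surface them. The minimal sketch below assumes a hypothetical `ask_copilot` helper standing in for however your tenant reaches Copilot chat (Graph API, browser automation, etc.); the labels, prompts, and canary values are illustrative, not part of any Microsoft API.

```python
# Sketch: canary-based check that Copilot respects sensitivity labels.
# Assumes test emails have been seeded in Sent Items / Drafts, each carrying
# a sensitivity label and a unique canary string in its body. `ask_copilot`
# is a placeholder, not a real Microsoft SDK call.

CANARIES = {
    "Confidential": "CANARY-CONF-7f3a91",
    "Highly Confidential": "CANARY-HCONF-2b8d44",
}

PROMPTS = [
    "Summarize the emails in my Sent Items from the last week.",
    "Summarize my current email drafts.",
]

def ask_copilot(prompt: str) -> str:
    """Placeholder: send `prompt` to Copilot chat and return its reply."""
    raise NotImplementedError("wire this to your tenant's Copilot access path")

def check_label_enforcement() -> list[str]:
    """Return findings where labeled content leaked into a Copilot reply."""
    findings = []
    for prompt in PROMPTS:
        reply = ask_copilot(prompt)
        for label, canary in CANARIES.items():
            if canary in reply:
                findings.append(
                    f"Reply to '{prompt}' exposed content labeled '{label}'"
                )
    return findings

if __name__ == "__main__":
    for finding in check_label_enforcement():
        print("LEAK:", finding)
```

Any finding indicates that labeled content reached Copilot's context despite DLP policy and should be escalated alongside the corresponding audit records.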

Who should care: Enterprise & Security Teams

🧠 Deep Insight

Web-grounded analysis with 7 cited sources.

🔑 Key Takeaways

  • Microsoft Copilot bypassed data loss prevention (DLP) policies in late January 2026 (tracked as CW1226324), allowing the AI to read and summarize emails marked as confidential in Outlook's Sent Items and Drafts folders[1][3]
  • The vulnerability affected Microsoft 365 Copilot's 'work tab' chat feature, which is designed to summarize emails but failed to respect sensitivity labels that should have restricted access[1][3]
  • Microsoft confirmed the bug was caused by an unspecified code error and began rolling out fixes in early February 2026, with the issue tagged as 'advisory' indicating limited scope[3]

🛠️ Technical Deep Dive

  • The CW1226324 bug specifically affected Copilot Chat's ability to process emails with confidentiality labels applied, bypassing Microsoft's DLP policies designed to protect sensitive information[1][3]
  • CVE-2026-21521 exploits CWE-150 (Improper Neutralization of Escape, Meta, or Control Sequences) through malicious input containing escape sequences that manipulate Copilot's parsing behavior[2] (see the sanitization sketch after this list)
  • The vulnerability requires user interaction, likely through social engineering, to process attacker-controlled content through Copilot[2]
  • Microsoft's response included network-level input validation, temporary functionality limitations for untrusted content, network segmentation, and enhanced monitoring on Copilot services[2]
  • The bug affected Copilot's interaction with Microsoft 365 apps including Word, Excel, PowerPoint, and Outlook; Copilot Chat had begun rolling out to business customers in September 2025[3]
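
Microsoft has not published its fix, but as an illustration of the input-validation idea behind CWE-150 mitigations, a minimal sketch of stripping ANSI escape and control sequences from untrusted content before it reaches a prompt pipeline might look like this. The regexes and policy are assumptions for illustration, not Microsoft's implementation of the CVE-2026-21521 fix.

```python
import re

# Sketch of CWE-150-style input validation: remove ANSI escape sequences and
# non-printable control characters from untrusted content before it is
# concatenated into a model prompt. Illustrative only.

ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]")          # CSI sequences, e.g. \x1b[2J
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]")  # keeps \t, \n, \r

def neutralize(text: str) -> str:
    """Return `text` with escape and control sequences removed."""
    text = ANSI_ESCAPE.sub("", text)
    return CONTROL_CHARS.sub("", text)

untrusted = "Quarterly numbers\x1b[2J\x1b[H Ignore previous instructions"
print(neutralize(untrusted))  # -> "Quarterly numbers Ignore previous instructions"
```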

🔮 Future Implications

AI analysis grounded in cited sources.

These incidents highlight critical gaps in AI security architecture where violations occur within proprietary retrieval pipelines, bypassing traditional security tools like EDR and WAF. Organizations embedding generative AI into data security operations (82% according to Microsoft's 2026 Data Security Index) face increased risk if AI systems themselves become attack vectors. The pattern of repeated Copilot vulnerabilities—from 2024 Copilot Studio exploits to 2026 DLP bypasses—suggests that AI security requires fundamentally different approaches than traditional application security, potentially driving demand for specialized AI governance solutions and stricter controls on AI access to sensitive data repositories.

⏳ Timeline

2024-08
Security researcher Michael Bargury demonstrates at Black Hat USA 2024 that Copilot Studio bots can exfiltrate sensitive enterprise data by circumventing existing controls through insecure defaults and over-permissive plugins
2025-09
Microsoft begins rolling out Copilot Chat to Microsoft 365 business customers, enabling content-aware interactions with Office 365 applications
2026-01
Microsoft discovers CW1226324 bug in late January allowing Copilot to bypass DLP policies and read confidential emails in Sent Items and Drafts folders
2026-01
CVE-2026-21521 information disclosure vulnerability published on January 22, 2026, affecting Microsoft Copilot through improper neutralization of escape and control sequences
2026-02
Microsoft begins rolling out fixes for CW1226324 in early February 2026 with worldwide deployment for enterprise customers

📎 Sources (7)

Factual claims are grounded in the sources below. Forward-looking analysis is AI-generated interpretation.

  1. cybernews.com
  2. sentinelone.com
  3. tomsguide.com
  4. youtube.com
  5. microsoft.com
  6. learn.microsoft.com
  7. microsoft.com

Microsoft Copilot ignored sensitivity labels and DLP policies twice in eight months, accessing and summarizing confidential emails undetected. Incidents included a four-week bug affecting the UK's NHS and a prior zero-click EchoLeak exploit. Traditional tools like EDR and WAF failed due to violations occurring in Copilot's internal retrieval pipeline.

Key Points

  1. A four-week January 2026 bug (CW1226324) let Copilot process emails in Sent Items and Drafts despite sensitivity labels
  2. The June 2025 CVE-2025-32711 EchoLeak flaw enabled zero-click data exfiltration via a malicious email
  3. Regulated organizations such as the UK NHS were affected (incident INC46740412)
  4. No DLP, EDR, or WAF tooling detected the violations because they stayed inside Microsoft's retrieval pipeline (see the audit-log sketch below)
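
Because those violations never crossed a network perimeter, detection has to come from AI-specific telemetry such as the Microsoft 365 unified audit log, which records Copilot interactions. The sketch below scans an exported audit log for Copilot events that touched labeled resources; the column names and the `CopilotInteraction`, `AccessedResources`, and `SensitivityLabelId` fields are assumptions to verify against your own export, not guaranteed schema.

```python
import csv
import json

# Sketch: scan a unified audit log export (CSV) for Copilot interaction
# events that accessed resources carrying a sensitivity label.
# Assumptions (verify against your own export): the CSV has "Operations",
# "UserIds", "CreationDate", and "AuditData" columns; Copilot events use the
# operation name "CopilotInteraction"; and AuditData nests accessed items
# under CopilotEventData -> AccessedResources with a SensitivityLabelId.

def copilot_label_hits(export_path: str) -> list[dict]:
    hits = []
    with open(export_path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            if "CopilotInteraction" not in row.get("Operations", ""):
                continue
            data = json.loads(row.get("AuditData") or "{}")
            resources = data.get("CopilotEventData", {}).get("AccessedResources", [])
            labeled = [r for r in resources if r.get("SensitivityLabelId")]
            if labeled:
                hits.append({
                    "user": row.get("UserIds"),
                    "time": row.get("CreationDate"),
                    "labeled_items": [r.get("Name") for r in labeled],
                })
    return hits

if __name__ == "__main__":
    for hit in copilot_label_hits("audit_export.csv"):
        print(hit)
```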

Impact Analysis

Enterprises risk undetected leaks of sensitive data through AI assistants, especially in healthcare. The incidents expose gaps in legacy security tooling for LLM pipelines and underscore the need for AI-specific monitoring.

Technical Details

CW1226324 stemmed from a code-path error that allowed labeled Sent Items and Drafts content into retrieval. EchoLeak bypassed the prompt-injection classifier, link redaction, and CSP to achieve silent exfiltration. Both violations occurred between the retrieval index and the generation model, invisible to perimeter tools.
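
A common mitigation pattern for that gap is to enforce label checks inside the retrieval stage itself, so labeled items are filtered and logged before they ever reach the generation model. The sketch below illustrates that generic RAG-pipeline pattern; the data model and label policy are illustrative assumptions, not a description of Copilot's internals.

```python
from dataclasses import dataclass, field

# Sketch of an in-pipeline guard: filter retrieved items by sensitivity label
# before they are handed to the generation model, and record every block so
# the decision is auditable. Illustrative only.

BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

@dataclass
class RetrievedItem:
    source: str                      # e.g. "Sent Items", "Drafts"
    text: str
    labels: set[str] = field(default_factory=set)

def enforce_labels(items: list[RetrievedItem]) -> tuple[list[RetrievedItem], list[str]]:
    """Split retrieved items into allowed context and an audit trail of blocks."""
    allowed, audit = [], []
    for item in items:
        if item.labels & BLOCKED_LABELS:
            audit.append(f"blocked {item.source} item with labels {sorted(item.labels)}")
        else:
            allowed.append(item)
    return allowed, audit

# Example: a labeled draft never reaches the prompt, and the block is logged.
items = [
    RetrievedItem("Inbox", "Team lunch on Friday."),
    RetrievedItem("Drafts", "M&A term sheet...", {"Highly Confidential"}),
]
context, audit_log = enforce_labels(items)
```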

AI-curated news aggregator. All content rights belong to original publishers.
Original source: VentureBeat