CVE-2025-32711: M365 Copilot 'EchoLeak' Zero-Click IPI
Executive Summary
Section titled “Executive Summary”As detailed by researchers at Checkmarx and HackTheBox, CVE-2025-32711 exposes the severe consequences of integrating Large Language Models (LLMs) with broad data access (Microsoft Graph) without strict isolation.
EchoLeak leverages a flaw in how M365 Copilot sanitizes external inputs before processing them through its internal orchestration layer. By embedding malicious instructions in a benign-looking document, an attacker forces Copilot to become a “Confused Deputy.” Once triggered, the AI agent abuses its authorized access to read the user’s emails, Teams chats, and SharePoint files, and silently exfiltrates them to an external server.
Technical Analysis
Section titled “Technical Analysis”The vulnerability is rooted in the convergence of two concepts we have extensively documented in the Codex: Indirect Prompt Injection (IPI) and the lack of strict Output Sanitization.
In a standard Copilot workflow, the AI uses Microsoft Graph to retrieve context (RAG) to assist the user. EchoLeak weaponizes this retrieval phase. The attacker crafts a payload that uses psychological manipulation on the LLM (e.g., instructing it to act as a diagnostic tool) combined with a technical exfiltration vector.
Since Copilot supports Markdown rendering in its chat interface to display rich text and images, attackers abused this feature. The poisoned prompt forces the LLM to generate a Markdown image tag (). When the Copilot web or desktop client renders this hidden image, the HTTP GET request is fired, effectively leaking the appended sensitive data to the attacker’s server.
Exploitation Flow (Zero-Click Chain)
Section titled “Exploitation Flow (Zero-Click Chain)”The most alarming aspect of EchoLeak is the “Zero-Click” vector. The victim does not need to explicitly ask Copilot about the malicious document.
- Delivery: The attacker sends a seemingly harmless email (or shares a SharePoint document) containing the hidden EchoLeak payload.
- Autonomous Indexing: The victim opens their M365 dashboard. Copilot autonomously scans recent emails to generate the “Catch up on your day” summary or meeting prep notes.
- Payload Execution: Copilot ingests the malicious email. The embedded prompt injection overrides the summary instruction.
- Data Harvesting: The payload instructs Copilot to use its Graph API access to search for “Password”, “Confidential”, or “Financial” in the victim’s inbox.
- Silent Exfiltration: Copilot formats the stolen data into a Markdown image URL and outputs it. The client renders the invisible image, executing the data exfiltration via a DNS/HTTP callback.
Forensic Investigation (DFIR)
Section titled “Forensic Investigation (DFIR)”Because this vulnerability executes entirely within Microsoft’s SaaS infrastructure, traditional host-based forensic artifacts (like memory dumps or local process trees) are unavailable. DFIR analysts must pivot entirely to Cloud Forensics, specifically targeting the Unified Audit Log (UAL) and Microsoft Purview.
- Copilot Interaction Events: Analysts must query the
CopilotInteractionevents within the UAL. While Microsoft redacts the exact prompts for privacy reasons, metadata about the interaction time and the files accessed by the Copilot agent during that session are visible. - Anomalous Graph API Access: Look for spikes in Microsoft Graph API read events originating from the Copilot service account on behalf of a user, especially if those reads target highly sensitive SharePoint sites immediately after the user received an external email.
Detection & Threat Hunting
Section titled “Detection & Threat Hunting”While Microsoft patched the specific Markdown rendering flaw, the underlying threat of IPI remains. SOC teams must hunt for anomalous AI behavior.
// Hunts for Copilot interactions followed by suspicious network connections// indicating potential Markdown-based data exfiltration.let CopilotEvents = CloudAppEvents| where Application == "Microsoft 365 Copilot"| where ActionType == "CopilotInteraction"| project TimeGenerated, AccountObjectId, IPAddress;
DeviceNetworkEvents| where TimeGenerated > ago(7d)| join kind=inner (CopilotEvents) on $left.InitiatingProcessAccountObjectId == $right.AccountObjectId// Look for network connections occurring within 60 seconds of a Copilot interaction| where datetime_diff('second', TimeGenerated, TimeGenerated1) between (0 .. 60)// Filter out legitimate Microsoft domains| where RemoteUrl !contains "microsoft.com" and RemoteUrl !contains "office.com"| where RemoteUrl contains "=" // Often used in exfiltration queries ?data=...| project TimeGenerated, AccountObjectId, RemoteUrl, RemoteIPtitle: M365 Copilot Anomalous Prompt Behavior (EchoLeak)id: 5b6c3d82-c9a4-4f81-8b06-b51f7d1a2e99status: experimentaldescription: Detects semantic anomalies in Cloud logs indicating potential prompt injection or unauthorized data aggregation instructions.logsource: product: m365 category: copilot_logsdetection: selection: RawEventData|contains|any: - ' labels (Microsoft Purview Information Protection) to sensitive documents to prevent Copilot from accessing them, mitigating the impact of any future “Zero-Click” agent vulnerabilities.
- Phishing Defense: Treat IPI as a highly advanced phishing vector. Reinforce defenses detailed in the Business Email Compromise (BEC) Playbook.
Sources & References
Section titled “Sources & References”- HackTheBox Blog: CVE-2025-32711: EchoLeak Copilot Vulnerability
- Securiti.ai: EchoLeak: How Indirect Prompt Injections Exploit AI Layer
- Checkmarx: EchoLeak CVE-2025-32711: AI Security is Challenging
- SOCPrime: CVE-2025-32711 Zero-Click AI Vulnerability
- Related Research: Indirect Prompt Injection: The XSS of the AI Era