A brand new assault dubbed ‘EchoLeak’ is the primary identified zero-click AI vulnerability that permits attackers to exfiltrate delicate information from Microsoft 365 Copilot from a consumer’s context with out interplay.
The assault was devised by Intention Labs researchers in January 2025, who reported their findings to Microsoft. The tech big assigned the CVE-2025-32711 identifier to the knowledge disclosure flaw, ranking it vital, and glued it server-side in Could, so no consumer motion is required.
Additionally, Microsoft famous that there isn’t any proof of any real-world exploitation, so this flaw impacted no prospects.
Microsoft 365 Copilot is an AI assistant constructed into Workplace apps like Phrase, Excel, Outlook, and Groups that makes use of OpenAI’s GPT fashions and Microsoft Graph to assist customers generate content material, analyze information, and reply questions primarily based on their group’s inside information, emails, and chats.
Although mounted and by no means maliciously exploited, EchoLeak holds significance for demonstrating a brand new class of vulnerabilities referred to as ‘LLM Scope Violation,’ which causes a big language mannequin (LLM) to leak privileged inside information with out consumer intent or interplay.
Because the assault requires no interplay with the sufferer, it may be automated to carry out silent information exfiltration in enterprise environments, highlighting how harmful these flaws will be when deployed towards AI-integrated programs.
How EchoLeak works
The assault begins with a malicious e mail despatched to the goal, containing textual content unrelated to Copilot and formatted to seem like a typical enterprise doc.
The e-mail embeds a hidden immediate injection crafted to instruct the LLM to extract and exfiltrate delicate inside information.
As a result of the immediate is phrased like a traditional message to a human, it bypasses Microsoft’s XPIA (cross-prompt injection assault) classifier protections.
Later, when the consumer asks Copilot a associated enterprise query, the e-mail is retrieved into the LLM’s immediate context by the Retrieval-Augmented Era (RAG) engine on account of its formatting and obvious relevance.
The malicious injection, now reaching the LLM, “tips” it into pulling delicate inside information and inserting it right into a crafted hyperlink or picture.
Intention Labs discovered that some markdown picture codecs trigger the browser to request the picture, which sends the URL robotically, together with the embedded information, to the attacker’s server.
.jpg)
Supply: Intention Labs
Microsoft CSP blocks most exterior domains, however Microsoft Groups and SharePoint URLs are trusted, so these will be abused to exfiltrate information with out drawback.

Supply: Intention Labs
EchoLeak might have been mounted, however the rising complexity and deeper integration of LLM purposes into enterprise workflows are already overwhelming conventional defenses.
The identical pattern is certain to create new weaponizable flaws adversaries can stealthily exploit for high-impact assaults.
It is crucial for enterprises to strengthen their immediate injection filters, implement granular enter scoping, and apply post-processing filters on LLM output to dam responses that comprise exterior hyperlinks or structured information.
Furthermore, RAG engines will be configured to exclude exterior communications to keep away from retrieving malicious prompts within the first place.