Researchers are reporting on a critical vulnerability in Microsoft 365 Copilot, tracked as CVE-2025-32711. The flaw has been assigned a CVSS score of 9.3, and Microsoft applied server-side fixes to it in May. There is no evidence of real-world exploitation, and customers do not need to take any action.
Researchers who disclosed the attack method, dubbed EchoLeak, assigned it to a new class of vulnerabilities called 'LLM Scope Violation'. Such flaws cause a large language model (LLM) to leak privileged internal data without user intent or interaction. Threat actors could abuse the issue to extract sensitive data, such as chat histories, documents, or SharePoint content.
Analysis:
Although reported as the “first known zero-click AI vulnerability,” the underlying exploitation technique described in the report comes down to a prompt injection or command injection in AI systems.
Similar attacks have been demonstrated in research and smaller-scale AI deployments before EchoLeak. However, these previous demonstrations required explicit user engagement, or were limited in scope and impact. EchoLeak is unique in that it bypasses most user action and targets a high-profile, enterprise-grade AI product.