Who Am I? Conditional Prompt Injection Attacks with Microsoft Copilot
Summary
Attackers can create conditional prompt injection attacks (tricking an AI by hiding malicious instructions in its input that activate only for specific users) against Microsoft Copilot by leveraging user identity information like names and job titles that the AI includes in its context. A researcher demonstrated this by sending an email with hidden instructions that made Copilot behave differently depending on which person opened it, showing that LLM applications become more vulnerable as attackers learn to target specific users rather than all users equally.
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2024/whoami-conditional-prompt-injection-instructions/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 92%