GitHub Copilot Chat: From Prompt Injection to Data Exfiltration
Summary
GitHub Copilot Chat, a VS Code extension that lets users ask questions about their code by sending it to an AI model, was vulnerable to prompt injection (tricking an AI by hiding instructions in its input) attacks. When analyzing untrusted source code, attackers could embed malicious instructions in the code itself, which would be sent to the AI and potentially lead to data exfiltration (unauthorized copying of sensitive information).
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2024/github-copilot-chat-prompt-injection-data-exfiltration/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 85%