Copilot and Agentforce fall to form-based prompt injection tricks
Summary
Security researchers discovered prompt injection vulnerabilities (attacks where malicious instructions are hidden in user input to trick an AI into executing them) in Microsoft Copilot Studio and Salesforce Agentforce that allow attackers to steal sensitive data like customer names, addresses, and phone numbers. Both vulnerabilities exploit the fact that these AI agents cannot distinguish between trusted system instructions and untrusted user input, allowing attackers to override the agent's original purpose and exfiltrate data to external servers.
Solution / Mitigation
Microsoft patched CVE-2026-21520 following disclosure, with the mitigation carried out internally and no further action required from users. The source notes that both vulnerabilities highlight a baseline need for treating all external inputs as untrusted and enforcing input validation, least-privilege access (giving systems only the minimum permissions they need), and strict controls on actions like outbound email, though no specific patch details are provided for the Salesforce vulnerability.
Classification
Affected Vendors
Related Issues
CVE-2025-45150: Insecure permissions in LangChain-ChatGLM-Webui commit ef829 allows attackers to arbitrarily view and download sensitive
CVE-2026-30308: In its design for automatic terminal command execution, HAI Build Code Generator offers two options: Execute safe comman
Original source: https://www.csoonline.com/article/4159079/copilot-and-agentforce-fall-to-form-based-prompt-injection-tricks.html
First tracked: April 15, 2026 at 02:00 PM
Classified by LLM (prompt v3) · confidence: 95%