Using threat modeling and prompt injection to audit Comet
Summary
Researchers tested Perplexity's Comet browser (an AI-powered web browser with an AI assistant) for security vulnerabilities and discovered four prompt injection techniques (tricks to make an AI follow hidden malicious instructions) that could steal users' private emails from Gmail. The vulnerabilities occurred because the browser's AI assistant treated external web content as trusted input instead of viewing it as potentially dangerous, allowing attackers to manipulate the assistant into extracting private data.
Solution / Mitigation
The source does not describe a specific fix or mitigation. It states 'If you want to learn more about how Perplexity addressed these findings, please see their corresponding blog post and research paper on addressing prompt injection within AI browser agents,' but the actual solutions are not detailed in this document. N/A -- specific mitigation details not provided in this source.
Classification
Affected Vendors
Related Issues
Original source: https://blog.trailofbits.com/2026/02/20/using-threat-modeling-and-prompt-injection-to-audit-comet/
First tracked: February 20, 2026 at 03:00 PM
Classified by LLM (prompt v3) · confidence: 92%