Prompt Injection Via Road Signs
Summary
Researchers discovered a new attack called CHAI (Command Hijacking against embodied AI) that tricks AI systems controlling robots and autonomous vehicles by embedding fake instructions in images, such as misleading road signs. The attack exploits Large Visual-Language Models (LVLMs, which are AI systems that understand both images and text together) to make these embodied AI systems (robots that perceive and interact with the physical world) ignore their real commands and follow the attacker's hidden instructions instead. The researchers tested CHAI on drones, self-driving cars, and real robots, showing it works better than previous attack methods.
Classification
Affected Vendors
Related Issues
Original source: https://www.schneier.com/blog/archives/2026/02/prompt-injection-via-road-signs.html
First tracked: February 15, 2026 at 08:49 PM
Classified by LLM (prompt v3) · confidence: 85%