Attackers Could Exploit AI Vision Models Using Imperceptible Image Changes
Summary
Researchers at Cisco discovered that attackers can manipulate vision-language models (AI systems that read and interpret images) by making imperceptible changes to image pixels. These changes can render hidden malicious instructions embedded in an image readable to the AI, tricking it into following commands such as exfiltrating data, while content filters and human reviewers see only visual noise or blurry content.
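The article describes the attack only at a high level. For readers unfamiliar with adversarial perturbations, the sketch below illustrates the generic idea of an imperceptible pixel change crafted by gradient descent within a small L-infinity budget. The toy CNN, the epsilon value, and the targeted-class objective are illustrative assumptions only, not the method used in the Cisco research, which targets vision-language models and hidden instructions rather than a classifier label.

```python
# Generic sketch of an imperceptible adversarial perturbation via projected
# gradient descent in an L-infinity ball. NOT the Cisco technique; the toy
# model, epsilon, and target class are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for a vision model: a tiny CNN classifier over 32x32 RGB images.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32)   # stand-in for a benign image
target = torch.tensor([7])         # output the attacker wants the model to emit
epsilon = 4 / 255                  # perturbation budget: invisible to humans
alpha = 1 / 255                    # per-step size
delta = torch.zeros_like(image, requires_grad=True)

for _ in range(40):
    # Loss is low when the model's prediction matches the attacker's target.
    loss = F.cross_entropy(model(image + delta), target)
    loss.backward()
    with torch.no_grad():
        # Step toward the target, then project back into the epsilon ball
        # and keep pixel values in the valid [0, 1] range.
        delta -= alpha * delta.grad.sign()
        delta.clamp_(-epsilon, epsilon)
        delta.copy_((image + delta).clamp(0, 1) - image)
    delta.grad.zero_()

adversarial = image + delta
print("max per-pixel change:", delta.abs().max().item())
print("model output on adversarial image:", model(adversarial).argmax(dim=1).item())
```

The key point the example captures is the budget constraint: every pixel moves by at most epsilon (here 4/255), so the perturbed image is visually indistinguishable from the original even though the model's interpretation of it changes.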
Classification
Affected Vendors
Related Issues
Original source: https://www.securityweek.com/attackers-could-exploit-ai-vision-models-using-imperceptible-image-changes/
First tracked: May 7, 2026 at 02:00 PM
Classified by LLM (prompt v3) · confidence: 85%