Optimizing stealthiness in universal adversarial perturbations via class-selective and perceptual similarity metrics
Summary
Universal Adversarial Perturbations (UAPs) are single, tiny perturbations that, once crafted, fool a deep learning model across many different input images, making them a serious security threat. Existing UAP methods, however, are easy to detect: the perturbations are either visible to humans or cause implausible misclassifications. This paper presents Stealthy-UAP, a framework that makes UAPs harder to detect in two ways: it redirects inputs only toward semantically related classes (so misclassifications look plausible) and it optimizes the perturbation under a perceptual similarity metric that models how humans actually perceive images.
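The paper's own implementation is not reproduced here. As a rough illustration of the two ingredients the summary describes, the sketch below optimizes a single perturbation with (1) a class-selective loss that only pushes predictions toward a set of semantically related target classes and (2) a perceptual penalty that keeps the perturbed image close to the original. Everything specific is an assumption: the model, the `related_classes` set, the 224x224 input size, all hyperparameters, and LPIPS standing in for whatever perceptual metric the paper actually uses.

```python
# Minimal sketch of a "stealthy" UAP training loop (not the authors' code).
import torch
import torch.nn.functional as F
import lpips  # pip install lpips; LPIPS is a stand-in perceptual metric here

def train_stealthy_uap(model, loader, related_classes, eps=10/255,
                       lam=1.0, lr=0.01, steps=1000, device="cuda"):
    model.eval()
    percep = lpips.LPIPS(net="alex").to(device)  # perceptual similarity (assumption)
    # One universal perturbation, broadcast over every image in the batch.
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    targets = torch.tensor(related_classes, device=device)

    it = iter(loader)
    for _ in range(steps):
        try:
            x, _ = next(it)
        except StopIteration:
            it = iter(loader)
            x, _ = next(it)
        x = x.to(device)
        x_adv = (x + delta).clamp(0, 1)

        logits = model(x_adv)
        log_p = F.log_softmax(logits, dim=1)
        # Class-selective objective: maximize the total probability mass on
        # the semantically related classes, so misclassifications look plausible.
        attack_loss = -torch.logsumexp(log_p[:, targets], dim=1).mean()
        # Perceptual penalty: LPIPS expects inputs scaled to [-1, 1].
        percep_loss = percep(x_adv * 2 - 1, x * 2 - 1).mean()

        loss = attack_loss + lam * percep_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the universal perturbation within an L-infinity budget.
        delta.data.clamp_(-eps, eps)

    return delta.detach()
```

The weight `lam` trades attack strength against imperceptibility; raising it favors perturbations that the perceptual metric (and, by proxy, humans) can barely see.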
Original source: https://www.sciencedirect.com/science/article/pii/S221421262600089X?dgcid=rss_sd_all
First tracked: April 20, 2026 at 02:00 PM
Classified by LLM (prompt v3) · confidence: 85%