Defending Against Patch-Based and Texture-Based Adversarial Attacks With Spectral Decomposition
Summary
Adversarial examples (inputs crafted to fool AI systems) pose a serious security risk for deep neural networks (AI systems with many layers), especially in physical-world attacks such as fooling object detection in surveillance cameras. This research proposes Adversarial Spectrum Defense (ASD), a defense method that uses spectral decomposition (breaking data into different frequency components) via the Discrete Wavelet Transform (a mathematical technique for analyzing patterns at multiple scales) to detect and defend against patch-based and texture-based adversarial attacks. The authors show that ASD achieves stronger protection when combined with Adversarial Training (training the AI on attack examples to make it more robust).
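The core operation named in the summary is a multi-level 2-D Discrete Wavelet Transform. The sketch below shows what such a decomposition looks like in practice using the PyWavelets library; the wavelet choice ('haar'), the 3-level depth, and the random stand-in image are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of multi-scale spectral decomposition via the 2-D DWT.
# Assumes the PyWavelets library; wavelet and depth are arbitrary choices.
import numpy as np
import pywt

def multiscale_dwt(image: np.ndarray, wavelet: str = "haar", levels: int = 3):
    """Decompose a grayscale image into one low-frequency approximation
    and per-level (horizontal, vertical, diagonal) detail subbands."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    approximation = coeffs[0]   # coarse, low-frequency content
    details = coeffs[1:]        # detail tuples, ordered coarsest to finest
    return approximation, details

# Adversarial patches and textures tend to concentrate energy in the
# high-frequency detail subbands, which is what makes this representation
# useful for spotting them.
image = np.random.rand(256, 256)   # stand-in for a real input image
approx, details = multiscale_dwt(image)
for i, (cH, cV, cD) in enumerate(details, start=1):
    energy = sum(float(np.sum(band ** 2)) for band in (cH, cV, cD))
    print(f"detail energy at scale {i} (coarse to fine): {energy:.2f}")
```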
Solution / Mitigation
The source proposes Adversarial Spectrum Defense (ASD), which 'leverages spectral decomposition via Discrete Wavelet Transform (DWT) to analyze adversarial patterns across multiple frequency scales'. Further, 'by integrating this spectral analysis with the off-the-shelf Adversarial Training (AT) model, ASD provides a comprehensive defense strategy against both patch-based and texture-based adversarial attacks.' The paper reports that 'ASD+AT achieved state-of-the-art (SOTA) performance against various attacks, outperforming the APs of previous defense methods by 21.73%' (AP: average precision, a standard object-detection metric).
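The quoted passage describes the integration only at a high level. A hedged sketch of one plausible arrangement is shown below: a DWT-based energy-ratio check flags suspicious inputs before an off-the-shelf adversarially trained (AT) model classifies them. The energy-ratio rule, its threshold, and the `defended_predict` helper are hypothetical illustrations, not the paper's actual ASD scoring method.

```python
# Hedged sketch: spectral (DWT) screening in front of an AT model.
# The detection rule and threshold are assumptions for illustration only.
import numpy as np
import pywt

def high_frequency_ratio(image: np.ndarray, wavelet: str = "haar") -> float:
    """Fraction of total signal energy in the detail (high-frequency)
    subbands of a single-level 2-D DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    detail_energy = sum(float(np.sum(b ** 2)) for b in (cH, cV, cD))
    total_energy = detail_energy + float(np.sum(cA ** 2))
    return detail_energy / total_energy

def defended_predict(image: np.ndarray, at_model, threshold: float = 0.05):
    """Flag inputs with anomalously high-frequency energy (a common
    signature of adversarial patches/textures), then classify with the
    AT model; `at_model` is any callable classifier."""
    suspicious = high_frequency_ratio(image) > threshold  # hypothetical rule
    return at_model(image), suspicious

# Demo with a stand-in "model"; a real deployment would plug in an
# adversarially trained detector or classifier here.
dummy_at_model = lambda x: "person"
prediction, flagged = defended_predict(np.random.rand(128, 128), dummy_at_model)
print(prediction, flagged)
```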
Original source: http://ieeexplore.ieee.org/document/11482237
First tracked: April 30, 2026 at 08:03 PM
Classified by LLM (prompt v3) · confidence: 85%