Rethinking Frequency Modeling: Tail-Aware Dynamic Adversarial Training for Long-Tailed Robustness
Summary
This research addresses the poor performance of adversarial training (a defense that hardens AI models against adversarial attacks, i.e., carefully crafted inputs designed to fool a model) when the training data is long-tailed: a few head classes have many examples while the tail classes have very few. The authors propose Tail-Aware Dynamic Adversarial Training (TAD-AT), which improves robustness by adapting the training loss, the attack strategy, and the weight averaging to each class's vulnerability to attack, rather than to its example count alone.
Solution / Mitigation
The proposed mitigation is Tail-Aware Dynamic Adversarial Training (TAD-AT), which consists of three components: (1) a training loss with frequency- and accuracy-aware regularization that emphasizes learning for vulnerable classes, (2) an attack that scales perturbations by class-wise vulnerability to encourage robust feature learning, and (3) a weight-averaging scheme whose decay rate adapts across classes to improve robust generalization and training stability. Code is available at https://github.com/bookman233/TADAT.
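To make the first component concrete, the sketch below shows one plausible way to combine class frequency and per-class robust accuracy into loss weights, then apply them in a weighted cross-entropy. This is a minimal illustration under assumptions: the weighting formula, the `alpha`/`beta` exponents, and the function names are hypothetical and do not reproduce the paper's actual formulation.

```python
import numpy as np

def class_weights(counts, robust_acc, alpha=1.0, beta=1.0):
    """Hypothetical tail-aware weighting: upweight classes that are
    rare (low frequency) and vulnerable (low robust accuracy).
    alpha/beta trade off the two signals; both are assumptions."""
    freq = counts / counts.sum()
    w = (freq ** -alpha) * ((1.0 - robust_acc) ** beta + 1e-8)
    return w / w.mean()  # normalize so the average weight is 1

def weighted_ce(logits, labels, weights):
    """Cross-entropy where each example's loss is scaled by the
    weight of its ground-truth class."""
    z = logits - logits.max(axis=1, keepdims=True)       # stabilize
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_example = -logp[np.arange(len(labels)), labels]
    return float((weights[labels] * per_example).mean())

# Example: 3 classes, long-tailed counts, tail class least robust.
counts = np.array([1000.0, 100.0, 10.0])
robust_acc = np.array([0.8, 0.5, 0.1])
w = class_weights(counts, robust_acc)
# The rare, vulnerable tail class receives the largest weight.
```

In this toy setup the tail class (10 examples, 10% robust accuracy) dominates the weighting, so adversarial examples from that class contribute most to the loss, which is the qualitative behavior the paper's loss is described as encouraging.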
Classification
Original source: http://ieeexplore.ieee.org/document/11458004
First tracked: April 27, 2026 at 08:03 PM
Classified by LLM (prompt v3) · confidence: 85%