ASGA: Attention-Based Sparse Global Attack to Video Action Recognition
Summary
This paper presents ASGA, a method for crafting adversarial attacks (small, deliberately crafted changes meant to trick AI models) on video action recognition systems (models that identify what actions people are performing in videos). The key innovation is that the attacker computes perturbations (the malicious changes) just once, on a small set of important keyframes (frames selected to represent the video's content), and then replicates those changes across the entire video. This keeps the attack effective even when the target model samples frames differently than the attacker did, and it substantially reduces computational cost compared with perturbing every frame.
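The replication idea described above can be sketched in a few lines: a perturbation computed on a single keyframe is broadcast onto every frame of the clip, then clipped to stay within a small budget and a valid pixel range. This is a simplified illustration only; the function name, the ε budget, and the clipping scheme are assumptions for the sketch, not the paper's actual implementation.

```python
import numpy as np

def replicate_keyframe_perturbation(video, perturbation, eps=8 / 255):
    """Apply a perturbation computed once on a keyframe to every frame.

    video:        array of shape (T, H, W, C), pixel values in [0, 1]
    perturbation: array of shape (H, W, C), computed on a keyframe
    eps:          per-pixel perturbation budget (assumed, illustrative)
    """
    attacked = video + perturbation                       # broadcast over all T frames
    attacked = np.clip(attacked, video - eps, video + eps)  # enforce the budget
    return np.clip(attacked, 0.0, 1.0)                    # stay in valid pixel range

# Toy example: a 16-frame clip of 32x32 RGB frames.
video = np.random.rand(16, 32, 32, 3).astype(np.float32)
# Stand-in for a perturbation optimized on one keyframe.
delta = (np.random.rand(32, 32, 3).astype(np.float32) - 0.5) * (8 / 255)
adv = replicate_keyframe_perturbation(video, delta)
```

Because the same (H, W, C) perturbation is broadcast to all frames, the expensive optimization runs once rather than per frame, which is the cost saving the summary refers to.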
Classification
Related Issues
Original source: http://ieeexplore.ieee.org/document/11182617
First tracked: February 12, 2026 at 02:22 PM
Classified by LLM (prompt v3) · confidence: 85%