Adversarial Semantic and Label Perturbation Attack for Pedestrian Attribute Recognition
Summary
This research paper explores vulnerabilities in Pedestrian Attribute Recognition (PAR), a computer vision task in which an AI model identifies characteristics of people in images. The authors develop adversarial attacks that perturb both image semantics and attribute labels to fool PAR models, propose a defense strategy called semantic offset defense to protect such systems, and evaluate their approach on multiple datasets.
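To make the attack side concrete, the sketch below shows a generic FGSM-style (fast gradient sign method) perturbation against a toy multi-label attribute classifier. This is an illustrative stand-in, not the paper's actual attack: the model (a linear layer with per-attribute sigmoids), the loss, and all weights are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, W, b):
    # One sigmoid score per pedestrian attribute (multi-label output).
    return sigmoid(W @ x + b)

def bce(p, y):
    # Summed binary cross-entropy over all attributes.
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).sum())

def fgsm_perturb(x, y, W, b, eps=0.05):
    # For sigmoid + BCE, d(loss)/d(logits) = p - y, so the input
    # gradient is W^T (p - y). FGSM steps in the sign of that
    # gradient to increase the loss under an L-infinity budget eps.
    p = predict(x, W, b)
    grad_x = W.T @ (p - y)
    return x + eps * np.sign(grad_x)

# Toy setup: an 8-dim "image" feature vector and 3 attributes.
W = rng.normal(size=(3, 8))
b = np.zeros(3)
x = rng.normal(size=8)
y = (predict(x, W, b) > 0.5).astype(float)  # current predictions as labels

x_adv = fgsm_perturb(x, y, W, b)

# The adversarial input raises the loss against the clean labels.
print(bce(predict(x_adv, W, b), y) > bce(predict(x, W, b), y))
```

A real PAR attack would backpropagate through a deep network rather than a linear map, but the mechanism — a small, targeted input perturbation that flips attribute predictions — is the same.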
Solution / Mitigation
The paper proposes a semantic offset defense strategy to suppress the influence of adversarial attacks on pedestrian attribute recognition systems. Source code is available at https://github.com/Event-AHU/OpenPAR.
Classification
Affected Vendors
Related Issues
Original source: http://ieeexplore.ieee.org/document/11430632
First tracked: March 20, 2026 at 08:03 AM
Classified by LLM (prompt v3) · confidence: 85%