PPOM-Attack: A Substitute Model-Free Perturbation Prediction and Optimization Method for Black-Box Adversarial Attack Against Face Recognition
Summary
Researchers developed PPOM-Attack, a method for fooling face recognition (FR) systems by generating adversarial images (slightly altered photos that trick an AI model into misidentifying someone). Unlike earlier attacks that rely on substitute models (simpler AI systems trained to mimic the target system), PPOM-Attack directly queries the real face recognition system to learn how to predict and optimize effective perturbations (tiny pixel changes). The authors report attack success rates up to 21.7% higher than prior methods while keeping the altered images natural-looking.
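To illustrate the general idea of a substitute-model-free, query-based attack, here is a minimal sketch in Python. This is not the paper's PPOM algorithm; it is a generic random-search loop that repeatedly queries a black-box similarity score and keeps perturbations that lower it, under an L-infinity budget. The `query_score` function and its linear stand-in scorer are hypothetical placeholders for the real FR system's API.

```python
import numpy as np

def query_score(image, model):
    """Hypothetical black-box query: returns the FR system's
    similarity score between `image` and the enrolled identity.
    `model` is a stand-in linear scorer for illustration only."""
    return float(model @ image.ravel())

def black_box_attack(image, model, eps=0.05, steps=200, seed=0):
    """Query-based random-search sketch (not the paper's method):
    perturb one random pixel at a time and keep only changes that
    lower the similarity score, within an L-infinity budget `eps`."""
    rng = np.random.default_rng(seed)
    adv = image.copy()
    best = query_score(adv, model)
    orig = image.ravel()
    for _ in range(steps):
        idx = rng.integers(adv.size)        # pick a random coordinate
        sign = rng.choice([-1.0, 1.0])      # pick a random direction
        cand = adv.copy()
        flat = cand.ravel()
        # stay within the per-pixel budget around the original image
        flat[idx] = np.clip(flat[idx] + sign * eps,
                            orig[idx] - eps, orig[idx] + eps)
        score = query_score(cand, model)
        if score < best:                    # keep only improvements
            adv, best = cand, score
    return adv, best
```

Because the loop only ever accepts score-decreasing candidates, the returned score never exceeds the initial one, and the clipping step guarantees the perturbation stays within the `eps` budget, which is what keeps the altered image visually close to the original.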
Classification
Original source: http://ieeexplore.ieee.org/document/11406187
First tracked: March 16, 2026 at 04:14 PM
Classified by LLM (prompt v3) · confidence: 85%