Enhancing Adversarial Transferability With Cost-Efficient Landscape Flattening
Summary
This research paper describes CLEF (Cost-efficient LandscapE Flattening), a method that improves adversarial transferability: the ability of adversarial examples (inputs deliberately crafted to fool AI models) to deceive models beyond the one they were designed against. CLEF flattens the input loss landscape (the surface showing how the model's loss varies as its input changes) by optimizing adversarial perturbations (small changes added to inputs) at both high-loss and low-loss points; perturbations that sit in flat, high-loss regions tend to remain effective when the underlying model changes. The researchers report that their approach transfers adversarial examples across different models more reliably than previous methods while using fewer computations.
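The general flattening idea can be illustrated with a toy sketch. This is NOT the paper's CLEF algorithm (whose details are not given here); it is a generic sharpness-aware ascent step in the spirit of landscape-flattening attacks, applied to a hypothetical logistic loss standing in for a real model. All names (`flat_max_step`, `rho`, the linear "model" `w`) are illustrative assumptions: the inner probe steps toward a nearby low-loss point, and the ascent step uses the gradient evaluated there, steering the perturbation toward flat high-loss regions.

```python
import numpy as np

# Toy stand-in for a model: logistic loss of a fixed linear classifier on
# input x with label y. The paper attacks neural networks; any differentiable
# loss suffices to illustrate the flattening step.
w = np.array([1.0, -2.0, 0.5])
y = 1.0

def loss(x):
    # Cross-entropy-style loss; the attacker wants to MAXIMIZE this.
    return np.log1p(np.exp(-y * w.dot(x)))

def grad(x):
    # Analytic gradient of the loss w.r.t. the input x.
    s = -y / (1.0 + np.exp(y * w.dot(x)))
    return s * w

def flat_max_step(x, delta, eps=0.5, alpha=0.1, rho=0.05):
    # Sharpness-aware ascent (illustrative, not CLEF itself):
    # 1) probe a nearby LOW-loss point by stepping against the gradient
    #    within a small radius rho;
    # 2) take the ascent step using the gradient evaluated at that probe,
    #    which favors perturbations lying in flat high-loss regions.
    g = grad(x + delta)
    probe = delta - rho * g / (np.linalg.norm(g) + 1e-12)
    g_flat = grad(x + probe)
    delta = delta + alpha * np.sign(g_flat)   # FGSM-style sign step
    return np.clip(delta, -eps, eps)          # keep the L_inf constraint

x0 = np.array([0.2, -0.1, 0.3])
delta = np.zeros_like(x0)
for _ in range(20):
    delta = flat_max_step(x0, delta)

print(loss(x0 + delta) > loss(x0))  # the perturbation raised the loss
```

In this sketch the inner probe is what separates a flatness-aware step from plain iterative FGSM: the update is driven by the gradient at a deliberately low-loss neighbor rather than at the current point.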
Classification
Related Issues
Original source: http://ieeexplore.ieee.org/document/11395656
First tracked: May 7, 2026 at 08:03 PM
Classified by LLM (prompt v3) · confidence: 88%