Enhancing the Security of Large Character Set CAPTCHAs Using Transferable Adversarial Examples
Summary
Deep learning attacks have successfully cracked CAPTCHAs (automated tests that distinguish humans from bots) built on large character sets, such as those drawn from languages like Chinese. This paper proposes ACG (Adversarial Large Character Set CAPTCHA Generation), a framework that makes CAPTCHAs harder to attack by adding adversarial perturbations (intentional distortions that confuse AI recognition systems) through two modules: one that prevents individual characters from being recognized and another that adds global visual noise. The authors report that ACG reduces attack success rates from 51.52% to 2.56%.
Solution / Mitigation
The paper proposes ACG (Adversarial Large Character Set CAPTCHA Generation) as a defense framework. According to the source, ACG uses 'a Fine-grained Generation Module, combining three novel strategies to prevent attackers from recognizing characters, and an Ensemble Generation Module to generate global perturbations in CAPTCHAs' to strengthen defense against recognition attacks and improve robustness against diverse detection architectures.
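The source does not detail how ACG's perturbations are computed, but the underlying idea of an adversarial perturbation can be sketched with a generic FGSM-style step (a standard technique, not necessarily the paper's method). The sketch assumes a surrogate recognition model has supplied a loss gradient with respect to the CAPTCHA image; the function name and epsilon value are illustrative:

```python
import numpy as np

def fgsm_perturb(image, grad, eps=8 / 255):
    """Generic FGSM-style step (illustrative, not ACG's actual algorithm):
    shift each pixel by eps in the direction that increases the
    recognizer's loss, i.e. along the sign of the gradient."""
    adv = image + eps * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)  # keep pixel values in the valid [0, 1] range

# Toy demo: a random stand-in for a CAPTCHA image and a random stand-in
# for a loss gradient that a surrogate recognition model would provide.
rng = np.random.default_rng(0)
captcha = rng.uniform(0.0, 1.0, size=(32, 32))
grad = rng.normal(size=(32, 32))

adv_captcha = fgsm_perturb(captcha, grad)
# The perturbation is bounded: no pixel moves by more than eps,
# so the image stays visually similar for humans while shifting
# the recognizer's decision.
assert np.max(np.abs(adv_captcha - captcha)) <= 8 / 255 + 1e-9
```

ACG's Fine-grained and Ensemble Generation Modules presumably go well beyond this single-step sketch (per the source, targeting character recognition specifically and generating global perturbations across model ensembles), but the bounded-perturbation principle is the same.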
Original source: http://ieeexplore.ieee.org/document/11288041
First tracked: March 16, 2026 at 10:04 PM
Classified by LLM (prompt v3) · confidence: 75%