BioGuard: A malicious-sample-free defense method for biometric classifiers against model extraction attacks
Summary
Researchers have developed BioGuard, a defense method that protects biometric classifiers (AI systems that identify people from fingerprints, faces, or iris scans) against model extraction attacks, in which an attacker tries to steal or copy the model by repeatedly querying it and training a surrogate on the returned labels. Unlike many prior defenses, BioGuard requires no malicious query samples for training, which makes it practical for real-world deployment.
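To make the threat concrete, here is a minimal sketch of a model extraction attack, not taken from the paper: a hypothetical linear "victim" classifier stands in for a biometric model, and the attacker, seeing only query-label pairs, trains a surrogate perceptron that closely mimics it. All names and the victim's decision rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "victim" biometric classifier: labels a 2-D feature vector
# (a stand-in for a fingerprint/face embedding) by a fixed linear rule.
# The attacker cannot see this rule, only its outputs.
def victim_predict(x):
    return int(x[0] + 2.0 * x[1] > 1.0)

# Extraction step 1: the attacker issues many queries and records labels.
queries = rng.uniform(-2, 2, size=(500, 2))
labels = np.array([victim_predict(x) for x in queries])

# Extraction step 2: train a surrogate (here a simple perceptron)
# on the stolen query-label pairs.
w, b = np.zeros(2), 0.0
for _ in range(20):
    for x, y in zip(queries, labels):
        pred = int(x @ w + b > 0)
        if pred != y:
            w += (y - pred) * x
            b += (y - pred)

# Measure how closely the stolen surrogate mimics the victim on fresh inputs.
test = rng.uniform(-2, 2, size=(200, 2))
agree = np.mean([int(x @ w + b > 0) == victim_predict(x) for x in test])
print(f"surrogate/victim agreement: {agree:.2f}")
```

High agreement after only a few hundred queries is what makes extraction attacks dangerous; defenses like the one summarized above aim to detect or degrade exactly this kind of query-driven copying.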
Original source: https://www.sciencedirect.com/science/article/pii/S0167404826000957?dgcid=rss_sd_all
First tracked: April 17, 2026 at 08:00 PM
Classified by LLM (prompt v3) · confidence: 85%