AdaptiveShield: Dynamic Defense Against Decentralized Federated Learning Poisoning Attacks
Summary
Federated learning (a scheme in which decentralized devices collaboratively train a shared AI model while keeping their data local) is vulnerable to poisoning attacks, in which malicious participants inject corrupted data or model updates to degrade the final model. This paper proposes AdaptiveShield, a defense system that identifies attackers with dynamic detection strategies, automatically adjusts its sensitivity thresholds to handle different attack types, limits the damage from attackers that evade detection by adjusting hyperparameters (settings that control how the model learns), and hides user identities behind a shuffling mechanism to protect privacy.
Solution / Mitigation
AdaptiveShield employs: (1) dynamic detection strategies that assess the maliciousness of uploaded models and adjust detection thresholds to adapt to varied attack scenarios; (2) dynamic hyperparameter adjustment to minimize the negative impact of missed attackers and enhance robustness; and (3) a hierarchical shuffle mechanism that dissociates user identities from their uploaded local models, providing privacy protection.
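A minimal sketch of how components (1) and (3) could fit together. The scoring rule (cosine similarity to the mean update), the threshold-update rule, and all function names are assumptions for illustration; the paper's actual detection statistic and hierarchical shuffle protocol are not specified in this summary, and the single-layer shuffle below only stands in for the latter.

```python
import math
import random

def cosine(u, v):
    """Cosine similarity between two flat update vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def detect(updates, threshold):
    """Flag updates whose similarity to the coordinate-wise mean
    falls below `threshold` (an assumed maliciousness score)."""
    dim = len(updates[0])
    mean = [sum(u[i] for u in updates) / len(updates) for i in range(dim)]
    scores = [cosine(u, mean) for u in updates]
    flagged = [i for i, s in enumerate(scores) if s < threshold]
    return flagged, scores

def adapt_threshold(threshold, scores, flagged, rate=0.1):
    """Nudge the threshold toward a margin just below the weakest
    benign score (a simple EMA-style dynamic adjustment)."""
    benign = [s for i, s in enumerate(scores) if i not in flagged]
    if benign:
        target = min(benign) - 0.05
        threshold += rate * (target - threshold)
    return threshold

def shuffle_uploads(updates, seed=0):
    """Permute uploads before scoring so the detector never learns
    which client sent which model (single-layer stand-in for the
    paper's hierarchical shuffle)."""
    order = list(range(len(updates)))
    random.Random(seed).shuffle(order)
    return [updates[i] for i in order]
```

For example, three roughly aligned benign updates and one sign-flipped update: `detect(shuffle_uploads(updates), 0.0)` flags the inverted vector by its negative similarity score, and `adapt_threshold` then tightens the threshold for the next round.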
Original source: http://ieeexplore.ieee.org/document/11288007
First tracked: March 16, 2026 at 08:02 PM
Classified by LLM (prompt v3) · confidence: 85%