AMF-CFL: Anomaly model filtering based on clustering in federated learning
Summary
Federated learning (a paradigm in which multiple participants train a shared model without sharing their raw data) is vulnerable to malicious clients who submit harmful model updates. This paper proposes AMF-CFL, a defense that combines multi-k-means clustering (running k-means with several values of k to group similar updates) with z-score statistical analysis (a way to identify unusual values) to filter out malicious updates and protect the global model, even when clients have non-i.i.d. data distributions (when each participant's data differs significantly in type and quantity).
Solution / Mitigation
AMF-CFL reduces the influence of malicious updates through a two-step filtering strategy: it first applies multi-k-means clustering to identify anomalous update patterns, then applies z-score-based statistical analysis to refine the selection of benign updates before aggregation.
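The paper itself only outlines the two-step strategy, but the general shape can be sketched. The snippet below is a minimal illustration, not the authors' implementation: it clusters flattened client updates with k-means for several values of k, votes against updates that repeatedly fall into the cluster farthest from the coordinate-wise median, and then refines the survivors with a z-score test on update norms. The function names (`kmeans`, `filter_updates`), the voting rule, and the thresholds are all assumptions chosen for the sketch.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Lloyd's k-means with deterministic farthest-point initialisation."""
    centers = [X[0]]
    for _ in range(k - 1):
        # Next center: the point farthest from all chosen centers.
        d = np.min(np.linalg.norm(X[:, None] - np.array(centers)[None], axis=2), axis=1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def filter_updates(updates, ks=(2, 3, 4), z_thresh=2.0):
    """Two-step filter sketch: multi-k-means voting, then z-score refinement."""
    X = np.stack(updates)
    votes = np.zeros(len(X), dtype=int)
    median = np.median(X, axis=0)  # robust reference point (assumption)
    # Step 1: for several k, cluster the updates and vote against members
    # of the cluster whose centroid lies farthest from the median update.
    for k in ks:
        labels, centers = kmeans(X, k)
        outlier_cluster = np.argmax(np.linalg.norm(centers - median, axis=1))
        votes += labels == outlier_cluster
    candidates = votes <= len(ks) // 2  # flagged by at most a minority of runs
    # Step 2: z-score on update norms, computed over the surviving candidates.
    norms = np.linalg.norm(X, axis=1)
    mu, sigma = norms[candidates].mean(), norms[candidates].std()
    z = np.abs(norms - mu) / sigma if sigma > 0 else np.zeros(len(X))
    return candidates & (z < z_thresh)
```

A server would aggregate only the updates where the returned mask is `True`, e.g. by averaging them into the next global model. The two steps are complementary: clustering catches coordinated groups of similar malicious updates, while the z-score pass removes residual outliers the clusters missed.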
Classification
Related Issues
Original source: https://www.sciencedirect.com/science/article/pii/S2214212626000177?dgcid=rss_sd_all
First tracked: March 16, 2026 at 04:12 PM
Classified by LLM (prompt v3) · confidence: 85%