Model Stability Defense Against Model Poisoning in Federated Learning
Summary
Federated learning (a training paradigm in which multiple parties collaborate to build an AI model without sharing raw data) is vulnerable to model poisoning attacks, in which attackers inject harmful updates during training to corrupt the model. This paper proposes MSDFL and HMSDFL, two defensive approaches that harden models by improving their stability: the trained model becomes less sensitive to small changes in its internal parameters, and therefore more resistant to poisoning attacks.
Solution / Mitigation
The source describes the solution explicitly: 'we introduce a new method named Model Stability Defense for Federated Learning (MSDFL), designed to fortify the defense of FL systems against model poisoning attacks. MSDFL utilizes a minmax optimization framework, which is fundamentally linked to empirical risk for exploring the effects of model perturbations. The core aim of our approach is to minimize the norm of the model-output Jacobian matrix without compromising predictive performance, thereby establishing defense through enhanced model stability.' The paper also proposes a refined version, Holistic Model Stability Defense for Federated Learning (HMSDFL), 'which considers model stability across all output dimensions of the logits to effectively eradicate the disparity in model convergence speed induced by MSDFL.'
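To make the quoted objective concrete, the sketch below computes the quantity MSDFL penalizes, the norm of the Jacobian of a model's logits with respect to its parameters, for a tiny hypothetical two-layer network, and checks the first-order intuition behind the defense: a small parameter perturbation delta shifts the logits by roughly J @ delta, so a smaller ||J|| means outputs move less when poisoned updates nudge the weights. The network, its sizes, and the finite-difference estimator are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny 2-layer net: logits(theta) = V @ tanh(W @ x),
# with all parameters flattened into a single vector theta.
d_in, d_h, d_out = 4, 8, 3
W0 = rng.normal(scale=0.5, size=(d_h, d_in))
V0 = rng.normal(scale=0.5, size=(d_out, d_h))
x = rng.normal(size=d_in)  # one fixed input

def unpack(theta):
    W = theta[: d_h * d_in].reshape(d_h, d_in)
    V = theta[d_h * d_in:].reshape(d_out, d_h)
    return W, V

def logits(theta):
    W, V = unpack(theta)
    return V @ np.tanh(W @ x)

theta0 = np.concatenate([W0.ravel(), V0.ravel()])

def param_jacobian(theta, h=1e-5):
    """Central finite-difference Jacobian of the logits w.r.t. theta."""
    J = np.zeros((d_out, theta.size))
    for j in range(theta.size):
        e = np.zeros_like(theta)
        e[j] = h
        J[:, j] = (logits(theta + e) - logits(theta - e)) / (2 * h)
    return J

J = param_jacobian(theta0)
jac_norm = np.linalg.norm(J)  # Frobenius norm: the stability term MSDFL minimizes

# First-order stability check: for a small parameter perturbation delta,
# the logit shift is approximately J @ delta, so shrinking ||J|| directly
# bounds how far a small malicious weight change can move the outputs.
delta = 1e-3 * rng.normal(size=theta0.size)
pred_shift = J @ delta
true_shift = logits(theta0 + delta) - logits(theta0)
rel_err = np.linalg.norm(true_shift - pred_shift) / np.linalg.norm(true_shift)
print(f"||J||_F = {jac_norm:.3f}, first-order relative error = {rel_err:.2e}")
```

In a training loop, a stability defense of this kind would add a multiple of `jac_norm**2` to the empirical risk, trading a small amount of predictive performance for reduced sensitivity; the paper's min-max formulation searches over worst-case perturbations rather than sampling a single random delta as done here.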
Classification
Related Issues
CVE-2024-37052: Deserialization of untrusted data in versions 1.1.0 and newer of the MLflow platform.
CVE-2024-37058: Deserialization of untrusted data in versions 2.5.0 and newer of the MLflow platform.
Original source: http://ieeexplore.ieee.org/document/11194751
First tracked: February 12, 2026 at 02:22 PM
Classified by LLM (prompt v3) · confidence: 92%