Model Inversion Attack Against Federated Unlearning
Summary
Researchers have described a new attack, the federated unlearning inversion attack (FUIA), that extracts private data from federated unlearning (FU), the process of removing a specific client's data influence from a model trained collaboratively across multiple machines. A malicious server observes how the model's parameters change during the unlearning process and reconstructs the forgotten data from those changes, undermining the privacy protection that FU is supposed to provide.
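To illustrate why parameter differences can leak the forgotten data, here is a minimal toy sketch, assuming a linear model with squared-error loss and a one-step unlearning update. None of this is the paper's actual method or models; it only demonstrates the underlying principle that the observed parameter difference encodes the unlearned sample's gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): a linear model and one private
# sample whose influence is to be unlearned.
d = 5
w = rng.normal(size=d)         # global model parameters before unlearning
x_true = rng.normal(size=d)    # the forgotten sample's features
y_true = 1.0                   # its label

def grad(w, x, y):
    # Gradient of 0.5 * (w.x - y)^2 with respect to w.
    return (w @ x - y) * x

# Simplified unlearning step: one gradient-ascent update that reverses the
# sample's contribution (a stand-in for a real FU procedure).
lr = 0.1
w_after = w + lr * grad(w, x_true, y_true)

# The malicious server sees both parameter vectors; dividing the observed
# difference by the known learning rate recovers the sample's gradient.
g = (w_after - w) / lr

# Since g = (w.x - y) * x, the gradient is parallel to the sample itself,
# so its direction leaks immediately:
x_dir = g / np.linalg.norm(g)

# With the label, the unknown scale r = w.x - y satisfies
#   r = w.(g / r) - y   =>   r^2 + y*r - (w.g) = 0,
# leaving at most two candidate reconstructions of the forgotten sample.
disc = np.sqrt(y_true**2 + 4 * (w @ g))
roots = [(-y_true + disc) / 2, (-y_true - disc) / 2]
recs = [g / r for r in roots if abs(r) > 1e-12]
```

In this toy setting one of the two candidates equals the original sample exactly. Against deep models, the same principle is typically exploited iteratively, in the style of gradient-inversion attacks: dummy inputs are optimized until their gradients match the observed parameter difference.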
Solution / Mitigation
The source states that "two potential defense strategies that introduce a trade-off between privacy protection and model performance" were explored, but it provides no names, details, or implementations for either strategy.
Classification
Related Issues
CVE-2025-45150: Insecure permissions in LangChain-ChatGLM-Webui commit ef829 allows attackers to arbitrarily view and download sensitive
CVE-2025-54868: LibreChat is a ChatGPT clone with additional features. In versions 0.0.6 through 0.7.7-rc1, an exposed testing endpoint
Original source: http://ieeexplore.ieee.org/document/11400570
First tracked: March 16, 2026 at 04:14 PM
Classified by LLM (prompt v3) · confidence: 92%