FUBA: Backdoor Federated Learning via Federated Unlearning
Summary
Researchers have disclosed FUBA (Federated Unlearning Backdoor Attack), which exploits a privacy feature of federated learning (a technique where multiple parties jointly train an AI model without sharing their raw data). Unlearning requests are meant to let a participant remove its data's influence from a trained model; FUBA abuses them to covertly inject backdoors (hidden harmful behaviors) into the model instead. Because the malicious update arrives through a legitimate unlearning mechanism, the attack evades existing security defenses and is difficult to detect.
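The core mechanism described above, an unlearning request that injects rather than removes influence when honored, can be illustrated with a toy scalar model. Everything here (the function names, the averaging aggregator, and the naive subtraction-based unlearning rule) is an illustrative assumption, not the paper's actual algorithm:

```python
# Toy sketch: a crafted unlearning request shifts a global model toward an
# attacker-chosen target. Scalar "model" for simplicity; the unlearning rule
# (subtracting a client's averaged contribution) is a hypothetical baseline.

def fedavg(updates):
    """Server aggregates client updates by simple averaging."""
    return sum(updates) / len(updates)

def naive_unlearn(global_model, client_update, num_clients):
    """Naive unlearning: subtract the client's averaged contribution."""
    return global_model - client_update / num_clients

# Honest round: three clients all push the model toward 1.0.
honest_updates = [1.0, 1.0, 1.0]
global_model = fedavg(honest_updates)

# The attacker previously reported an update crafted so that *removing* it
# drags the model to a backdoor target instead of restoring honest behavior.
backdoor_target = 5.0
crafted = len(honest_updates) * (global_model - backdoor_target)

unlearned = naive_unlearn(global_model, crafted, len(honest_updates))
print(unlearned)  # lands exactly on the backdoor target, 5.0
```

The sketch shows only why an unverified unlearning pipeline is a viable injection channel: the server cannot distinguish a contribution whose removal restores the model from one whose removal poisons it.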
Classification
Original source: http://ieeexplore.ieee.org/document/11231135
First tracked: April 30, 2026 at 08:03 PM
Classified by LLM (prompt v3) · confidence: 85%