Comments on “APFed: Anti-Poisoning Attacks in Privacy-Preserving Heterogeneous Federated Learning”
Summary
Researchers identified a critical security flaw in APFed, a scheme intended to defend federated learning (a setting in which multiple machines jointly train a model without sharing their raw data) against poisoning attacks while preserving privacy through additive homomorphic encryption (a cryptographic technique that allows computation directly on encrypted data, without decrypting it). Because of this flaw, APFed cannot actually prevent poisoning attacks (attempts to corrupt training by submitting malicious data or model updates), contrary to the original authors' claims.
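To make the cryptographic setting concrete, below is a minimal Python sketch of the additive homomorphic property, using textbook Paillier encryption with toy, insecure parameters. The three-client setup, the function names (encrypt, decrypt, add_encrypted), and the update values are all hypothetical and do not reproduce APFed's actual protocol; the sketch only illustrates the general tension the comment points at: an aggregator that operates on ciphertexts never sees individual plaintext updates, so an encrypted poisoned update is summed like any honest one.

```python
# Minimal sketch of additive homomorphic encryption (textbook Paillier,
# toy parameters -- NOT secure; illustration only). The point: a server
# that sums encrypted client updates cannot inspect them individually,
# so it also cannot screen them for poisoning.
import random
from math import gcd

# Toy primes (insecure key size, chosen for readability).
p, q = 104723, 104729
n = p * q
n2 = n * n
g = n + 1                                       # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)     # decryption constant

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:                       # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add_encrypted(c1: int, c2: int) -> int:
    # Additive homomorphism: E(m1) * E(m2) mod n^2 decrypts to m1 + m2.
    return (c1 * c2) % n2

# Hypothetical quantized gradient values from three clients; the third
# client is malicious and submits an absurdly large (poisoned) update.
updates = [5, 7, 900_000]
ciphertexts = [encrypt(u) for u in updates]

# The server aggregates without decrypting. Every ciphertext looks like
# a random number mod n^2, so the poisoned one cannot be singled out.
aggregate = ciphertexts[0]
for c in ciphertexts[1:]:
    aggregate = add_encrypted(aggregate, c)

print(decrypt(aggregate))  # 900012 -- the poisoned update dominates the sum
```

Running the sketch prints 900012: the aggregate decrypts correctly, yet no step in the pipeline ever had a chance to flag the outlier contribution, which is the kind of conflict between ciphertext-only aggregation and poisoning detection that the comment examines.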
Original source: http://ieeexplore.ieee.org/document/11430628
First tracked: April 6, 2026 at 08:03 PM
Classified by LLM (prompt v3) · confidence: 85%