Exploring Visual Explanations for Defending Federated Learning against Poisoning Attacks: Enhancing LayerCAM with Autoencoders
Summary
This research paper examines how visual explanation techniques can help defend federated learning (a machine learning approach in which multiple clients jointly train a model without sharing their raw data) against poisoning attacks (attempts to corrupt the training data or the model itself). The authors propose an enhanced version of LayerCAM (a technique that visualizes which regions of an input a model attends to), combined with autoencoders (neural networks that learn to compress and reconstruct data), to detect poisoned model updates and defend against such attacks.
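The summary does not spell out the paper's pipeline, so the following is only a minimal sketch of how a LayerCAM-plus-autoencoder defense could be wired together, assuming the server computes LayerCAM heatmaps for each client's submitted model on a small shared probe set and flags clients whose heatmaps reconstruct poorly. The function names (`layercam`, `HeatmapAE`, `flag_suspicious`), the autoencoder architecture, and the thresholding rule are all illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def layercam(model, layer, x, class_idx=None):
    """LayerCAM: weight each spatial position of the target layer's
    activations by the ReLU of its gradient, then sum over channels."""
    store = {}
    fh = layer.register_forward_hook(lambda m, i, o: store.update(act=o))
    bh = layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1)  # explain the predicted class
    model.zero_grad()
    logits.gather(1, class_idx.view(-1, 1)).sum().backward()
    fh.remove(); bh.remove()
    cam = F.relu((F.relu(store["grad"]) * store["act"]).sum(dim=1))
    cam = cam - cam.amin(dim=(1, 2), keepdim=True)  # normalise each map to [0, 1]
    return cam / cam.amax(dim=(1, 2), keepdim=True).clamp_min(1e-8)

class HeatmapAE(nn.Module):
    """Small convolutional autoencoder trained only on heatmaps from benign
    client models; heatmaps from poisoned models should reconstruct poorly."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid())

    def forward(self, h):
        return self.dec(self.enc(h))

@torch.no_grad()
def flag_suspicious(ae, heatmaps, threshold):
    """Flag clients whose per-heatmap reconstruction error exceeds a
    threshold calibrated on benign heatmaps (an assumption, not the paper's rule)."""
    err = F.mse_loss(ae(heatmaps), heatmaps, reduction="none").mean(dim=(1, 2, 3))
    return err > threshold
```

In use, the server would call `layercam(client_model, target_conv_layer, probe_inputs).unsqueeze(1)` for each client's model, stack the results, and exclude flagged updates from aggregation; the paper's actual enhancement to LayerCAM is detailed only in the original source below.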
Classification
Related Issues
Original source: https://dl.acm.org/doi/abs/10.1145/3799892
First tracked: April 10, 2026 at 02:00 PM
Classified by LLM (prompt v3) · confidence: 85%