Octopus: A Robust and Privacy-Preserving Scheme for Compressed Gradients in Federated Learning
Summary
Federated learning (a way for multiple parties to train an AI model together without sharing their raw data with a central server) normally requires many communication rounds, which consume significant bandwidth and can leak private information. Existing compression methods reduce communication but ignore privacy risks and break down when some clients disconnect. Octopus addresses these issues by using Sketch (a randomized data compression technique) to compress gradients (the direction and size of updates to a model), adding protective masks over the compressed data, and including a strategy to handle disconnected clients.
Solution / Mitigation
Octopus employs Sketch to compress gradients and embeds masks into the compressed gradients to safeguard them while reducing communication overhead. It also proposes an anti-disconnection strategy that supports model updates even when some clients are disconnected.
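To make the two core ideas concrete, here is a minimal, hedged Python sketch (not the paper's actual implementation): a gradient vector is compressed with a Count Sketch, and each client adds pairwise additive masks that cancel when the server sums the uploads, so the server only sees the aggregate. All function names, parameters, and the choice of Count Sketch and pairwise masking are illustrative assumptions; the paper's anti-disconnection strategy is not shown.

```python
import numpy as np

def count_sketch(grad, width, depth, seed=0):
    """Compress a gradient vector into a depth x width Count Sketch.
    All clients must share the same seed so their sketches align."""
    rng = np.random.default_rng(seed)
    d = grad.size
    # Hash each coordinate to one bucket per row, with a random sign.
    buckets = rng.integers(0, width, size=(depth, d))
    signs = rng.choice([-1.0, 1.0], size=(depth, d))
    sketch = np.zeros((depth, width))
    for r in range(depth):
        np.add.at(sketch[r], buckets[r], signs[r] * grad)
    return sketch

def pairwise_masks(num_clients, shape, seed=42):
    """Additive masks that cancel when summed over all clients
    (illustrative stand-in for the paper's masking scheme)."""
    rng = np.random.default_rng(seed)
    masks = [np.zeros(shape) for _ in range(num_clients)]
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            m = rng.normal(size=shape)
            masks[i] += m   # client i adds the shared pairwise mask
            masks[j] -= m   # client j subtracts it, so the pair cancels
    return masks

# Three clients compress their gradients, then mask the sketches.
grads = [np.ones(1000) * (k + 1) for k in range(3)]
sketches = [count_sketch(g, width=64, depth=3, seed=0) for g in grads]
masks = pairwise_masks(3, sketches[0].shape)
uploads = [s + m for s, m in zip(sketches, masks)]

# Each individual upload is hidden by its mask, but the server's sum
# of masked uploads equals the sum of the plain sketches.
aggregate = sum(uploads)
assert np.allclose(aggregate, sum(sketches))
```

The key property is that no single upload reveals a client's sketch, yet aggregation is exact; handling a dropped client would require reconstructing or canceling its pairwise masks, which is the role of the anti-disconnection strategy the paper proposes.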
Classification
Original source: http://ieeexplore.ieee.org/document/11194741
First tracked: February 12, 2026 at 02:22 PM
Classified by LLM (prompt v3) · confidence: 88%