SLeak: Multi-Target Privacy Stealing Attack Against Split Learning
Summary
Split Learning (SL) is a distributed learning framework designed to preserve privacy while reducing client-side computation, but a new attack called SLeak allows a server-side adversary to steal both a client's private data and its model. The attack exploits information carried in the smashed data (the intermediate activations passed from client to server) together with the server's own model partition to train a substitute client that mimics the target client's behavior, without requiring strong privacy assumptions or large amounts of auxiliary data. The study shows SLeak is more effective than previous attacks across multiple datasets and scenarios.
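To make the attack surface concrete, here is a minimal sketch of the split-learning forward pass the summary describes: the client computes the smashed data and sends it upstream, and the server completes inference. All names, layer sizes, and weights below are hypothetical placeholders, not details from the paper; the point is only to show what a server-side observer receives.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical split of a two-layer MLP: the client holds the first layer,
# the server holds the second. Weights are random placeholders.
W_client = rng.normal(size=(8, 4))   # client-side layer (private to the client)
W_server = rng.normal(size=(4, 3))   # server-side layer (known to the adversary)

def client_forward(x):
    # Produces the "smashed data": the only thing the client transmits.
    return relu(x @ W_client)

def server_forward(smashed):
    # The server completes the forward pass from the smashed data.
    return smashed @ W_server

x = rng.normal(size=(1, 8))          # a private client input
smashed = client_forward(x)          # what the server actually observes
logits = server_forward(smashed)

# A SLeak-style adversary observes `smashed` (and holds W_server) across many
# inputs, and can fit a substitute client that reproduces the smashed-data
# distribution, approximating the client model and its private inputs.
print(smashed.shape, logits.shape)   # → (1, 4) (1, 3)
```

The key observation is that the server never sees `x` or `W_client` directly, yet the smashed data it legitimately receives is the signal the attack exploits.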
Original source: http://ieeexplore.ieee.org/document/11353031
First tracked: April 6, 2026 at 08:03 PM
Classified by LLM (prompt v3) · confidence: 85%