BlindU: Blind Machine Unlearning Without Revealing Erasing Data
Summary
BlindU is a method that lets users remove their data's influence from a trained AI model while keeping that data hidden from the server. Instead of uploading raw data to the server (a privacy risk), users generate compressed representations of their data locally, and the server performs the removal process only on these representations, making the approach practical for federated learning (a distributed training setup where raw data stays on users' devices).
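The split of responsibilities described above can be sketched minimally: the user compresses the erasing data on-device and only the compressed representations (plus labels) ever reach the server. The encoder here is a fixed random projection purely for illustration; in BlindU it would be a learned information-bottleneck encoder, and `compress`, `W_enc`, and the payload layout are all hypothetical names, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- User side (hypothetical): compress raw erasing data locally ---
# W_enc stands in for the user's local encoder; BlindU would use a
# learned information-bottleneck encoder, not a random projection.
W_enc = rng.standard_normal((8, 3))

def compress(x):
    """Map raw inputs (dim 8) to low-dimensional representations (dim 3)."""
    return np.tanh(x @ W_enc)

raw_erasing_data = rng.standard_normal((4, 8))  # never leaves the device
labels = np.array([0, 1, 0, 1])

representations = compress(raw_erasing_data)

# --- What actually gets uploaded: representations and labels only ---
payload = {"reps": representations, "labels": labels}
```

The privacy argument rests on the server seeing only `payload`: unlearning updates are computed against the representations, so the raw erasing data is never exposed.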
Solution / Mitigation
BlindU implements unlearning through several stated mechanisms: (1) 'the user locally generates privacy-preserving representations, and the server performs unlearning solely on these representations and their labels', (2) use of an information bottleneck mechanism that 'learns representations that distort maximum task-irrelevant information from inputs', (3) 'two dedicated unlearning modules tailored explicitly for IB-based models and uses a multiple gradient descent algorithm to balance forgetting and utility retaining', and (4) 'a noise-free differential privacy masking method to deal with the raw erasing data before compressing' for additional privacy protection.
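Mechanism (3) balances two competing gradients, one for forgetting and one for retaining utility, via a multiple gradient descent algorithm. For two objectives, MGDA has a well-known closed form: pick the convex combination of the two gradients with minimum norm. The sketch below shows that two-task case only; it is an assumption-laden illustration of the general technique, not BlindU's actual update rule, and `mgda_two_task` is a hypothetical helper name.

```python
import numpy as np

def mgda_two_task(g_forget, g_retain):
    """Min-norm convex combination of two gradients (two-task MGDA).

    Solves min_a ||a * g_forget + (1 - a) * g_retain||^2 over a in [0, 1]
    and returns the resulting combined descent direction.
    """
    diff = g_forget - g_retain
    denom = diff @ diff
    if denom == 0.0:                     # both objectives already agree
        return g_forget
    alpha = np.clip((g_retain @ (g_retain - g_forget)) / denom, 0.0, 1.0)
    return alpha * g_forget + (1.0 - alpha) * g_retain

# Toy example: forgetting and retaining pull in orthogonal directions,
# so the min-norm combination splits the difference.
g_f = np.array([1.0, 0.0])
g_r = np.array([0.0, 1.0])
g = mgda_two_task(g_f, g_r)              # → array([0.5, 0.5])
```

The min-norm solution guarantees the combined step does not increase either objective to first order, which is why MGDA is a natural fit for trading off forgetting against utility retention.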
Classification
Original source: http://ieeexplore.ieee.org/document/11353053
First tracked: April 6, 2026 at 08:03 PM
Classified by LLM (prompt v3) · confidence: 88%