Practical Continual Forgetting for Pre-Trained Vision Models
Summary
This research addresses how to remove unwanted information from pre-trained vision models (AI systems trained to understand images) when users or model owners request it, especially when deletion requests arrive sequentially over time. The researchers propose Group Sparse LoRA (GS-LoRA), a technique that uses Low-Rank Adaptation modules (efficient add-on components that modify specific neural network layers) to selectively forget targeted classes or information while keeping the rest of the model performing well, even when some of the original training data is unavailable.
Solution / Mitigation
The paper proposes two explicit solutions: (1) Group Sparse LoRA (GS-LoRA), which uses Low-Rank Adaptation modules to fine-tune the Feed-Forward Network layers in Transformer blocks independently for each forgetting task, combined with group sparse regularization that automatically selects which LoRA groups to keep and zeroes out the rest. (2) GS-LoRA++, an extension that incorporates prototype information as additional supervision: it pushes logits (output scores) away from the original prototypes of forgotten classes while pulling logits closer to the prototypes of remaining classes.
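The two training objectives described above can be illustrated numerically. The function names, the threshold `tau`, and the exact loss forms below are illustrative assumptions, not the paper's implementation; this is a minimal NumPy sketch of a group-lasso penalty over per-block LoRA pairs, hard selection of near-zero groups, and a prototype push/pull term:

```python
import numpy as np

def group_sparse_penalty(lora_groups):
    """Group-lasso penalty: sum of joint Frobenius norms, one group per
    Transformer block's LoRA pair (A, B). Drives entire groups toward zero."""
    return sum(np.sqrt((A ** 2).sum() + (B ** 2).sum()) for A, B in lora_groups)

def prune_groups(lora_groups, tau):
    """Zero out LoRA groups whose joint norm falls below tau, so only the
    blocks relevant to the current forgetting task keep nonzero adapters."""
    out = []
    for A, B in lora_groups:
        if np.sqrt((A ** 2).sum() + (B ** 2).sum()) < tau:
            out.append((np.zeros_like(A), np.zeros_like(B)))
        else:
            out.append((A, B))
    return out

def prototype_loss(feats, protos, forget_mask):
    """Prototype-supervision sketch (GS-LoRA++ style): minimizing this loss
    pulls features of remaining classes toward their class prototypes
    (positive distance term) and pushes forgotten classes away from theirs
    (negated distance term)."""
    d = np.linalg.norm(feats - protos, axis=1)
    return np.where(forget_mask, -d, d).mean()
```

In this sketch, each `(A, B)` pair stands for one block's low-rank adapters; pruning a group to zero leaves that block's original pre-trained weights untouched, which is how the regularizer localizes forgetting to a few layers.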
Classification
Original source: http://ieeexplore.ieee.org/document/11353047
First tracked: April 6, 2026 at 08:03 PM
Classified by LLM (prompt v3) · confidence: 92%