Deep Learning With Data Privacy via Residual Perturbation
Summary
This research proposes a new method for protecting data privacy in deep learning (training AI models on sensitive data). The approach adds Gaussian noise (random values drawn from a bell-curve distribution) to the residual branches of ResNets (a type of neural network with skip connections). It aims to provide differential privacy (a mathematical guarantee that an individual's data cannot be easily identified from the model's results) while maintaining better accuracy and training speed than existing privacy-protection techniques such as DPSGD (differentially private stochastic gradient descent, a slower privacy-focused training method).
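The core idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes the Gaussian noise is injected into the residual branch of each block, and the `tanh` transform, function names, and noise scale `sigma` are all placeholders chosen for clarity.

```python
import numpy as np

def perturbed_residual_block(x, weight, sigma, rng):
    """One ResNet-style update x + f(x), with Gaussian noise added
    to the residual branch (the "residual perturbation" idea).

    x:      input activations, shape (batch, dim)
    weight: illustrative weight matrix for the block's transform
    sigma:  standard deviation of the injected Gaussian noise
    rng:    a numpy Generator, so the noise is reproducible
    """
    residual = np.tanh(x @ weight)  # placeholder for the block's learned transform
    noise = rng.normal(0.0, sigma, size=residual.shape)
    return x + residual + noise     # skip connection + perturbed residual

# Illustrative use: larger sigma means stronger privacy protection
# but noisier activations, so accuracy and privacy trade off via sigma.
rng = np.random.default_rng(0)
x = np.ones((2, 3))
out = perturbed_residual_block(x, np.eye(3), sigma=0.1, rng=rng)
```

With `sigma = 0` the block reduces to an ordinary residual update; the privacy guarantee in the paper comes from choosing a nonzero noise scale.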
Classification
Original source: http://ieeexplore.ieee.org/document/11269744
First tracked: February 12, 2026 at 02:22 PM
Classified by LLM (prompt v3) · confidence: 92%