The security intelligence platform for AI teams
AI security threats move fast and get buried under hype and noise. This platform, built by an Information Systems Security researcher, helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.
Independent research. No sponsors, no paywalls, no conflicts of interest.
No new AI/LLM security issues were identified today.
BlindU is a method that allows users to remove their data's influence from trained AI models while keeping that data hidden from the server. Instead of uploading raw data to the server (which creates privacy risks), BlindU lets users create compressed versions of their data locally, and the server performs the removal process only on these compressed versions, making it practical for federated learning (a distributed training setup where data stays on users' devices).
How it works: BlindU implements unlearning through four stated mechanisms: (1) 'the user locally generates privacy-preserving representations, and the server performs unlearning solely on these representations and their labels'; (2) an information bottleneck mechanism that 'learns representations that distort maximum task-irrelevant information from inputs'; (3) 'two dedicated unlearning modules tailored explicitly for IB-based models and uses a multiple gradient descent algorithm to balance forgetting and utility retaining'; and (4) 'a noise-free differential privacy masking method to deal with the raw erasing data before compressing' for additional privacy protection.
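To make the client/server split concrete, here is a minimal sketch of the general pattern, not BlindU's actual implementation: a random projection stands in for the information-bottleneck encoder, additive noise stands in for the privacy mask, and gradient ascent on the forget set's loss stands in for the paper's dedicated unlearning modules. All function names and parameters below are illustrative assumptions.

```python
# Illustrative sketch only. The encoder, mask, and unlearning step are
# simplified stand-ins for BlindU's components, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def client_compress(x, proj, noise_scale=0.1):
    """Client side: map raw inputs to compressed representations
    (stand-in for the IB encoder) and add a mask before upload,
    so raw data never leaves the user's device."""
    z = x @ proj  # lossy compression
    mask = rng.normal(0.0, noise_scale, z.shape)  # stand-in privacy mask
    return z + mask

def server_unlearn(weights, z_forget, y_forget, lr=0.5, steps=20):
    """Server side: gradient *ascent* on the forget set's squared loss,
    degrading the model's fit to the erased representations. A common
    unlearning heuristic, used here as a stand-in for BlindU's modules."""
    w = weights.copy()
    for _ in range(steps):
        pred = z_forget @ w
        grad = z_forget.T @ (pred - y_forget) / len(y_forget)
        w += lr * grad  # ascend the forget-set loss
    return w

# Toy federated round: server only ever sees representations Z, never X.
d_raw, d_rep, n = 20, 5, 200
proj = rng.normal(size=(d_raw, d_rep)) / np.sqrt(d_raw)
X = rng.normal(size=(n, d_raw))   # private client data
y = rng.normal(size=n)
Z = client_compress(X, proj)
w = np.linalg.lstsq(Z, y, rcond=None)[0]  # server trains on representations

# A user requests erasure of their 20 examples.
w_un = server_unlearn(w, Z[:20], y[:20])

def mse(w, Z, y):
    return float(np.mean((Z @ w - y) ** 2))

loss_before = mse(w, Z[:20], y[:20])
loss_after = mse(w_un, Z[:20], y[:20])
```

After unlearning, the model's error on the erased examples rises (`loss_after > loss_before`) while the server has only ever handled compressed, masked representations; the real system additionally balances this forgetting against utility on the retained data.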
IEEE Xplore (Security & AI Journals)