{"data":{"id":"e6ccea14-7cc6-436c-8eb6-cf4230c5b54c","title":"Exploring the Vulnerabilities of Federated Learning: A Deep Dive Into Gradient Inversion Attacks","summary":"Federated Learning (FL, a method where multiple computers train an AI model together without sharing raw data) can leak private information through gradient inversion attacks (GIA, techniques that reconstruct sensitive data from the mathematical updates used in training). This paper reviews three categories of GIA methods, finds that optimization-based GIA is the most practical while generation-based and analytics-based GIA have significant limitations, and proposes a three-stage defense pipeline for FL frameworks.","solution":"The source mentions 'a three-stage defense pipeline to users when designing FL frameworks and protocols for better privacy protection,' but does not explicitly describe what this pipeline contains or how to implement it.","labels":["security","research"],"sourceUrl":"http://ieeexplore.ieee.org/document/11311346","publishedAt":"2025-12-22T13:17:30.000Z","cveId":null,"cweIds":null,"cvssScore":null,"cvssSeverity":null,"severity":"info","attackType":["data_extraction","model_poisoning"],"issueType":"research","affectedPackages":null,"affectedVendors":[],"affectedVendorsRaw":[],"classifierModel":"claude-haiku-4-5-20251001","classifierPromptVersion":"v3","cvssVector":null,"attackVector":null,"attackComplexity":null,"privilegesRequired":null,"userInteraction":null,"exploitMaturity":null,"epssScore":null,"patchAvailable":null,"disclosureDate":null,"capecIds":null,"crossRefCount":0,"attackSophistication":"advanced","impactType":["confidentiality","integrity"],"aiComponentTargeted":"training_data","llmSpecific":false,"classifierConfidence":0.92,"researchCategory":"peer_reviewed","atlasIds":null}}