{"data":{"id":"3989c99f-4f4b-4e3e-aef5-a19c105cd2d6","title":"Safeguarding Federated Learning From Data Reconstruction Attacks via Gradient Dropout","summary":"Federated learning (collaborative model training in which participants share only gradients, not raw data) is vulnerable to gradient inversion attacks, where adversaries reconstruct sensitive training data from the shared gradients. The paper proposes Gradient Dropout, a defense that randomly scales some gradient components and replaces the rest with Gaussian noise, disrupting reconstruction attempts while preserving model accuracy.","solution":"Gradient Dropout is applied as a defense mechanism before gradients are shared: a random subset of gradient components is scaled, and the remainder is replaced with Gaussian noise, applied across all layers of the model. According to the source, this approach reduces accuracy by less than 2% relative to the undefended baseline while significantly impeding reconstruction attacks.","labels":["research","security"],"sourceUrl":"http://ieeexplore.ieee.org/document/11367738","publishedAt":"2026-01-29T13:24:04.000Z","cveId":null,"cweIds":null,"cvssScore":null,"cvssSeverity":null,"severity":"info","attackType":["data_extraction"],"issueType":"research","affectedPackages":null,"affectedVendors":[],"affectedVendorsRaw":[],"classifierModel":"claude-haiku-4-5-20251001","classifierPromptVersion":"v3","cvssVector":null,"attackVector":null,"attackComplexity":null,"privilegesRequired":null,"userInteraction":null,"exploitMaturity":null,"epssScore":null,"patchAvailable":null,"disclosureDate":"2026-01-29T13:24:04.000Z","capecIds":null,"crossRefCount":0,"attackSophistication":"advanced","impactType":["confidentiality"],"aiComponentTargeted":"training_data","llmSpecific":false,"classifierConfidence":0.92,"researchCategory":"peer_reviewed","atlasIds":null}}