{"data":{"id":"2ef5db1c-c7e3-4d0d-9d29-4ec8eb511c7b","title":"Model Inversion Attack Against Federated Unlearning","summary":"Researchers discovered a new attack called federated unlearning inversion attack (FUIA) that can extract private data from federated unlearning (FU, a process designed to remove a specific person's data influence from shared machine learning models across multiple computers). The attack works by having a malicious server observe the model's parameter changes during the unlearning process and reconstruct the forgotten data, undermining the privacy protection that FU is supposed to provide.","solution":"The source mentions that 'two potential defense strategies that introduce a trade-off between privacy protection and model performance' were explored, but no specific details, names, or implementations of these defense strategies are provided in the text.","labels":["security","privacy"],"sourceUrl":"http://ieeexplore.ieee.org/document/11400570","publishedAt":"2026-02-19T13:16:26.000Z","cveId":null,"cweIds":null,"cvssScore":null,"cvssSeverity":null,"severity":"info","attackType":["data_extraction"],"issueType":"research","affectedPackages":null,"affectedVendors":[],"affectedVendorsRaw":[],"classifierModel":"claude-haiku-4-5-20251001","classifierPromptVersion":"v3","cvssVector":null,"attackVector":null,"attackComplexity":null,"privilegesRequired":null,"userInteraction":null,"exploitMaturity":null,"epssScore":null,"patchAvailable":null,"disclosureDate":"2026-02-19T13:16:26.000Z","capecIds":null,"crossRefCount":0,"attackSophistication":"advanced","impactType":["confidentiality"],"aiComponentTargeted":"training_data","llmSpecific":false,"classifierConfidence":0.92,"researchCategory":"peer_reviewed","atlasIds":null}}