{"data":{"id":"aecd1012-8799-41db-a445-65a7c5e59caf","title":"Privacy-Preserving Federated Learning Scheme With Mitigating Model Poisoning Attacks: Vulnerabilities and Countermeasures","summary":"Federated learning schemes (systems where multiple parties train AI models together while keeping data private) that use two servers for privacy protection were found to leak user data when facing model poisoning attacks (where malicious users deliberately corrupt the learning process). The researchers propose an enhanced framework called PBFL that uses Byzantine-robust aggregation (a method to safely combine data from untrusted sources), normalization checks, similarity measurements, and trapdoor fully homomorphic encryption (a technique for doing calculations on encrypted data without decrypting it) to protect privacy while defending against poisoning attacks.","solution":"The authors propose an enhanced privacy-preserving and Byzantine-robust federated learning (PBFL) framework that addresses the vulnerability. Key components include: a novel Byzantine-tolerant aggregation strategy with normalization judgment, cosine similarity computation, and adaptive user weighting; a dual-scoring trust mechanism and outlier suppression for detecting stealthy attacks; and two privacy-preserving subroutines (secure normalization judgment and secure cosine similarity measurement) that operate over encrypted gradients using a trapdoor fully homomorphic encryption scheme. According to theoretical analyses and experiments, the scheme guarantees security, convergence, and efficiency even in the presence of malicious users and one malicious server.","labels":["security","research"],"sourceUrl":"http://ieeexplore.ieee.org/document/11190009","publishedAt":"2025-10-02T13:18:52.000Z","cveId":null,"cweIds":null,"cvssScore":null,"cvssSeverity":null,"severity":"info","attackType":["model_poisoning","data_extraction"],"issueType":"research","affectedPackages":null,"affectedVendors":[],"affectedVendorsRaw":[],"classifierModel":"claude-haiku-4-5-20251001","classifierPromptVersion":"v3","cvssVector":null,"attackVector":null,"attackComplexity":null,"privilegesRequired":null,"userInteraction":null,"exploitMaturity":null,"epssScore":null,"patchAvailable":null,"disclosureDate":null,"capecIds":null,"crossRefCount":0,"attackSophistication":"advanced","impactType":["confidentiality","integrity"],"aiComponentTargeted":"training_data","llmSpecific":false,"classifierConfidence":0.92,"researchCategory":"peer_reviewed","atlasIds":null}}