One Trigger, Multiple Victims: Clean-Label Neighborhood Backdoor Attacks on Graph Neural Networks
Summary
Researchers have discovered a new backdoor attack (a security flaw in which hidden malicious behavior is planted through training data) on Graph Neural Networks, or GNNs (AI models designed to understand interconnected data). The attack attaches a single trigger node (a specially crafted fake data point) to a target node, tricking the GNN into making wrong predictions not only on that node but also on its immediate neighbors. The attack remains stealthy and achieves success rates above 95%, even against existing defenses.
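To see why one trigger node can have multiple victims, note that a GNN's message passing spreads each node's features into its neighbors' representations. The following is a minimal NumPy sketch of that propagation effect only, not the paper's attack or model: it uses plain mean aggregation with no learned weights, and the toy graph, feature values, and node indices are invented for illustration.

```python
import numpy as np

def propagate(adj, feats, layers=2):
    # Row-normalized mean aggregation with self-loops: a simplified
    # stand-in for a 2-layer GNN's message passing (no learned weights).
    a = adj + np.eye(adj.shape[0])
    a = a / a.sum(axis=1, keepdims=True)
    h = feats
    for _ in range(layers):
        h = a @ h
    return h

# Toy path graph 0-1-2-3; node 1 plays the role of the attack target.
adj = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0
feats = np.ones((4, 1))

clean = propagate(adj, feats)

# Attach a single trigger node (index 4) to the target node only,
# carrying an out-of-distribution feature value.
adj_t = np.zeros((5, 5))
adj_t[:4, :4] = adj
adj_t[1, 4] = adj_t[4, 1] = 1.0
feats_t = np.vstack([feats, [[10.0]]])

poisoned = propagate(adj_t, feats_t)

# After two rounds of message passing, the trigger has perturbed the
# target (node 1) and its one-hop neighbors (nodes 0 and 2), while
# node 3, two hops from the target, is untouched.
delta = np.abs(poisoned[:4] - clean).ravel()
print(delta)
```

The point of the sketch is that a single attached node automatically contaminates the target's whole neighborhood through aggregation, which is why the attack needs only one trigger to affect several nodes.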
Original source: http://ieeexplore.ieee.org/document/11457041
First tracked: April 2, 2026 at 08:03 PM
Classified by LLM (prompt v3) · confidence: 92%