Adversarial Training for Graph Neural Networks via Graph Subspace Energy Optimization
Peer-Reviewed Research
Source: IEEE Xplore (Security & AI Journals) · February 19, 2026
Summary
Graph neural networks (GNNs, AI models that learn from data organized as interconnected nodes and edges) are vulnerable to adversarial topology perturbations: attackers can fool them by making small changes to the graph structure. This paper proposes AT-GSE, a new adversarial training method (a technique that hardens AI models by training them on intentionally perturbed inputs) that uses graph subspace energy, a spectral measure of how stable a graph is, to improve GNN robustness against these attacks.
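To make the idea of a topology perturbation concrete, the sketch below uses a simple Laplacian-based spectral energy as a stand-in stability measure (this is an illustrative proxy, not the paper's graph subspace energy definition) and greedily finds the single edge flip that changes that energy the most, the kind of structural change an attacker might exploit and adversarial training would defend against. All names here (`laplacian_energy`, `worst_edge_flip`) are hypothetical.

```python
import numpy as np

def laplacian_energy(adj):
    # Sum of squared Laplacian eigenvalues: one simple spectral
    # "energy" proxy for graph stability (illustrative only; the
    # paper's graph subspace energy is defined differently).
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    eigs = np.linalg.eigvalsh(lap)
    return float(np.sum(eigs ** 2))

def worst_edge_flip(adj):
    # Greedy topology perturbation: toggle each possible edge and
    # keep the flip that shifts the spectral energy the most,
    # mimicking an attacker's small structural change.
    n = adj.shape[0]
    base = laplacian_energy(adj)
    best, best_delta = None, -1.0
    for i in range(n):
        for j in range(i + 1, n):
            cand = adj.copy()
            cand[i, j] = cand[j, i] = 1 - cand[i, j]
            delta = abs(laplacian_energy(cand) - base)
            if delta > best_delta:
                best, best_delta = (i, j), delta
    return best, best_delta

# Toy 4-node path graph: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
edge, delta = worst_edge_flip(A)
print(edge, round(delta, 3))
```

In an adversarial training loop, perturbations like this would be generated during training and the GNN updated on the perturbed graphs, so the learned model stays accurate under small structural attacks.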
Classification
Attack Sophistication: Advanced
Impact (CIA+S): Integrity
AI Component Targeted: Model
Original source: http://ieeexplore.ieee.org/document/11400575
First tracked: March 16, 2026 at 04:14 PM
Classified by LLM (prompt v3) · confidence: 92%