Assessing and Improving DNN Robustness Against Adversarial Examples From the Perspective of Fully Connected Layers
Summary
Deep neural networks (machine learning models with many layers that process information) are vulnerable to adversarial examples: inputs slightly modified to fool the model into making wrong predictions. This paper proposes adding a redundant fully connected layer (a network component that connects every input to every output) trained with a special loss function, making the network more robust against such attacks while maintaining accuracy on clean inputs.
Solution / Mitigation
The source describes a defense mechanism, but not as a deployed fix or patch. It is a research proposal for a novel component (a redundant fully connected layer with a cosine similarity-based loss function) that can be added to existing models.
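The paper's exact loss formulation is not reproduced in this summary. As an illustration only, a cosine similarity-based penalty on a redundant layer's output might look like the following pure-Python sketch; the function names and the per-class template formulation are assumptions, not details from the source:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def redundant_fc_cosine_loss(features, class_template):
    # Hypothetical loss: pull the redundant fully connected layer's output
    # toward the direction of the true class's template vector.
    # Loss is 0 when directions match, 1 when they are orthogonal.
    return 1.0 - cosine_similarity(features, class_template)

# Aligned directions give zero loss; orthogonal directions give loss 1.
print(redundant_fc_cosine_loss([1.0, 0.0], [2.0, 0.0]))  # → 0.0
print(redundant_fc_cosine_loss([1.0, 0.0], [0.0, 3.0]))  # → 1.0
```

Because cosine similarity depends only on direction, not magnitude, a loss of this shape constrains the geometry of the feature space rather than the raw activations, which is one plausible way such a layer could resist small adversarial perturbations.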
Classification
Related Issues
Original source: http://ieeexplore.ieee.org/document/11456181
First tracked: April 17, 2026 at 02:03 AM
Classified by LLM (prompt v3) · confidence: 85%