Practical and Flexible Backdoor Attack Against Deep Learning Models via Shell Code Injection
Summary
Researchers have developed a new backdoor attack, called shell code injection (SCI), that implants malicious logic into deep learning models without poisoning the training data. The attack combines nature-inspired camouflage techniques with trigger verification and code-packaging strategies to induce targeted mispredictions, and it can adapt its attack targets dynamically using large language models (LLMs), making it more flexible and harder to detect.
Original source: http://ieeexplore.ieee.org/document/11382040
First tracked: March 16, 2026 at 04:14 PM
Classified by LLM (prompt v3) · confidence: 92%