ReSLC: Defending backdoor attacks on intelligent vulnerability detection via redundant semantic LLM compression
Summary
This research paper describes a method called ReSLC that protects AI systems used for software vulnerability detection against backdoor attacks, in which attackers covertly implant malicious behavior during the AI's training process. The approach uses redundant semantic LLM compression (a technique that removes unnecessary information from large language models while preserving their core abilities) to make these hidden attacks harder to carry out. The work was published in July 2026 in the Journal of Information Security and Applications.
Original source: https://www.sciencedirect.com/science/article/pii/S2214212626000608?dgcid=rss_sd_all
First tracked: April 8, 2026 at 02:01 PM
Classified by LLM (prompt v3) · confidence: 85%