Machine Learning Attack Series: Backdooring Keras Models and How to Detect It
Summary
This post examines how attackers can embed hidden malicious code in machine learning models (a technique called backdooring) through supply chain attacks, specifically targeting Keras, a popular framework for building AI systems. The post demonstrates such an attack and then explores tools that can detect when a model has been compromised in this way.
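One concrete vector in this class of attack is the Keras Lambda layer, which can carry serialized Python code that runs when the model is loaded. As a minimal detection sketch (not the tooling from the source post; the config shape below assumes Keras's standard JSON serialization of a Sequential model), one can scan a model's architecture config for layer types capable of embedding code:

```python
import json

def find_suspicious_layers(model_config_json: str) -> list:
    """Return names of layers whose type can embed arbitrary code.

    Assumption: the input follows Keras's standard JSON model config,
    i.e. {"class_name": ..., "config": {"layers": [...]}}.
    """
    # Lambda layers deserialize stored Python code on load, making them
    # a common carrier for model backdoors.
    suspicious_types = {"Lambda"}
    config = json.loads(model_config_json)
    layers = config.get("config", {}).get("layers", [])
    return [
        layer.get("name", "<unnamed>")
        for layer in layers
        if layer.get("class_name") in suspicious_types
    ]

# Hypothetical minimal config mimicking a backdoored model
example = json.dumps({
    "class_name": "Sequential",
    "config": {"layers": [
        {"class_name": "Dense", "name": "dense_1"},
        {"class_name": "Lambda", "name": "exfil_hook"},
    ]},
})
print(find_suspicious_layers(example))  # → ['exfil_hook']
```

A real scanner would also inspect the serialized bytecode inside any flagged layer rather than relying on layer type alone, but flagging Lambda layers is a cheap first-pass check.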
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2024/machine-learning-attack-series-keras-backdoor-model/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 75%