Threat modeling a machine learning system
Summary
This post explains threat modeling for machine learning systems: a process for systematically identifying potential security attacks. The author uses the Microsoft Threat Modeling Tool and STRIDE (a framework that categorizes threats as spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege) to identify vulnerabilities in a machine learning system called 'Husky AI'. The author notes that perturbation attacks, where an attacker crafts queries that trick the model into making wrong predictions, are a particular concern for ML systems.
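To make the perturbation-attack threat concrete, here is a minimal sketch of a fast-gradient-sign-style perturbation against an image classifier. The tiny Keras model, random input, and epsilon value are illustrative assumptions only, not Husky AI's actual model or parameters.

```python
# Sketch: perturbation (adversarial example) attack via the fast gradient sign method.
# The model and input below are placeholders standing in for an image classifier.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),   # placeholder input size
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

x = tf.random.uniform((1, 28, 28))   # stand-in image the attacker can modify
y_true = tf.constant([[1.0]])        # stand-in label ("is a husky")

with tf.GradientTape() as tape:
    tape.watch(x)
    prediction = model(x)
    loss = tf.keras.losses.binary_crossentropy(y_true, prediction)

# Nudge every pixel in the direction that increases the loss, keeping the
# perturbation small so the image still looks unchanged to a human.
epsilon = 0.1
gradient = tape.gradient(loss, x)
x_adversarial = tf.clip_by_value(x + epsilon * tf.sign(gradient), 0.0, 1.0)

print("original prediction:   ", float(model(x)[0, 0]))
print("adversarial prediction:", float(model(x_adversarial)[0, 0]))
```

The point of the sketch is only that small, deliberate input changes can flip the model's prediction, which is why the post treats the model's query interface as an attack surface during threat modeling.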
Classification
Affected Vendors
Related Issues
CVE-2024-37052: Deserialization of untrusted data can occur in versions of the MLflow platform running version 1.1.0 or newer, enabling …
CVE-2025-45150: Insecure permissions in LangChain-ChatGLM-Webui commit ef829 allows attackers to arbitrarily view and download sensitive …
Original source: https://embracethered.com/blog/posts/2020/husky-ai-threat-modeling-machine-learning/
First tracked: February 12, 2026 at 02:20 PM
Classified by LLM (prompt v3) · confidence: 85%