Threat modeling AI applications
Summary
Threat modeling is a structured process for identifying and preparing for security risks early in system design, but AI systems require adapted approaches because they behave unpredictably in ways traditional software does not. AI systems are probabilistic (producing different outputs from the same input), treat text as executable instructions rather than just data, and can amplify failures across connected tools and workflows, creating new attack surfaces like prompt injection (tricking an AI by hiding instructions in its input) and silent data theft that traditional threat models don't address.
Classification
Related Issues
CVE-2024-37052: Deserialization of untrusted data can occur in versions of the MLflow platform running version 1.1.0 or newer, enabling
CVE-2024-27444: langchain_experimental (aka LangChain Experimental) in LangChain before 0.1.8 allows an attacker to bypass the CVE-2023-
Original source: https://www.microsoft.com/en-us/security/blog/2026/02/26/threat-modeling-ai-applications/
First tracked: February 26, 2026 at 03:00 PM
Classified by LLM (prompt v3) · confidence: 85%