{"data":{"id":"7ac88da9-91f7-41d4-847c-9e8a8bf5d23d","title":"Threat modeling AI applications","summary":"Threat modeling is a structured process for identifying and preparing for security risks early in system design, but AI systems require adapted approaches because they behave unpredictably in ways traditional software does not. AI systems are probabilistic (producing different outputs from the same input), treat text as executable instructions rather than just data, and can amplify failures across connected tools and workflows, creating new attack surfaces like prompt injection (tricking an AI by hiding instructions in its input) and silent data theft that traditional threat models don't address.","solution":"N/A -- no mitigation discussed in source.","labels":["security","safety"],"sourceUrl":"https://www.microsoft.com/en-us/security/blog/2026/02/26/threat-modeling-ai-applications/","publishedAt":"2026-02-26T17:04:08.000Z","cveId":null,"cweIds":null,"cvssScore":null,"cvssSeverity":null,"severity":"info","attackType":["prompt_injection","model_poisoning","jailbreak"],"issueType":"news","affectedPackages":null,"affectedVendors":[],"affectedVendorsRaw":[],"classifierModel":"claude-haiku-4-5-20251001","classifierPromptVersion":"v3","cvssVector":null,"attackVector":null,"attackComplexity":null,"privilegesRequired":null,"userInteraction":null,"exploitMaturity":null,"epssScore":null,"patchAvailable":null,"disclosureDate":null,"capecIds":null,"crossRefCount":0,"attackSophistication":"moderate","impactType":["confidentiality","integrity","safety"],"aiComponentTargeted":null,"llmSpecific":false,"classifierConfidence":0.85,"researchCategory":null,"atlasIds":null}}