AI as tradecraft: How threat actors operationalize AI
Summary
Threat actors are using AI and language models as operational tools to accelerate cyberattacks at every stage, from crafting phishing emails to generating malware code, while human attackers retain control over targeting and deployment decisions. Early experiments with agentic AI (where models make iterative decisions with minimal human input) suggest attackers may develop more adaptive, harder-to-detect tactics in the future. Microsoft reports disrupting thousands of fraudulent accounts and partnering with industry to counter AI-enabled threats through technical protections and responsible AI practices.
Classification
Affected Vendors
Related Issues
Original source: https://www.microsoft.com/en-us/security/blog/2026/03/06/ai-as-tradecraft-how-threat-actors-operationalize-ai/
First tracked: March 6, 2026 at 03:00 PM
Classified by LLM (prompt v3) · confidence: 92%