{"data":{"id":"a9bc1388-c7f7-4cb9-b8e8-84c0f74cd457","title":"When prompts become shells: RCE vulnerabilities in AI agent frameworks","summary":"AI agent frameworks like Semantic Kernel, LangChain, and CrewAI let AI models control tools and plugins (software add-ons that perform actions like running scripts or accessing databases), but researchers discovered that prompt injection (tricking an AI by hiding instructions in its input) can turn into RCE (remote code execution, where an attacker runs commands on a system they don't own). Two critical vulnerabilities in Microsoft's Semantic Kernel (CVE-2026-25592 and CVE-2026-26030) could allow attackers to execute code on a host machine through malicious prompts.","solution":"N/A -- the source states that the two Semantic Kernel vulnerabilities \"have since been fixed\" following responsible disclosure with the maintainers, but it does not provide patch versions, mitigation steps, or other technical remediation details.","labels":["security","research"],"sourceUrl":"https://www.microsoft.com/en-us/security/blog/2026/05/07/prompts-become-shells-rce-vulnerabilities-ai-agent-frameworks/","publishedAt":"2026-05-07T20:22:39.000Z","cveId":null,"cweIds":null,"cvssScore":null,"cvssSeverity":null,"severity":"high","attackType":["prompt_injection","model_poisoning"],"issueType":"news","affectedPackages":null,"affectedVendors":["Microsoft","LangChain"],"affectedVendorsRaw":["Microsoft Semantic Kernel","LangChain","CrewAI","AI agent frameworks"],"classifierModel":"claude-haiku-4-5-20251001","classifierPromptVersion":"v3","cvssVector":null,"attackVector":null,"attackComplexity":null,"privilegesRequired":null,"userInteraction":null,"exploitMaturity":null,"epssScore":null,"patchAvailable":null,"disclosureDate":"2026-05-07T20:22:39.000Z","capecIds":null,"crossRefCount":0,"attackSophistication":"moderate","impactType":["integrity","availability"],"aiComponentTargeted":"agent","llmSpecific":true,"classifierConfidence":0.95,"researchCategory":null,"atlasIds":null}}