aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,657
[LAST_24H]
7
[LAST_7D]
151
Daily BriefingMonday, March 30, 2026
>

Anthropic's Leaked "Mythos" Model Raises Dual-Use Security Concerns: An unreleased Anthropic AI model called Mythos was accidentally exposed through a configuration error, revealing advanced reasoning and coding abilities specifically aimed at cybersecurity. The model's improved capability to find and exploit software vulnerabilities, plus its ability to autonomously fix its own code problems, could enable both more sophisticated cyberattacks and better defenses.

>

Mistral Secures $830M for European AI Data Center: French AI startup Mistral raised $830 million in debt financing to build a Paris-area data center with thousands of Nvidia GPUs (specialized chips used for AI training) to train its large language models, aiming for 200 MW of European computing capacity by 2027.

Latest Intel

page 47/266
VIEW ALL
01

Oracle is building yesterday’s data centers with tomorrow’s debt

industry
Mar 9, 2026

AI chip technology is advancing faster than data centers can be built, creating a financial risk for companies like Oracle that are investing heavily in infrastructure. OpenAI has decided not to expand its partnership with Oracle's Texas data center because it wants access to newer Nvidia chips rather than the older generation (Blackwell processors) that will be ready in a year, highlighting how quickly AI hardware becomes outdated. This mismatch is particularly risky for Oracle, which is funding its $100 billion expansion primarily through debt rather than using cash from existing profitable businesses like its competitors do.

Critical This Week5 issues
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
>

Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where an attacker inserts malicious commands into input that gets executed) in its model serving code when deploying models with `env_manager=LOCAL`. The flaw allows attackers to execute arbitrary commands on deployment systems by inserting malicious content into the `python_env.yaml` file, which MLflow reads and uses in shell commands without validation. (CVE-2025-15379, Critical)

CNBC Technology
02

Employees across OpenAI and Google support Anthropic’s lawsuit against the Pentagon

policy
Mar 9, 2026

Anthropic, an AI company, filed a lawsuit against the Department of Defense after being labeled a supply chain risk (a government designation suggesting a company could threaten critical systems). Nearly 40 employees from competing AI companies OpenAI and Google, including prominent figures, filed a legal support document expressing concerns about this decision and its implications for AI technology.

The Verge (AI)
03

'InstallFix' Attacks Spread Fake Claude Code Sites

security
Mar 9, 2026

Attackers are running a campaign called 'InstallFix' that uses malvertising (ads serving malware) combined with ClickFix tactics (fake warning popups that trick users into taking action) to direct people to fake websites pretending to be Claude, an AI coding assistant. The attack exploits how developers use AI tools and command-line interfaces (text-based programs that run on computers) to execute code.

Dark Reading
04

Anthropic was the Pentagon's choice for AI. Now it's banned and experts are worried

policyindustry
Mar 9, 2026

The U.S. Defense Department banned Anthropic's AI models after a review by Pentagon technology leadership, designating the company a supply chain risk (a classification historically reserved for foreign adversaries) and requiring defense contractors to certify they don't use its technology. The decision surprised many officials who considered Anthropic's models superior and had deployed them in classified military networks, and defense experts worry it sets a troubling precedent while removing a trusted AI vendor that military personnel relied on.

CNBC Technology
05

GHSA-v359-jj2v-j536: vLLM has SSRF Protection Bypass

security
Mar 9, 2026

vLLM has a bypass in its SSRF (server-side request forgery, where an attacker tricks a server into making requests to unintended targets) protection because the validation layer and the HTTP client parse URLs differently. The validation uses urllib3, which treats backslashes as literal characters, but the actual requests use aiohttp with yarl, which interprets backslashes as part of the userinfo section. An attacker can craft a URL like `https://httpbin.org\@evil.com/` that passes validation for httpbin.org but actually connects to evil.com.

GitHub Advisory Database
06

Anthropic sues US government for calling it a risk

policy
Mar 9, 2026

Anthropic, an AI company, sued the US government after being labeled a 'supply chain risk' (a designation meaning a company's tools are considered unsafe for government use) in retaliation for refusing to remove safety restrictions on military use of its AI tools like Claude. The company argues the government's actions violate its free speech rights and are unlawful, claiming it had been negotiating compromises with the Defense Department before the administration publicly criticized the company and directed all agencies to stop using its tools.

BBC Technology
07

Anthropic launches code review tool to check flood of AI-generated code

industry
Mar 9, 2026

Anthropic launched Code Review, an AI tool that automatically checks pull requests (code change submissions for review) to catch bugs and security issues before they enter the codebase. The tool integrates with GitHub, uses multiple AI agents working in parallel to analyze code from different angles, and provides step-by-step explanations of potential problems with color-coded severity levels to help developers prioritize fixes.

Fix: Anthropic's Code Review tool is the solution presented in the source. It integrates with GitHub and automatically analyzes pull requests, leaving comments on code explaining potential issues and suggested fixes. Engineering leads can enable it to run by default for all team members. The tool focuses on logical errors (not style issues), uses color-coded severity labels (red for highest severity, yellow for potential problems, purple for issues tied to preexisting code), and provides a light security analysis. Additional customized checks can be configured based on internal best practices, with deeper security analysis available through Claude Code Security.

TechCrunch
08

OpenAI to buy cybersecurity startup Promptfoo to better safeguard AI agents

industrysecurity
Mar 9, 2026

OpenAI is acquiring Promptfoo, a cybersecurity startup that provides tools to test and secure AI systems, particularly as AI agents (autonomous programs that can take actions) become more connected to real data and systems. Promptfoo's security tools will be integrated into OpenAI's Frontier platform, and OpenAI will continue supporting Promptfoo's open-source project that helps developers test different AI prompts and compare large language models (AI systems trained on massive amounts of text data).

CNBC Technology
09

OpenAI acquires Promptfoo to secure its AI agents

securityindustry
Mar 9, 2026

OpenAI acquired Promptfoo, an AI security startup, to integrate its technology into OpenAI's enterprise platform for protecting AI agents from attacks. Promptfoo develops tools that help companies test security vulnerabilities in LLMs (large language models, the AI systems behind chatbots), addressing growing concerns that autonomous AI agents could be exploited to steal data or manipulate systems.

Fix: According to the source, Promptfoo's technology will be integrated into OpenAI Frontier to perform automated red-teaming (simulated attacks to find weaknesses), evaluate AI workflows for security concerns, and monitor activities for risks and compliance needs. OpenAI also stated it expects to continue building out Promptfoo's open source offering.

TechCrunch (Security)
10

Anthropic is suing the Department of Defense

policysafety
Mar 9, 2026

Anthropic, a major AI company, is suing the US Department of Defense after being labeled a supply-chain risk (a company whose products or services might pose security threats if compromised). The lawsuit claims the Trump administration retaliated against Anthropic for refusing to remove safety restrictions on its AI systems, particularly regarding mass surveillance and fully autonomous weapons (systems that make lethal decisions without human involvement).

The Verge (AI)
Prev1...4546474849...266Next
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026