aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,678
Last 24 hours: 22
Last 7 days: 165
Daily Briefing · Monday, March 30, 2026

Anthropic's Unreleased Cybersecurity Model Accidentally Exposed: A configuration error leaked details of Anthropic's powerful new AI model called Mythos, designed for cybersecurity use cases with advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The leak raises concerns because the model's improved vulnerability detection could enable more sophisticated cyberattacks, prompting Anthropic to plan a phased rollout to enterprise security teams first.


Critical Command Injection in MLflow Model Deployment: MLflow contains a command injection vulnerability (where attacker-supplied input ends up executed as a shell command) in its model serving code when `env_manager=LOCAL` is used. Attackers can execute arbitrary commands by manipulating dependency entries in the `python_env.yaml` file, which are consumed without any sanitization. (CVE-2025-15379, Critical)
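A minimal sketch of the bug class involved, for readers unfamiliar with command injection. This is illustrative only, not MLflow's actual code: the function names are hypothetical, and the point is the difference between splicing an attacker-controlled dependency string into a shell command line versus passing it as a single inert argument.

```python
# Illustrative sketch only, NOT MLflow's actual code. It shows the general
# bug class described above: a dependency string read from an
# attacker-controlled python_env.yaml is spliced into a shell command line.

def build_install_cmd_unsafe(dep: str) -> str:
    # Vulnerable pattern: the string is later run through a shell, so a
    # value like "numpy==1.26.0; touch /tmp/pwned" smuggles in a second
    # command after the semicolon.
    return f"pip install {dep}"

def build_install_cmd_safe(dep: str) -> list[str]:
    # Safer pattern: pass argv as a list (no shell). The whole dependency
    # string becomes one inert argument; "--" stops option parsing.
    return ["pip", "install", "--", dep]

malicious = "numpy==1.26.0; touch /tmp/pwned"
unsafe = build_install_cmd_unsafe(malicious)  # a shell would see two commands
safe = build_install_cmd_safe(malicious)      # one argument, no shell involved
```

The safe variant still needs validation of the dependency specifier itself, but it removes the shell from the execution path entirely.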

Latest Intel

01

Nvidia’s spending $4 billion on photonics to stay ahead of the curve in AI

industry
Mar 2, 2026

Nvidia is investing $4 billion total ($2 billion each) into two companies, Lumentum and Coherent, that develop photonics technology (devices like optical transceivers and lasers that move data using light). These technologies could make AI data centers more energy-efficient and allow faster data transfer between components, building on Nvidia's previous acquisition of Mellanox to strengthen its networking capabilities.

Critical This Week (5 issues)
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026

Multiple High-Severity Flaws in AI Agent Frameworks: CrewAI has several vulnerabilities including Docker fallback issues that enable RCE (remote code execution, where attackers run commands on systems they don't control) when containerization fails (CVE-2026-2287, CVE-2026-2275), while OpenClaw suffers from malicious plugin code execution during installation and sandbox bypass flaws that let agents access other agents' workspaces. SakaDev and HAI Build Code Generator can both be tricked through prompt injection (hiding malicious instructions in normal-looking input) to misclassify dangerous terminal commands as safe and execute them automatically (CVE-2026-30306, CVE-2026-30308).


ChatGPT Data Leakage Vulnerability Patched: OpenAI fixed a vulnerability that allowed attackers to secretly extract sensitive user data including conversation messages and uploaded files by exploiting a hidden DNS-based communication channel (covert data transmission using the Domain Name System) in ChatGPT's Linux runtime, bypassing all safety guardrails designed to prevent unauthorized data sharing.
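For context on how a DNS-based covert channel leaks data, here is a hedged sketch of the general technique described above. The names are hypothetical and this is not OpenAI's code: the idea is that data encoded into subdomain labels leaks to whoever operates the authoritative nameserver for the attacker's domain, since resolving each name sends the label there.

```python
import base64

# Hedged sketch of a DNS covert channel (generic technique, hypothetical
# names). Bytes are encoded into subdomain labels; a DNS lookup for each
# name delivers the payload to the attacker's nameserver.

def encode_as_dns_names(secret: bytes, attacker_domain: str) -> list[str]:
    # Base32 keeps the payload within DNS's case-insensitive charset.
    payload = base64.b32encode(secret).decode().rstrip("=").lower()
    # A single DNS label may be at most 63 characters, so chunk the payload.
    labels = [payload[i:i + 63] for i in range(0, len(payload), 63)]
    return [f"{label}.{attacker_domain}" for label in labels]

names = encode_as_dns_names(b"conversation excerpt ...", "exfil.example")
```

This is why guardrails that only police HTTP egress can be bypassed: DNS resolution is often allowed implicitly, so defenses need to monitor and restrict name resolution as well.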

The Verge (AI)
02

Anthropic's Claude sees 'elevated errors' as it tops Apple's free apps after Pentagon clash

industry
Mar 2, 2026

Anthropic's Claude AI experienced elevated errors and degraded performance on Monday, particularly affecting Claude Opus 4.6 (the latest version of their AI model). The company identified the issues and worked on fixes, with some problems on claude.ai and related services being resolved.

Fix: per Anthropic's status updates, a fix for the Claude Opus 4.6 issue was in the works as of 10:49 a.m. ET, and the issues on claude.ai, the console, and Claude Code were reported as resolved as of 10:47 a.m. ET.

CNBC Technology
03

Vulnerability Allowed Hijacking Chrome’s Gemini Live AI Assistant

security
Mar 2, 2026

A security flaw in Chrome's Gemini Live feature (Google's AI assistant) could allow malicious browser extensions (add-ons that modify Chrome's behavior) to take control of the AI tool, spy on users, and steal their files. The vulnerability created a serious risk for anyone using this feature with untrusted extensions installed.

SecurityWeek
04

How Deepfakes and Injection Attacks Are Breaking Identity Verification

security · safety
Mar 2, 2026

Deepfakes and injection attacks (where attackers substitute fake video or audio into a system's input stream) are increasingly being used to bypass identity verification systems in critical moments like bank account opening, remote hiring, and account recovery. Traditional deepfake detection alone is insufficient because attackers can either create high-quality synthetic media or completely bypass the camera sensor using injection attacks, so organizations need to validate entire identity sessions end-to-end, including device integrity and user behavior signals, rather than just checking if a face looks real.

BleepingComputer
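The "validate the whole session, not just the face" idea above can be sketched as a scoring rule. The signal names and the aggregation are illustrative assumptions, not any vendor's product: the point is that a weakest-link rule prevents a perfect deepfake score from compensating for a compromised capture path.

```python
from dataclasses import dataclass

# Hedged sketch of end-to-end identity-session validation. All names and
# the aggregation rule are illustrative, not a real product's API.

@dataclass
class SessionSignals:
    face_liveness: float      # 0..1 from a liveness / deepfake detector
    capture_integrity: float  # 0..1: is the video feed really a camera?
    device_integrity: float   # 0..1: OS / app attestation
    behavior: float           # 0..1: interaction plausibility

def session_score(s: SessionSignals) -> float:
    # Weakest-link aggregation: a near-perfect face score cannot mask a
    # compromised capture path (i.e., an injection attack).
    return min(s.face_liveness, s.capture_integrity,
               s.device_integrity, s.behavior)
```

Under this rule, an injection attack that feeds flawless synthetic video still fails the session because the capture-integrity signal stays low.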
05

Nvidia to invest $4 billion in two photonics companies

industry
Mar 2, 2026

Nvidia is investing $4 billion total ($2 billion each) in two U.S. companies, Lumentum and Coherent, that develop photonics technologies (systems using light for sensing and data transfer). These investments include multi-billion dollar purchase commitments and aim to support Nvidia's AI infrastructure expansion by securing advanced optical and laser components needed for large-scale AI data centers.

CNBC Technology
06

OpenClaw Vulnerability Allowed Websites to Hijack AI Agents

security
Mar 2, 2026

A vulnerability in OpenClaw allowed malicious websites to connect to the OpenClaw gateway (a system that manages AI agents) on localhost (a computer's own network), guess passwords through brute force attacks (trying many password combinations rapidly), and take control of AI agents. This exposed AI systems to unauthorized hijacking from untrusted websites.

SecurityWeek
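A brute-forceable localhost gateway like the one described above is typically mitigated by a long random token, constant-time comparison, and a failure lockout. The sketch below is a hypothetical mitigation, not OpenClaw's actual code or API.

```python
import secrets
import time

# Hedged mitigation sketch (hypothetical API, not OpenClaw's code): a
# localhost agent gateway can blunt password guessing with a long random
# token, constant-time comparison, and a per-client failure lockout.

class GatewayAuth:
    def __init__(self, token: str, max_failures: int = 5,
                 lockout_s: float = 60.0):
        self._token = token
        self._failures: dict[str, int] = {}
        self._locked_until: dict[str, float] = {}
        self._max_failures = max_failures
        self._lockout_s = lockout_s

    def check(self, client: str, presented: str) -> bool:
        if time.monotonic() < self._locked_until.get(client, 0.0):
            return False  # still locked out; don't even compare
        if secrets.compare_digest(presented, self._token):  # constant-time
            self._failures.pop(client, None)
            return True
        self._failures[client] = self._failures.get(client, 0) + 1
        if self._failures[client] >= self._max_failures:
            self._locked_until[client] = time.monotonic() + self._lockout_s
        return False

# A 32-byte random token (~256 bits of entropy) is infeasible to guess even
# without the lockout; the lockout also slows attacks on weaker tokens.
auth = GatewayAuth(secrets.token_urlsafe(32))
```

Binding the gateway to localhost alone is not enough, since malicious web pages can reach localhost services from the browser; requiring an unguessable token on every request closes that path.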
07

How OpenAI caved to the Pentagon on AI surveillance

policy · safety
Mar 2, 2026

OpenAI negotiated with the Pentagon to use its AI systems for military purposes, while Anthropic refused and was blacklisted for rejecting two uses: domestic mass surveillance (monitoring Americans without individual consent) and lethal autonomous weapons (AI systems that can kill targets without a human making the final decision). OpenAI's CEO claimed to have found a way to maintain safety limits in the company's military contract, though the article does not detail what those specific terms are.

The Verge (AI)
08

Anthropic’s Claude reports widespread outage

security
Mar 2, 2026

Anthropic's Claude service experienced a widespread outage on Monday morning, affecting Claude.ai and Claude Code (though the Claude API remained functional), with most users encountering errors during login. The company identified the issue was related to login and logout systems and stated it was implementing a fix, though no root cause or technical details were disclosed.

TechCrunch
09

OwnerHunter: Multilingual Website Owner Identification Powered by Large Language Model

research
Mar 2, 2026

OwnerHunter is a system that uses large language models (AI trained on vast amounts of text) to identify who owns a website by analyzing webpage content across multiple languages. It improves on older methods that struggled when webpages listed many names or were written in non-English languages, using strategies like checking multiple sources on a page and verifying results to accurately determine the true owner.

IEEE Xplore (Security & AI Journals)
10

Iran, Berkshire Hathaway earnings, OpenAI's Pentagon deal and more in Morning Squawk

industry · policy
Mar 2, 2026

OpenAI secured a deal with the U.S. Department of Defense after the Trump administration forced federal agencies to stop using Anthropic's AI technology, citing disagreements over how the Pentagon wanted to use the artificial intelligence startup's systems. OpenAI's CEO Sam Altman stated that his company shares the same ethical boundaries (called guardrails, which are safety limits built into AI systems) as Anthropic regarding how the technology should be used.

CNBC Technology
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputer · Mar 26, 2026