aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch is built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,678
Last 24 hours: 24
Last 7 days: 166
Daily Briefing: Monday, March 30, 2026

- Anthropic's Unreleased Cybersecurity Model Accidentally Exposed: A configuration error leaked details of Anthropic's powerful new AI model called Mythos, designed for cybersecurity use cases with advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The leak raises concerns because the model's improved vulnerability detection could enable more sophisticated cyberattacks, prompting Anthropic to plan a phased rollout to enterprise security teams first.

- Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where attackers insert malicious commands into input that gets executed) in its model serving code when using `env_manager=LOCAL`, allowing attackers to execute arbitrary commands by manipulating dependency information in the `python_env.yaml` file without any safety checks. (CVE-2025-15379, Critical)
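The pattern behind this class of bug is dependency strings from an untrusted file reaching a shell. A minimal sketch of the unsafe pattern and its fix, assuming a pip-style installer (this is illustrative code, not MLflow's; the names `DEP_RE`, `install_unsafe`, and `install_safer` are invented here):

```python
import re
import subprocess

# Rough shape of a valid requirement specifier: name, optional extras,
# optional version constraints. Anything else is rejected outright.
DEP_RE = re.compile(
    r"^[A-Za-z0-9._-]+(\[[A-Za-z0-9,._-]+\])?"
    r"([<>=!~]=?[A-Za-z0-9.*+!_-]+(,[<>=!~]=?[A-Za-z0-9.*+!_-]+)*)?$"
)

def install_unsafe(deps):
    # Vulnerable pattern: with shell=True, a "dependency" like
    # "requests; rm -rf /" is parsed by the shell as two commands.
    subprocess.run("pip install " + " ".join(deps), shell=True)

def install_safer(deps):
    # Validate each specifier, then pass arguments as a list so no
    # shell ever parses them.
    for dep in deps:
        if not DEP_RE.match(dep):
            raise ValueError(f"suspicious dependency specifier: {dep!r}")
    subprocess.run(["pip", "install", *deps], check=True)
```

The list-argument form of `subprocess.run` sidesteps shell metacharacters entirely; the regex is a second, defense-in-depth gate.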

Latest Intel

01

GHSA-jq4x-98m3-ggq6: OpenClaw Canvas Path Traversal Information Disclosure Vulnerability

security
Mar 2, 2026

OpenClaw's canvas tool contains a path traversal vulnerability (a security flaw that allows reading files outside intended directories) in its `a2ui_push` action. An authenticated attacker can supply any filesystem path to the `jsonlPath` parameter, and the gateway reads the file without validation and forwards its contents to connected nodes, potentially exposing sensitive files like credentials or SSH keys.
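The missing step in bugs like this is path canonicalization before the read. A minimal sketch of the check, assuming a Python gateway (illustrative only, not OpenClaw's code; `is_contained`, `read_jsonl`, and the root directory are invented here):

```python
from pathlib import Path

def is_contained(allowed_root: str, requested: str) -> bool:
    root = Path(allowed_root).resolve()
    # Joining an absolute path replaces root entirely, and resolve()
    # collapses "../" segments -- both escape styles land outside root
    # and fail the containment check below.
    target = (root / requested).resolve()
    return target == root or root in target.parents

def read_jsonl(jsonl_path: str, allowed_root: str = "/var/lib/agent/canvas") -> str:
    if not is_contained(allowed_root, jsonl_path):
        raise PermissionError(f"path escapes allowed root: {jsonl_path!r}")
    return (Path(allowed_root) / jsonl_path).resolve().read_text()
```

Comparing resolved paths (rather than string prefixes) is what defeats both `../` sequences and absolute-path inputs.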

Critical This Week (5 issues)

critical
CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_
CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026
- Multiple High-Severity Flaws in AI Agent Frameworks: CrewAI has several vulnerabilities including Docker fallback issues that enable RCE (remote code execution, where attackers run commands on systems they don't control) when containerization fails (CVE-2026-2287, CVE-2026-2275), while OpenClaw suffers from malicious plugin code execution during installation and sandbox bypass flaws that let agents access other agents' workspaces. SakaDev and HAI Build Code Generator can both be tricked through prompt injection (hiding malicious instructions in normal-looking input) to misclassify dangerous terminal commands as safe and execute them automatically (CVE-2026-30306, CVE-2026-30308).
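A common hardening against the misclassification failure mode is to never let the model's safety label be the final gate. A hypothetical sketch of a deterministic allowlist check layered after the classifier (this is not code from any of the affected projects; `SAFE_COMMANDS` and `may_auto_execute` are invented here):

```python
import shlex

# Even if an LLM labels a command "safe", re-check it against a fixed
# table before auto-executing. Prompt injection can flip the model's
# label, but it cannot change this table.
SAFE_COMMANDS = {"ls", "cat", "git", "grep", "pytest"}
FORBIDDEN_TOKENS = (";", "&&", "||", "|", "`", "$(")

def may_auto_execute(command: str) -> bool:
    if any(tok in command for tok in FORBIDDEN_TOKENS):
        return False  # shell chaining/substitution always needs human review
    try:
        argv = shlex.split(command)
    except ValueError:
        return False  # unparseable quoting: refuse rather than guess
    return bool(argv) and argv[0] in SAFE_COMMANDS
```

The design point is that the allowlist is data, not model output, so injected instructions in the prompt have no way to widen it.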

- ChatGPT Data Leakage Vulnerability Patched: OpenAI fixed a vulnerability that allowed attackers to secretly extract sensitive user data including conversation messages and uploaded files by exploiting a hidden DNS-based communication channel (covert data transmission using the Domain Name System) in ChatGPT's Linux runtime, bypassing all safety guardrails designed to prevent unauthorized data sharing.
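For intuition, a DNS covert channel works by encoding data into the hostname being looked up, so the lookup itself carries the payload past egress filters that allow DNS. A minimal sketch of the encoding side only (illustrative, not the actual exploit; the domain name and chunk size are invented here, and no network traffic is generated):

```python
import base64

def encode_exfil_queries(secret: bytes, attacker_domain: str = "attacker.example") -> list:
    # Base32 keeps the payload within the DNS hostname alphabet.
    encoded = base64.b32encode(secret).decode().rstrip("=").lower()
    # DNS labels max out at 63 bytes, so split the payload across queries,
    # prefixing a sequence number so the receiver can reassemble them.
    chunks = [encoded[i:i + 60] for i in range(0, len(encoded), 60)]
    return [f"{i}.{chunk}.{attacker_domain}" for i, chunk in enumerate(chunks)]
```

Defensively, this is why anomalously long or high-entropy subdomain labels in DNS logs are a standard exfiltration indicator.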

GitHub Advisory Database
02

Anthropic upgrades Claude’s memory to attract AI switchers

industry
Mar 2, 2026

Anthropic has updated Claude to make switching from other AI chatbots easier by adding memory features to the free plan and creating tools to import user data from competitors like ChatGPT and Gemini. These updates let users transfer the context and conversation history their previous AI already knows about them, so they don't have to re-teach Claude the same information.

The Verge (AI)
03

GHSA-vmwq-8g8c-jm79: OpenChatBI has a Path Traversal Vulnerability in save_report Tool

security
Mar 2, 2026

OpenChatBI has a path traversal vulnerability (a security flaw where attackers can access files outside intended directories) in its save_report tool because it doesn't properly validate the file_format parameter, allowing attackers to use sequences like '/../' to write files to arbitrary locations and potentially execute malicious code.
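The usual fix for this shape of bug is to treat `file_format` as a closed enum rather than splicing it into a filesystem path. A sketch under that assumption (illustrative, not OpenChatBI's code; `ALLOWED_FORMATS` and `report_path` are invented names):

```python
# Only values from this closed set ever reach the path template, so
# traversal sequences in file_format are rejected before any file I/O.
ALLOWED_FORMATS = {"csv", "xlsx", "html", "pdf"}

def report_path(report_dir: str, name: str, file_format: str) -> str:
    if file_format not in ALLOWED_FORMATS:
        raise ValueError(f"unsupported format: {file_format!r}")
    # Reject separators and dotfiles in the report name as well, since
    # either component could otherwise smuggle in "../" segments.
    if "/" in name or "\\" in name or name.startswith("."):
        raise ValueError(f"invalid report name: {name!r}")
    return f"{report_dir}/{name}.{file_format}"
```

Allowlisting beats sanitizing here: there is no encoding trick to strip when only four known-good strings are accepted.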

Fix: Upgrade to version 0.2.2 or later, which includes the fix from PR #12.

GitHub Advisory Database
04

CVE-2026-2256: A command injection vulnerability in ModelScope's ms-agent versions v1.6.0rc1 and earlier exists, allowing an attacker t

security
Mar 2, 2026

CVE-2026-2256 is a command injection vulnerability (a flaw where an attacker tricks a program into running unwanted operating system commands) in ModelScope's ms-agent software versions v1.6.0rc1 and earlier. An attacker can exploit this by sending specially crafted prompts to execute arbitrary commands on the affected system.

NVD/CVE Database
05

Anthropic’s AI model Claude gets popularity boost after US military feud

industry
Mar 2, 2026

Claude, an AI model made by Anthropic, became more popular after the Pentagon rejected it over ethics concerns and chose OpenAI's ChatGPT for classified military networks instead. Claude reached the top spot on Apple's US app store chart shortly after this decision, showing that public interest in the model increased following the feud.

The Guardian Technology
06

Apple might use Google servers to store data for its upgraded AI Siri

industry
Mar 2, 2026

Apple is exploring using Google's servers to store data for an upgraded version of Siri that runs on Google's Gemini AI models (a large language model created by Google). This represents a deeper partnership between Apple and Google than previously announced, as Apple works to catch up in AI capabilities while maintaining its privacy standards.

The Verge (AI)
07

Users are ditching ChatGPT for Claude. Here’s how to make the switch

industry
Mar 2, 2026

Many users are switching from ChatGPT to Claude, an AI assistant made by Anthropic, following controversies over OpenAI's partnership with the Pentagon for potential military use. Claude has surged in popularity, with the company reporting record sign-ups and a 60% jump in free users since January. The article provides a guide for switching, including how to export your ChatGPT data, import it into Claude, and permanently delete your ChatGPT account.

Fix: To transfer your data from ChatGPT to Claude:
1. In ChatGPT Settings, go to Personalization > Memory > Manage to review and copy your stored preferences, or go to Settings > Data Controls > Export Data to download your chat history as text or JSON files.
2. In Claude, go to Settings > Capabilities and turn on Memory.
3. Start a new conversation and paste your information using a prompt like "Here's some important context I'd like you to remember. Update your memory about me with this." For exported chat files, ask Claude to "Review this and summarize my key preferences."
4. To delete your ChatGPT account completely: go to Settings > Personalization > Memory and delete stored memory, type "Delete all my memory and personalized data" in a final chat, then navigate to account management settings to delete your account entirely.

TechCrunch
08

OpenAI’s “compromise” with the Pentagon is what Anthropic feared

policy, security
Mar 2, 2026

OpenAI announced a deal allowing the US military to use its AI technology in classified settings, claiming it includes protections against autonomous weapons and mass surveillance, unlike Anthropic's rejected negotiations. However, legal experts note that OpenAI's agreement relies on the assumption that the government will follow existing laws and policies, rather than giving the Pentagon explicit prohibitions like Anthropic had proposed, meaning the military can still use the technology for any lawful purpose.

MIT Technology Review
09

Tech workers urge DOD, Congress to withdraw Anthropic label as a supply-chain risk

policy, industry
Mar 2, 2026

The Department of Defense has designated Anthropic (an AI company) as a "supply-chain risk" after the company refused to give the military unrestricted access to its AI systems, specifically declining to allow mass surveillance of Americans or autonomous weapons that can fire without human oversight. Hundreds of tech workers from major firms have signed an open letter opposing this designation, arguing it punishes the company for declining a contract and sets a dangerous precedent that could force other companies to accept government demands or face retaliation. The designation is not yet final, as the government must complete a risk assessment and notify Congress before it takes effect, and Anthropic says it will challenge the designation in court.

TechCrunch
10

New Chrome Vulnerability Let Malicious Extensions Escalate Privileges via Gemini Panel

security
Mar 2, 2026

Google Chrome had a security flaw (CVE-2026-0628, a CVSS score of 8.8, which measures vulnerability severity from 0-10) that allowed malicious browser extensions to gain unauthorized access to the Gemini Live panel, a built-in AI assistant, and perform privileged actions like accessing cameras, microphones, and local files. The vulnerability was caused by insufficient policy enforcement in the WebView tag (a component that displays web content), which let attackers inject malicious code into pages that should have been protected.

Fix: Google patched the vulnerability in Chrome version 143.0.7499.192/.193 for Windows/Mac and 143.0.7499.192 for Linux in early January 2026.

The Hacker News
critical
CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis
CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026

critical
Attackers exploit critical Langflow RCE within hours as CISA sounds alarm
CSO Online · Mar 27, 2026

critical
CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability
CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026

critical
CISA: New Langflow flaw actively exploited to hijack AI workflows
BleepingComputer · Mar 26, 2026