aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,669 · Last 24 hours: 19 · Last 7 days: 162
Daily Briefing: Monday, March 30, 2026
- Anthropic's Leaked "Mythos" Model Raises Cybersecurity Concerns: An accidental configuration leak revealed Anthropic's unreleased Mythos model, which has advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The model's improved capability to find and exploit software vulnerabilities could enable more sophisticated cyberattacks, prompting Anthropic to plan a cautious rollout targeting enterprise security teams first.

- Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where an attacker inserts malicious commands into input that gets executed) in its model serving code when deploying models with `env_manager=LOCAL`. The flaw reads dependency information from `python_env.yaml` and executes it in a shell without validation, allowing arbitrary command execution on deployment systems. (CVE-2025-15379, critical severity)
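The bug class behind the MLflow item can be sketched with a toy reproduction. This is illustrative only, not MLflow's actual code: `dep` stands in for a field read from `python_env.yaml`, and `echo` stands in for the real install command.

```python
import subprocess

# Attacker-controlled dependency string, as if read from python_env.yaml.
dep = "requests==2.31.0; echo INJECTED"

# Vulnerable pattern: string interpolation + shell=True lets the ';'
# end the intended command and run the attacker's payload.
vuln = subprocess.run(f"echo pip install {dep}",
                      shell=True, capture_output=True, text=True)

# Safer pattern: pass argv as a list, so shell metacharacters in `dep`
# are never interpreted -- the whole string is a single argument.
safe = subprocess.run(["echo", "pip", "install", dep],
                      capture_output=True, text=True)

print(vuln.stdout)  # two lines: the payload executed
print(safe.stdout)  # one literal line: "; echo INJECTED" is just text
```

The fix pattern is the same one the advisory implies: never hand attacker-influenced YAML fields to a shell.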

- Multiple High-Severity Vulnerabilities Found in CrewAI: CrewAI has several serious security flaws, including two that enable RCE (remote code execution, where attackers run commands on systems they don't control) when Docker containerization fails and the system falls back to less secure sandbox settings. Additional vulnerabilities allow arbitrary file reading and SSRF (server-side request forgery, tricking a server into making unwanted requests) through improper validation in RAG search tools. (CVE-2026-2287, CVE-2026-2275, CVE-2026-2285, CVE-2026-2286)

- LangChain Path Traversal Adds to AI Pipeline Security Woes: LangChain and LangGraph have critical flaws allowing attackers to steal sensitive data like API keys through improper input handling, including a new path traversal bug (CVE-2026-34070, CVSS 7.5) that lets attackers read arbitrary files. Maintainers have released fixes that need immediate application.

Critical This Week (5 issues)

- CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_… (critical; NVD/CVE Database, Mar 30, 2026)

Latest Intel

01. Google joins Microsoft in telling users Anthropic is still available outside defense projects (policy, industry) · Mar 6, 2026

Google and Microsoft announced they will continue offering Anthropic's Claude AI models to their cloud customers for non-defense work, after the U.S. Defense Department designated Anthropic as a supply chain risk (a company that poses potential security or operational threats to government operations). The announcements came after the Trump administration instructed federal agencies to stop using Anthropic's technology, but the companies determined that non-defense projects are still permitted under this designation.

CNBC Technology
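The path-traversal class called out in the LangChain briefing item above comes down to joining an untrusted path onto a base directory without a containment check. A minimal defensive sketch follows; the base directory and function name are assumptions for illustration, not LangChain's API.

```python
from pathlib import Path

# The only directory we intend to serve files from (illustrative).
BASE = Path("/srv/app/data")

def safe_resolve(user_path: str) -> Path:
    # Normalize ".." segments and symlinks BEFORE the containment
    # check; validating the raw string is easy to bypass.
    candidate = (BASE / user_path).resolve()
    if not candidate.is_relative_to(BASE.resolve()):
        raise ValueError("path escapes the allowed directory")
    return candidate
```

With this check, `safe_resolve("../../etc/passwd")` raises while `safe_resolve("notes/a.txt")` resolves inside `BASE`. (`Path.is_relative_to` requires Python 3.9+.)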
02. Microsoft, Google, Amazon say Anthropic Claude remains available to non-defense customers (policy) · Mar 6, 2026

The U.S. Department of Defense designated Anthropic (maker of Claude AI) as a supply-chain risk after the company refused to provide unrestricted access for military applications like mass surveillance and autonomous weapons. Microsoft, Google, and AWS confirmed that Claude will remain available to non-defense customers through their platforms, and the designation only restricts direct Department of Defense use, not broader commercial applications.

TechCrunch
03. Is the Pentagon allowed to surveil Americans with AI? (policy, security) · Mar 6, 2026

The Pentagon and AI companies are in a dispute over whether existing U.S. law allows the government to use AI to analyze bulk commercial data collected from Americans for surveillance purposes. Legal experts point out that current law has a major gap: public information, commercial data (like location and browsing records), and information accidentally collected during foreign surveillance are not legally considered "surveillance," so the government can use them without warrants or court orders, even as AI makes this surveillance much more powerful than before.

MIT Technology Review
04. Anthropic’s Claude found 22 vulnerabilities in Firefox over two weeks (security, research) · Mar 6, 2026

Anthropic used Claude Opus 4.6 (an advanced AI model) to test Firefox's code and discovered 22 vulnerabilities, including 14 severe ones, over two weeks. Most of these bugs have already been fixed in Firefox 148 released in February, though some fixes will come in a later update. The AI was much better at finding security problems than creating working exploits to demonstrate them.

Fix: Most vulnerabilities have been fixed in Firefox 148 (released February). A few remaining fixes will be addressed in the next release.

TechCrunch (Security)
05. GHSA-j8g8-j7fc-43v6: Flowise has Arbitrary File Upload via MIME Spoofing (security) · Mar 6, 2026

Flowise has a file upload vulnerability where the server only checks the `Content-Type` header (MIME type spoofing, pretending a file is one type when it's actually another) that users provide, instead of verifying what the file actually contains. Because the upload endpoint is whitelisted (allowed without authentication), an attacker can upload malicious files by claiming they're safe types like PDFs, leading to stored attacks or remote code execution (RCE, where attackers run commands on the server).

GitHub Advisory Database
06. GHSA-wvhq-wp8g-c7vq: Flowise has Authorization Bypass via Spoofed x-request-from Header (security) · Mar 6, 2026

Flowise has a critical authorization bypass flaw in its `/api/v1` routes where the middleware trusts any request with the header `x-request-from: internal`, even though this header can be spoofed by any user. This allows a low-privilege authenticated tenant (someone with a valid browser cookie) to call internal administration endpoints, like API key creation and credential management, without proper permission checks, effectively escalating their privileges.

GitHub Advisory Database
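A plain header like `x-request-from: internal` is attacker-settable, so it can never carry trust on its own. One common mitigation, sketched below under assumed names (this is not Flowise's fix), is to have internal callers prove knowledge of a shared secret via an HMAC over the request path:

```python
import hashlib
import hmac

# Shared only with genuinely internal services (assumption for illustration).
SECRET = b"shared-only-with-internal-services"

def sign(path: str) -> str:
    # Internal callers attach this signature instead of a bare header.
    return hmac.new(SECRET, path.encode(), hashlib.sha256).hexdigest()

def is_internal(path: str, provided_sig: str) -> bool:
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(sign(path), provided_sig)
```

A low-privilege tenant can still send the spoofed header, but cannot compute a valid signature without `SECRET`. Simpler still is to strip trust-bearing headers at the edge and authorize by the session's actual role.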
07. The Evolution of AI Compliance Assistance from Reactive Support to Co-Agency (policy, safety) · Mar 6, 2026

A banking group implemented a retrieval-augmented AI-powered compliance assistant (a system where AI pulls in external compliance documents to answer questions) to help with regulatory requirements while maintaining human oversight. The article identifies key challenges with this approach, including authority illusion (over-trusting the AI's answers), unclear responsibility for decisions, loss of human judgment about context, and gaps in understanding how the system works, then proposes a four-phase framework to help organizations move from passive AI assistants toward systems where AI and humans reason together.

AIS eLibrary (Journal of AIS, CAIS, etc.)
08. Anthropic’s Pentagon deal is a cautionary tale for startups chasing federal contracts (policy, industry) · Mar 6, 2026

Anthropic and the Pentagon failed to agree on how much control the military should have over Anthropic's AI models, particularly regarding use in autonomous weapons and mass surveillance, causing a $200 million contract to fall apart and leading the Pentagon to designate Anthropic a supply-chain risk (a category indicating potential security or reliability concerns). The Department of Defense then turned to OpenAI instead, which accepted the contract, though this decision led to a significant surge in ChatGPT uninstalls. The situation raises an important question about balancing national security needs with responsible AI deployment.

TechCrunch
09. Claude’s consumer growth surge continues after Pentagon deal debacle (industry) · Mar 6, 2026

Claude, an AI chatbot made by Anthropic, is gaining users rapidly on mobile devices after the company's leadership refused to let the Pentagon use it for mass surveillance or autonomous weapons. Claude's daily active users on phones reached 11.3 million in early March, up 183% since the start of the year, and the app became the top-ranked app in the U.S. and 15 other countries, with over 1 million new sign-ups per day.

TechCrunch
10. The Guardian view on AI in war: the Iran conflict shows that the paradigm shift has already begun (policy, safety) · Mar 6, 2026

The UN and AI companies are debating who should control how artificial intelligence is used in military contexts, especially after the US military's use of AI in the Iran crisis. AI company Anthropic refused to remove safeguards (safety features built into their AI) that would prevent the US Department of Defense from using its technology for mass surveillance or autonomous lethal weapons (weapons that can select and fire at targets without human control), while OpenAI later agreed to work with the Pentagon despite similar concerns. The article emphasizes that decisions about military AI use raise urgent questions about democratic oversight and international controls, rather than leaving these choices solely to companies or governments.

The Guardian Technology
More critical issues this week:

- CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis… (critical; NVD/CVE Database, Mar 27, 2026)
- Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (critical; CSO Online, Mar 27, 2026)
- CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (critical; CISA Known Exploited Vulnerabilities, Mar 26, 2026)
- CISA: New Langflow flaw actively exploited to hijack AI workflows (critical; BleepingComputer, Mar 26, 2026)