aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,687
[LAST_24H]
28
[LAST_7D]
167
Daily BriefingTuesday, March 31, 2026
>

Critical Vulnerability in OpenAI Codex Allowed GitHub Token Compromise: Researchers discovered a critical vulnerability in OpenAI Codex (an AI system that generates code) that could have allowed attackers to steal GitHub tokens (secret credentials used to access GitHub accounts), potentially granting unauthorized access to code repositories and projects.

>

Google Cloud Vertex AI 'Double Agents' Vulnerability Exposed: Researchers found that AI agents on Google Cloud Platform's Vertex AI could be weaponized to secretly compromise systems due to excessive default permissions granted to service agents (special accounts that allow cloud services to access resources), enabling attackers to steal data and gain unauthorized infrastructure control. Google responded by revising their documentation to better explain resource and account usage.

Latest Intel

page 74/269
VIEW ALL
01

Sam Altman backs rival Anthropic in fight with Pentagon

policyindustry
Critical This Week5 issues
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
>

EU AI Act Enforcement Begins August 2026: The EU AI Act requires providers of general-purpose AI models (GPAI, meaning large AI systems that can be adapted for many uses) to follow specific development and documentation rules starting August 2, 2025, with the European Commission beginning enforcement and potential fines one year later on August 2, 2026.

>

Prompt Injection Bypasses Chatbot Safety in 1millionbot Millie: A prompt injection vulnerability (a technique where attackers hide malicious instructions in their input to trick an AI) in the 1millionbot Millie chatbot allows users to bypass safety restrictions using Boolean logic tricks, potentially enabling extraction of sensitive information or access to blocked features (CVE-2026-4399, high severity).

Feb 27, 2026

OpenAI CEO Sam Altman publicly supported rival company Anthropic in its dispute with the US Department of Defense over AI tool usage, stating that OpenAI shares Anthropic's refusal to allow certain uses like domestic surveillance and autonomous offensive weapons. The Pentagon has threatened Anthropic with retaliation, including invoking the Defense Production Act (a law letting the government use a company's products as it sees fit) or labeling the company a supply chain risk, but Anthropic maintains its position on restricting potentially harmful applications.

BBC Technology
02

Sam Altman aims to 'help de-escalate' tensions with Pentagon as OpenAI employees voice support for Anthropic

policyindustry
Feb 27, 2026

OpenAI CEO Sam Altman sent an internal memo to staff expressing support for rival company Anthropic in a dispute with the Pentagon over AI model usage, stating that both companies oppose using AI for mass surveillance or fully autonomous weapons. About 70 OpenAI employees signed an open letter supporting Anthropic, which has a deadline to decide whether to allow the Department of Defense unrestricted access to its AI models. Altman indicated OpenAI is negotiating with the Pentagon to deploy its own models in classified environments while maintaining ethical boundaries around domestic surveillance and autonomous offensive weapons.

Fix: Altman proposed that OpenAI would ask for a contract with the Pentagon that covers "any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons." He also stated the company would "build technical safeguards and deploy personnel to ensure things are working correctly" in classified environments.

CNBC Technology
03

Nvidia's stock wrapping up tough week as Wall Street focuses more on competition than growth

industry
Feb 27, 2026

Despite strong earnings and growth forecasts, Nvidia's stock fell 6% this week as investors worry that spending by tech companies on AI infrastructure will peak soon and competition is increasing. Major AI companies like OpenAI and Meta are now diversifying away from Nvidia's GPUs (graphics processing units, specialized chips for AI computations) by adopting alternative chips from companies like Amazon, Google, and Advanced Micro Devices.

CNBC Technology
04

Musk bashes OpenAI in deposition, saying ‘nobody committed suicide because of Grok’

safetypolicy
Feb 27, 2026

In a deposition for his lawsuit against OpenAI, Elon Musk claimed that his company xAI prioritizes AI safety better than OpenAI, and that ChatGPT has caused mental health harms including suicides while Grok has not. Musk's lawsuit challenges OpenAI's transition from a nonprofit to a for-profit company, arguing that commercial interests compromise safety priorities, though xAI itself has faced safety issues including the generation of non-consensual intimate images by Grok.

TechCrunch
05

Anthropic vs. the Pentagon: What’s actually at stake?

policysafety
Feb 27, 2026

Anthropic and the U.S. Department of Defense are in conflict over how the military can use Anthropic's AI models. Anthropic refuses to allow its AI for mass surveillance of Americans or fully autonomous weapons (systems that select and fire at targets without human decision-makers), while the Pentagon argues it should be permitted to use the technology for any lawful purpose. The core dispute is whether the companies that build powerful AI systems or the government that deploys them should control how those systems are used.

TechCrunch
06

ChatGPT reaches 900M weekly active users

industry
Feb 27, 2026

ChatGPT has reached 900 million weekly active users and 50 million paying subscribers, with OpenAI reporting that subscriber growth accelerated significantly in early 2026. The company announced a $110 billion funding round, one of the largest private funding rounds ever, with major investments from Amazon, Nvidia, and SoftBank at a $730 billion valuation.

TechCrunch
07

Free Claude Max for (large project) open source maintainers

industry
Feb 27, 2026

Anthropic is offering free access to Claude Max (their $200/month AI assistant plan) for six months to open source maintainers who meet specific criteria: primary maintainers of public repositories with 5,000+ GitHub stars or 1 million+ monthly NPM downloads, with recent commits or reviews in the last three months. The program accepts up to 10,000 contributors, and maintainers who don't quite meet the stated criteria can still apply and explain their importance to the ecosystem.

Simon Willison's Weblog
08

AI vs. the Pentagon: killer robots, mass surveillance, and red lines

policysafety
Feb 27, 2026

Anthropic is refusing to accept new Pentagon contract terms that would remove safety restrictions (guardrails, the built-in limits on what an AI model will do) from its AI models, which would allow uses like mass surveillance of Americans and fully autonomous lethal weapons (weapons that can select and fire at targets without human control). Despite pressure from the Pentagon, including threats to label Anthropic a supply chain risk (a designation suggesting it poses a national security threat), CEO Dario Amodei says the company will not compromise on these ethical boundaries, while competitors OpenAI and xAI have reportedly agreed to the terms.

The Verge (AI)
09

Perplexity’s new Computer is another bet that users need many AI models

industry
Feb 27, 2026

Perplexity has launched Computer, an agentic tool (software that can independently execute complex tasks) that combines 19 different AI models to handle workflows like data collection, analysis, and report creation. The tool runs in the cloud and is available only to subscribers of Perplexity Max (the $200/month tier), though a planned demo was canceled hours before a press event due to flaws discovered in the product.

TechCrunch
10

Employees at Google and OpenAI support Anthropic’s Pentagon stand in open letter

policy
Feb 27, 2026

Anthropic, an AI company, is refusing the Pentagon's demands for unrestricted access to its AI technology, specifically opposing its use for domestic mass surveillance (tracking citizens without limits) and fully autonomous weapons (weapons that make kill decisions without human control). Over 300 Google employees and 60 OpenAI employees signed an open letter supporting Anthropic's stance, and leaders at both companies have informally expressed sympathy for Anthropic's position, though the Pentagon has threatened to declare Anthropic a security risk or use the Defense Production Act (a law allowing the government to force companies to produce needed goods) if it doesn't comply.

TechCrunch
Prev1...7273747576...269Next
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026