aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,687
[LAST_24H]
27
[LAST_7D]
167
Daily BriefingTuesday, March 31, 2026
>

Critical Vulnerability in OpenAI Codex Allowed GitHub Token Compromise: Researchers discovered a critical vulnerability in OpenAI Codex (an AI system that generates code) that could have allowed attackers to steal GitHub tokens (secret credentials used to access GitHub accounts), potentially granting unauthorized access to code repositories and projects.

>

Google Cloud Vertex AI 'Double Agents' Vulnerability Exposed: Researchers found that AI agents on Google Cloud Platform's Vertex AI could be weaponized to secretly compromise systems due to excessive default permissions granted to service agents (special accounts that allow cloud services to access resources), enabling attackers to steal data and gain unauthorized infrastructure control. Google responded by revising their documentation to better explain resource and account usage.

Latest Intel

page 75/269
VIEW ALL
01

We don’t have to have unsupervised killer robots

policysafety
Critical This Week5 issues
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
>

EU AI Act Enforcement Begins August 2026: The EU AI Act requires providers of general-purpose AI models (GPAI, meaning large AI systems that can be adapted for many uses) to follow specific development and documentation rules starting August 2, 2025, with the European Commission beginning enforcement and potential fines one year later on August 2, 2026.

>

Prompt Injection Bypasses Chatbot Safety in 1millionbot Millie: A prompt injection vulnerability (a technique where attackers hide malicious instructions in their input to trick an AI) in the 1millionbot Millie chatbot allows users to bypass safety restrictions using Boolean logic tricks, potentially enabling extraction of sensitive information or access to blocked features (CVE-2026-4399, high severity).

Feb 27, 2026

The Pentagon is pressuring Anthropic (an AI company) to remove safety restrictions on its technology or face being labeled a 'supply chain risk' that could cost it billions in contracts. The pressure includes demands for military access to the AI for surveillance and autonomous weapons systems, raising concerns among tech workers about how their work might be used.

The Verge (AI)
02

In Defense-Anthropic clash, AI is real-time testing the balance of power in future of warfare

policyindustry
Feb 27, 2026

The U.S. Department of Defense is in a standoff with Anthropic, an AI company, over whether the company will remove safeguards from its AI models to allow military uses like mass domestic surveillance and fully autonomous weapons (systems that can make combat decisions without human control). This conflict highlights a major shift in power: private companies now control cutting-edge AI technology rather than governments, forcing the Pentagon to negotiate with industry over how AI will be deployed in national security and warfare.

CNBC Technology
03

OpenAI announces $110 billion funding round with backing from Amazon, Nvidia, SoftBank

industry
Feb 27, 2026

OpenAI announced a $110 billion funding round led by Amazon ($50 billion), Nvidia ($30 billion), and SoftBank ($30 billion), raising the company's valuation to $730 billion. Beyond the investment, Amazon committed to an expanded $100 billion partnership over eight years to use AWS (Amazon Web Services, Amazon's cloud computing platform) as OpenAI's exclusive cloud provider and to develop customized AI models for Amazon's applications.

CNBC Technology
04

In Other News: ATT&CK Advisory Council, Russian Cyberattacks Aid Missile Strikes, Predator Bypasses iOS Indicators

securityindustry
Feb 27, 2026

This article briefly mentions several cyber security developments, including OpenAI taking action against malicious uses of AI, a hacker group claiming to have breached Odido (a telecommunications company), and a spyware tool called Predator that can bypass iOS security indicators (the visual signals that show when an app is accessing your device's features).

SecurityWeek
05

OpenAI snags $110 billion in investments from Amazon, Nvidia, and Softbank

industry
Feb 27, 2026

OpenAI has secured $110 billion in new funding from Amazon ($50 billion), Nvidia ($30 billion), and SoftBank ($30 billion), bringing the company's valuation to $730 billion. The investment includes plans for custom AI models and reflects confidence in OpenAI's ChatGPT platform, which has over 900 million weekly active users and 50 million consumer subscribers.

The Verge (AI)
06

Anthropic faces lose-lose scenario in Pentagon conflict as deadline for policy change looms

policy
Feb 27, 2026

Anthropic, an AI startup, faces a Friday deadline to allow the U.S. Department of Defense to use its AI models without restrictions, or face severe penalties like being labeled a 'supply chain risk' (a designation that blocks government contractors from using the company's technology). The company has refused, saying it won't agree to uses it believes could undermine democracy, such as fully autonomous weapons or domestic mass surveillance, putting it in conflict between maintaining its reputation for responsible AI and losing significant military contracts and revenue.

CNBC Technology
07

OpenAI raises $110B in one of the largest private funding rounds in history

industry
Feb 27, 2026

OpenAI has secured $110 billion in private funding from major investors including Amazon ($50 billion), Nvidia ($30 billion), and SoftBank ($30 billion), making it one of the largest private funding rounds ever. The company plans to use this capital to scale its AI infrastructure globally, including building new runtime environments on Amazon's cloud services and committing to use significant computing power from both Amazon and Nvidia. This funding round reflects OpenAI's goal to move frontier AI (advanced AI systems at the cutting edge of research) from research phase into widespread daily use across the world.

TechCrunch
08

Claude Code Security Shows Promise, Not Perfection

securityresearch
Feb 27, 2026

Claude Code, an AI tool for writing software, generated excitement when it was released, but researchers studying it have found that its actual performance and security capabilities are not as impressive as initial claims suggested. The article indicates that people were too optimistic about what the tool could do.

Dark Reading
09

Netflix drops its WBD bid, Block layoffs, Anthropic's DOD deadline and more in Morning Squawk

industrypolicy
Feb 27, 2026

Anthropic, an AI startup, is refusing to let the U.S. Defense Department use its AI models without restrictions on fully autonomous weapons (weapons that make decisions without human control) and mass domestic surveillance. The Pentagon wants unlimited use of Anthropic's models and set a deadline for the company to agree, threatening to label them a supply chain risk (a company whose failure could disrupt critical systems) if they don't comply.

CNBC Technology
10

Anthropic Refuses to Bend to Pentagon on AI Safeguards as Dispute Nears Deadline

policysafety
Feb 27, 2026

Anthropic, an AI company, is in a dispute with the Pentagon over safeguards for its Claude AI system. The company is asking for specific guarantees that Claude won't be used for mass surveillance (monitoring large populations without consent) of Americans or in fully autonomous weapons (military systems that make lethal decisions without human control).

SecurityWeek
Prev1...7374757677...269Next
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026