aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,687
[LAST_24H]
18
[LAST_7D]
164
Daily BriefingTuesday, March 31, 2026
>

Critical Vulnerability in OpenAI Codex Allowed GitHub Token Compromise: Researchers discovered a critical vulnerability in OpenAI Codex (an AI system that generates code) that could have allowed attackers to steal GitHub tokens (secret credentials used to access GitHub accounts), potentially granting unauthorized access to code repositories and projects.

>

Google Cloud Vertex AI 'Double Agents' Vulnerability Exposed: Researchers found that AI agents on Google Cloud Platform's Vertex AI could be weaponized to secretly compromise systems due to excessive default permissions granted to service agents (special accounts that allow cloud services to access resources), enabling attackers to steal data and gain unauthorized infrastructure control. Google responded by revising their documentation to better explain resource and account usage.

Latest Intel

page 79/269
VIEW ALL
01

Claude Code Flaws Exposed Developer Devices to Silent Hacking

security
Feb 26, 2026

Anthropic discovered and fixed security vulnerabilities in Claude (an AI assistant) that could allow attackers to silently compromise developer computers through specially crafted configuration files. Security researchers at Check Point showed how these flaws could be exploited in real-world attacks.

Critical This Week5 issues
critical

CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_

CVE-2025-15379NVD/CVE DatabaseMar 30, 2026
Mar 30, 2026
>

EU AI Act Enforcement Begins August 2026: The EU AI Act requires providers of general-purpose AI models (GPAI, meaning large AI systems that can be adapted for many uses) to follow specific development and documentation rules starting August 2, 2025, with the European Commission beginning enforcement and potential fines one year later on August 2, 2026.

>

Prompt Injection Bypasses Chatbot Safety in 1millionbot Millie: A prompt injection vulnerability (a technique where attackers hide malicious instructions in their input to trick an AI) in the 1millionbot Millie chatbot allows users to bypass safety restrictions using Boolean logic tricks, potentially enabling extraction of sensitive information or access to blocked features (CVE-2026-4399, high severity).

SecurityWeek
02

Hacker kompromittieren immer schneller

securityindustry
Feb 26, 2026

Hackers are compromising networks much faster in 2025, taking an average of only 29 minutes to gain full access compared to 83 minutes in 2024, with the fastest recorded time being just 27 seconds. The main reason for this acceleration is the increased use of AI tools by attackers, particularly state-sponsored and criminal groups who have boosted their activity by 89 percent, with examples including LLM-based malware (AI models trained on large amounts of text data) for automating information gathering and AI-generated scripts for extracting credentials and covering their tracks.

CSO Online
03

LLMs Generate Predictable Passwords

safetysecurity
Feb 26, 2026

Large language models (LLMs, AI systems trained on text data) are very bad at generating passwords because they create predictable patterns instead of truly random ones. The study found that Claude, an LLM, always started passwords with an uppercase G followed by 7, avoided repeating characters, never used the * symbol, and repeated the same password 36% of the time across 50 attempts. This is a serious problem because autonomous AI agents (AI systems that act without human control) will need to create accounts and authenticate themselves, but the passwords they generate are weak and easy to crack.

Schneier on Security
04

5 trends that should top CISO’s RSA 2026 agendas

securityindustry
Feb 26, 2026

RSA 2026 will focus on five cybersecurity trends, including AI-SOCs (security operations centers using autonomous agents to handle alert triage and incident response), CTEM (continuous threat exposure management, which gives organizations a complete view of their assets and vulnerabilities to prioritize risk), and cyber resilience (the ability to anticipate, withstand, recover from, and adapt to attacks). Security leaders should approach these trends with cautious skepticism, asking tough questions about vendor claims and ensuring strong data foundations before adopting new tools.

CSO Online
05

Google API Keys Weren't Secrets. But then Gemini Changed the Rules.

security
Feb 25, 2026

Google API keys that were originally created as public identifiers for Google Maps became dangerous security risks when Google enabled the Gemini API on the same projects, because Gemini keys can access private files and make billable requests, yet developers were never notified of this privilege change. Truffle Security discovered nearly 3,000 exposed API keys in web archives that could access Gemini, including some belonging to Google itself, highlighting how a service upgrade unexpectedly transformed harmless public keys into secret credentials.

Fix: Google is working to revoke affected keys. Additionally, Google recommends checking your own API keys to verify none of yours are affected by this issue.

Simon Willison's Weblog
06

Nvidia’s Jensen Huang says markets ‘got it wrong’ on AI threat to software companies

industry
Feb 25, 2026

Nvidia CEO Jensen Huang argued that markets are wrong to fear AI agents will destroy software companies, saying instead that AI agents are 'tool users' that will rely on existing enterprise software tools like Excel, ServiceNow, and SAP to become more productive. Huang's comments came after Nvidia reported strong earnings and raised its revenue forecast, though some analysts warn that certain software companies could still face serious challenges as AI automates workflows and lowers barriers for new competitors.

CNBC Technology
07

Nvidia’s Huang says any Pentagon–Anthropic rift is 'not the end of the world'

policy
Feb 25, 2026

Nvidia CEO Jensen Huang downplayed concerns about a dispute between the U.S. Defense Department and Anthropic, a company that makes Claude (a large language model, or LLM). The disagreement centers on whether Anthropic's AI tools can be used for autonomous weapons (weapons that make decisions without human control) and mass surveillance, with the Defense Department demanding unrestricted use while Anthropic seeks limitations.

CNBC Technology
08

CVE-2026-27966: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.8.0, the CSV Agent nod

security
Feb 25, 2026

Langflow, a tool for building AI-powered agents and workflows, had a vulnerability in versions before 1.8.0 where the CSV Agent node automatically enabled a dangerous Python execution feature. This allowed attackers to run arbitrary Python and operating system commands on the server through prompt injection (tricking the AI by hiding instructions in its input), resulting in RCE (remote code execution, where an attacker can run commands on a system they don't own).

Fix: Version 1.8.0 fixes the issue.

NVD/CVE Database
09

Gushwork bets on AI search for customer leads — and early results are emerging

industry
Feb 25, 2026

Gushwork, an India-founded startup, is helping businesses get discovered through AI-powered search tools (systems like ChatGPT and Perplexity that use artificial intelligence to answer questions) by automatically creating search-optimized content and building backlinks (links from other websites that point to a business's site). The company raised $9 million in funding and reports that AI-driven search and chat platforms now account for about 40% of inbound leads for its customers, despite representing only 20% of website traffic.

TechCrunch
10

Chinese Police Use ChatGPT to Smear Japan PM Takaichi

security
Feb 25, 2026

A Chinese internet activist accidentally exposed details about coordinated political influence operations (organized campaigns to manipulate public opinion) that used ChatGPT to create negative content about Japan's Prime Minister Takaichi. The leak revealed how ChatGPT was being used as a tool to generate misleading material for political purposes.

Dark Reading
Prev1...7778798081...269Next
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026