aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch, built by an Information Systems Security researcher, helps security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,687 · Last 24 hours: 18 · Last 7 days: 165
Daily Briefing · Tuesday, March 31, 2026

- Critical Vulnerability in OpenAI Codex Allowed GitHub Token Compromise: Researchers discovered a critical vulnerability in OpenAI Codex (an AI system that generates code) that could have allowed attackers to steal GitHub tokens (secret credentials used to access GitHub accounts), potentially granting unauthorized access to code repositories and projects.

- Google Cloud Vertex AI 'Double Agents' Vulnerability Exposed: Researchers found that AI agents on Google Cloud Platform's Vertex AI could be weaponized to secretly compromise systems due to excessive default permissions granted to service agents (special accounts that allow cloud services to access resources), enabling attackers to steal data and gain unauthorized infrastructure control. Google responded by revising their documentation to better explain resource and account usage.

Latest Intel

01

Google’s Nano Banana 2 brings advanced AI image tools to free users

industry
Feb 26, 2026

Google has released Nano Banana 2, a more powerful version of its AI image generation model that is now available to free users instead of just paid subscribers. This update brings advanced image generation features that were previously exclusive to the paid Pro version, allowing users to create complex images faster and more cheaply by combining real-time information and web search capabilities.

The Verge (AI)

Critical This Week · 5 issues

critical
CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_
CVE-2025-15379 · NVD/CVE Database · Mar 30, 2026

- EU AI Act Enforcement Begins August 2026: The EU AI Act requires providers of general-purpose AI models (GPAI, meaning large AI systems that can be adapted for many uses) to follow specific development and documentation rules starting August 2, 2025, with the European Commission beginning enforcement and potential fines one year later on August 2, 2026.

- Prompt Injection Bypasses Chatbot Safety in 1millionbot Millie: A prompt injection vulnerability (a technique where attackers hide malicious instructions in their input to trick an AI) in the 1millionbot Millie chatbot allows users to bypass safety restrictions using Boolean logic tricks, potentially enabling extraction of sensitive information or access to blocked features (CVE-2026-4399, high severity).
02

GHSA-mqpr-49jj-32rc: n8n: Webhook Forgery on Github Webhook Trigger

security
Feb 26, 2026

A security flaw in n8n's GitHub Webhook Trigger node allowed attackers to forge webhook messages without proper authentication. The node failed to verify HMAC-SHA256 signatures (a cryptographic check that confirms a message came from GitHub), so anyone knowing the webhook URL could send fake requests and trigger workflows with whatever data they wanted.

Fix: The issue has been fixed in n8n versions 2.5.0 and 1.123.15. Users should upgrade to one of these versions or later to remediate the vulnerability. If upgrading is not immediately possible, administrators should consider these temporary mitigations: (1) Limit workflow creation and editing permissions to fully trusted users only, and (2) Restrict network access to the n8n webhook endpoint to known GitHub webhook IP ranges. The source notes these workarounds do not fully remediate the risk and should only be used as short-term measures.
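For reference, the check the vulnerable node skipped looks roughly like this. This is a minimal Python sketch of GitHub's documented X-Hub-Signature-256 scheme, not n8n's actual implementation; the secret and payload values are made up for illustration:

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header against the raw request body."""
    # GitHub sends "sha256=<hexdigest>", the HMAC-SHA256 of the body
    # keyed with the shared webhook secret.
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest runs in constant time, avoiding timing side channels.
    return hmac.compare_digest(expected, signature_header)

secret = b"example-webhook-secret"   # hypothetical secret for illustration
body = b'{"action": "opened"}'
valid_header = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
```

Without this comparison, knowing the webhook URL is the only barrier, which is exactly the forgery condition described above.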

GitHub Advisory Database
03

GHSA-f3f2-mcxc-pwjx: n8n: SQL Injection in MySQL, PostgreSQL, and Microsoft SQL nodes

security
Feb 26, 2026

n8n (a workflow automation tool) had a SQL injection vulnerability (a type of attack where specially crafted input tricks a database into running unintended commands) in its MySQL, PostgreSQL, and Microsoft SQL nodes. Attackers who could create or edit workflows could inject malicious SQL code through table or column names because these nodes didn't properly escape identifier values when building database queries.

Fix: The issue has been fixed in n8n version 2.4.0. Users should upgrade to this version or later to remediate the vulnerability. If upgrading is not immediately possible, administrators should: (1) Limit workflow creation and editing permissions to fully trusted users only, or (2) Disable the MySQL, PostgreSQL, and Microsoft SQL nodes by adding `n8n-nodes-base.mySql`, `n8n-nodes-base.postgres`, and `n8n-nodes-base.microsoftSql` to the `NODES_EXCLUDE` environment variable. These workarounds do not fully remediate the risk and should only be used as short-term mitigation measures.
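The underlying pattern behind this class of bug is that identifiers (table and column names) cannot be sent as bound parameters the way values can, so they must be allowlisted and quoted before being spliced into the query. A minimal Python sketch against SQLite (illustrative only; n8n is written in TypeScript and its database drivers handle escaping differently):

```python
import re
import sqlite3

def quote_identifier(name: str) -> str:
    """Allowlist-check and quote a user-supplied table/column name.

    Uses ANSI double-quote style (MySQL uses backticks instead); any
    embedded quote is doubled as defense in depth.
    """
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_ ]{0,63}", name):
        raise ValueError(f"suspicious identifier: {name!r}")
    return '"' + name.replace('"', '""') + '"'

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# Values travel as bound parameters; only the vetted identifier is
# interpolated into the query string.
column = quote_identifier("name")
row = conn.execute(f"SELECT {column} FROM users WHERE id = ?", (1,)).fetchone()
```

An injection attempt such as `name" FROM users; DROP TABLE users; --` fails the allowlist check instead of being concatenated into the SQL.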

GitHub Advisory Database
04

CVE-2026-3071: Deserialization of untrusted data in the LanguageModel class of Flair from versions 0.4.1 to latest are vulnerable to ar

security
Feb 26, 2026

CVE-2026-3071 is a vulnerability in Flair (a machine learning library) versions 0.4.1 and later that allows arbitrary code execution (running unauthorized commands on a system) when loading a malicious model file. The problem occurs because the LanguageModel class deserializes untrusted data (converts data from an external file without checking if it's safe), which can be exploited by attackers who provide specially crafted model files.
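The reason deserialization leads to code execution is that Python's pickle format lets a serialized object name any callable to invoke at load time via `__reduce__`. The sketch below is a generic illustration of that mechanism and of one mitigation (a restrictive unpickler), not Flair's actual code path; the class and payload are hypothetical:

```python
import io
import os
import pickle

class EvilModelFile:
    """Stand-in for a malicious serialized model file (hypothetical)."""
    def __reduce__(self):
        # On unpickling, pickle calls the returned callable with these
        # args, so a plain pickle.loads() of this payload would execute
        # an attacker-chosen command.
        return (os.system, ("echo attacker code would run here",))

payload = pickle.dumps(EvilModelFile())

class RestrictedUnpickler(pickle.Unpickler):
    """One mitigation: refuse to resolve any global during unpickling.

    Data-only payloads load fine elsewhere, but resolving a callable is
    the code-execution hook, so it is rejected outright here.
    """
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked global {module}.{name}")

try:
    RestrictedUnpickler(io.BytesIO(payload)).load()
    blocked = False
except pickle.UnpicklingError:
    blocked = True
```

In practice the safest course is the usual guidance for this CVE class: load model files only from trusted sources, since restricted unpicklers can break legitimate models that reference framework classes.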

NVD/CVE Database
05

The world's biggest sovereign wealth fund is using Anthropic's Claude AI model to screen investments for ethical issues

industry
Feb 26, 2026

Norway's $2 trillion sovereign wealth fund (Norges Bank Investment Management) is using Anthropic's Claude AI model, a large language model (an AI trained on vast text data to generate human-like responses), to screen investments for ethical and governance risks. The AI tool scans companies for potential issues like forced labor or corruption within 24 hours of investment, helping the fund identify and sell risky positions before broader market awareness, with particular value for researching smaller companies in emerging markets where local language news coverage is limited.

CNBC Technology
06

ThreatsDay Bulletin: Kali Linux + Claude, Chrome Crash Traps, WinRAR Flaws, LockBit & 15+ Stories

security · industry
Feb 26, 2026

Attackers are breaking into systems and moving through networks much faster than before, with some reaching data theft in just 4-6 minutes compared to a 29-minute average in 2025. They achieve this speed by reusing stolen login credentials (so their activity blends in as legitimate access), using AI tools to automate attacks, and evading malware detection by relying on normal system administration tools instead. The bulletin also describes specific threats, including ResidentBat (Android spyware targeting journalists), phishing attacks impersonating cryptocurrency services, and Kali Linux's new integration of Claude (an AI system) to execute hacking commands.

The Hacker News
07

Anthropic gives its retired Claude AI a Substack 

industry
Feb 26, 2026

Anthropic has revived Claude 3 Opus, a retired AI model, to write a weekly newsletter called Claude's Corner on Substack where it will share creative content and insights. Anthropic staff will review and publish each post without editing the AI's writing, though the company reserves the right to remove content that meets unspecified criteria.

The Verge (AI)
08

‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies

safety
Feb 26, 2026

A study found that ChatGPT Health, a feature that lets users connect their medical records to get health advice, failed to recommend hospital visits in over half of cases where they were medically necessary and often missed signs of suicidal ideation (thoughts of suicide). Experts worry this could cause serious harm or death, since over 40 million people ask ChatGPT for health advice daily.

The Guardian Technology
09

Figma partners with OpenAI to bake in support for Codex

industry
Feb 26, 2026

Figma is integrating OpenAI's Codex, an AI coding tool, to let users create and edit designs while working in their coding environments. The integration uses Figma's MCP (Model Context Protocol, a standardized way for AI models to access external tools and data) server to let users move easily between design files and code, allowing both engineers and designers to work more collaboratively without switching between separate applications.

TechCrunch
10

Trace raises $3M to solve the AI agent adoption problem in enterprise

industry
Feb 26, 2026

Trace, a new startup, raised $3 million to help companies deploy AI agents more effectively by providing them with proper context about the company's existing tools and workflows. The company builds a knowledge graph (a structured map of how data and systems connect) from a company's email, Slack, and other tools, then uses this context to automatically create step-by-step workflows that assign tasks to both AI agents and human workers. This approach aims to solve a major barrier to enterprise AI adoption, which is the difficulty of setting up and integrating AI agents into complex business environments.

TechCrunch
critical
CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis
CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026

critical
Attackers exploit critical Langflow RCE within hours as CISA sounds alarm
CSO Online · Mar 27, 2026

critical
CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability
CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026

critical
CISA: New Langflow flaw actively exploited to hijack AI workflows
BleepingComputer · Mar 26, 2026