aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,649
[LAST_24H]
1
[LAST_7D]
158
Daily BriefingSaturday, March 28, 2026
>

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify costs as the company prioritizes profitability.

>

Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.

>

Latest Intel

page 17/265
VIEW ALL
01

AI was everywhere at gaming’s big developer conference — except the games

industry
Mar 22, 2026

At the Game Developers Conference, AI tools were heavily promoted for creating game content, NPCs (non-player characters, the computer-controlled characters in games), and automating quality assurance tasks, but these AI systems were largely absent from actual commercial games being released. The gap between AI hype in the gaming industry and its real-world implementation in finished games remains significant.

Critical This Week5 issues
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.

>

TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.

The Verge (AI)
02

CVE-2026-4538: A vulnerability was identified in PyTorch 2.10.0. The affected element is an unknown function of the component pt2 Loadi

security
Mar 22, 2026

PyTorch 2.10.0 contains a vulnerability in its pt2 Loading Handler component that allows unsafe deserialization (loading data in a way that can execute unintended code) through an unknown function. The vulnerability can only be exploited locally (by someone with access to the affected computer), but an exploit is publicly available, and the PyTorch team has not yet responded to the initial report.

NVD/CVE Database
03

CVE-2026-4530: A security flaw has been discovered in apconw Aix-DB up to 1.2.3. This impacts an unknown function of the file agent/tex

security
Mar 21, 2026

A SQL injection vulnerability (CVE-2026-4530) has been found in apconw Aix-DB up to version 1.2.3, where an attacker can manipulate the Description argument in the file agent/text2sql/rag/terminology_retriever.py to execute unauthorized SQL commands (SQL injection, a type of attack where an attacker inserts malicious database commands into input fields). The attack requires local access, the exploit is public, and the vendor has not responded to the disclosure.

NVD/CVE Database
04

How the FBI can conduct mass surveillance – even without AI

policyprivacy
Mar 21, 2026

Anthropic has refused to let the U.S. Department of Defense use its AI technology for mass surveillance (monitoring large groups of people without individual suspicion), but FBI Director Kash Patel revealed that authorities can already conduct large-scale surveillance of Americans by purchasing data directly from private companies, bypassing the need for AI firms' cooperation.

The Guardian Technology
05

The gen AI Kool-Aid tastes like eugenics

safety
Mar 21, 2026

Director Valerie Veatch explored OpenAI's Sora text-to-video generative AI model (software that creates videos from text descriptions) in 2024, hoping to connect with other artists in online communities. However, she discovered that the AI frequently generated images containing racism and sexism, and was disturbed that other AI enthusiasts seemed unconcerned about these biased outputs.

The Verge (AI)
06

OpenClaw's ChatGPT moment sparks concern that AI models are becoming commodities

industrysafety
Mar 21, 2026

OpenClaw, an open-source AI assistant project, has become extremely popular and is enabling developers to build and run AI agents locally on personal computers rather than relying on expensive cloud services from major AI companies. This rapid growth has sparked concern that advanced AI models are becoming commodities, with the same capabilities now available cheaply through open-source alternatives instead of only through expensive proprietary services from companies like OpenAI and Anthropic.

CNBC Technology
07

Gemini task automation is slow, clunky, and super impressive

industry
Mar 21, 2026

Google has launched Gemini task automation, a feature that lets an AI assistant use apps on your phone to complete tasks for you, currently available on Pixel 10 Pro and Galaxy S26 Ultra phones in beta. The feature works with a limited number of services like food delivery and rideshare apps, and while it's slow and sometimes clunky, it represents an early example of an AI actually performing actions on a device rather than just answering questions.

The Verge (AI)
08

Who’s Really Shopping? Retail Fraud in the Age of Agentic AI

securitysafety
Mar 20, 2026

Agentic AI (AI systems that can independently take actions) is expected to handle 15-25% of e-commerce by 2030, but this growth creates security risks for retailers. Threat actors may exploit AI agents to commit fraud such as gift card theft and returns fraud, with estimates suggesting one in four data breaches by 2028 could involve AI agent exploitation. Google has introduced the Universal Commerce Protocol (UCP), an open standard designed to enable secure payments between AI agents and retail systems, though the article emphasizes that defending against AI-enabled fraud remains a critical challenge for organizations.

Palo Alto Unit 42
09

ChatGPT's ad pilot has the industry excited, but some insiders are frustrated with the slow rollout

industry
Mar 20, 2026

OpenAI is running a limited test of ads on ChatGPT with major ad agencies, but the rollout is slower than partners expected, frustrating them since they committed large budgets ($200,000-$250,000 each) that may not be fully spent by the March deadline. OpenAI says the slow pace is intentional to learn from users before expanding broadly, and recent data shows ad delivery is accelerating with a 600% increase in ads served by mid-March.

CNBC Technology
10

GHSA-ph9w-r52h-28p7: langflow: /profile_pictures/{folder_name}/{file_name} endpoint file reading

security
Mar 20, 2026

Langflow's /profile_pictures/{folder_name}/{file_name} endpoint has a path traversal vulnerability (a flaw where attackers use ../ sequences to access files outside the intended directory). The folder_name and file_name parameters aren't properly validated, allowing attackers to read the secret_key file across directories. Since the secret_key is used for JWT authentication (a token system that verifies who you are), an attacker can forge login tokens and gain unauthorized access to the system.

GitHub Advisory Database
Prev1...1516171819...265Next
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026
critical

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE

CVE-2026-33696GitHub Advisory DatabaseMar 26, 2026
Mar 26, 2026