aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

[TOTAL_TRACKED]
2,649
[LAST_24H]
1
[LAST_7D]
158
Daily BriefingSaturday, March 28, 2026
>

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify costs as the company prioritizes profitability.

>

Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.

>

Latest Intel

page 15/265
VIEW ALL
01

Palo Alto updates security platform to discover AI agents

securityindustry
Critical This Week5 issues
critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873NVD/CVE DatabaseMar 27, 2026
Mar 27, 2026

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.

>

TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.

Mar 23, 2026

Palo Alto Networks updated its Prisma AIRS security platform to help organizations discover and protect AI agents (independent software programs that perform tasks automatically) across their IT environments, including scanning for vulnerabilities and simulating attacks. As companies rapidly deploy AI agents in business applications, the platform adds new security features like Agent Artifact Security, which maps an agent's structure and finds weaknesses, and AI Red Teaming for Agents, which simulates realistic attacks to identify risks and recommend security policies.

Fix: Prisma AIRS 3.0 provides discovery of AI agents across cloud environments, SaaS platforms, and local endpoints; Agent Artifact Security to scan agent architecture for vulnerabilities; and AI Red Teaming for Agents to simulate context-aware attacks and recommend runtime security policies. Prisma Browser includes the ability to discover user-generated AI activity, enforce content-aware boundaries on agents, prevent sensitive data leakage to unmanaged AI tools, identify and block prompt injection attacks (malicious instructions hidden in website content designed to hijack AI agents), and provide real-time distinction between human and automated AI actions.

CSO Online
02

OpenAI rolls out ChatGPT Library to store your personal files

securityprivacy
Mar 23, 2026

OpenAI has launched a Library feature for ChatGPT that automatically saves files you upload (documents, images, spreadsheets, etc.) to a secure cloud storage location for future reference. The feature is available to ChatGPT Plus, Pro, and Business subscribers worldwide except in the European Economic Area, Switzerland, and the United Kingdom, and files remain saved to your account until you manually delete them.

Fix: To delete files from Library, select the file in the Library tab, click Delete or the trash icon next to the file. OpenAI will remove files from its servers within 30 days of deletion. Note that deleting a chat containing a file does not automatically delete those files saved to Library, so manual deletion from the Library tab is required.

BleepingComputer
03

OpenAI calls out Microsoft reliance as risk in investor document ahead of expected IPO

policyindustry
Mar 23, 2026

OpenAI disclosed in an investor document that its heavy dependence on Microsoft for financing and computing resources poses a business risk, noting that if Microsoft ends their partnership or OpenAI cannot diversify its business partners, the company's operations and finances could suffer. The document also highlighted other risks including massive capital spending requirements, reliance on chip suppliers like Taiwan Semiconductor Manufacturing Company, and potential geopolitical disruptions to the global chip supply chain.

CNBC Technology
04

CVE-2026-30886: New API is a large language mode (LLM) gateway and artificial intelligence (AI) asset management system. Prior to versio

security
Mar 23, 2026

New API, an LLM (large language model) gateway and AI asset management system, had a vulnerability before version 0.11.4-alpha.2 that allowed any logged-in user to view videos belonging to other users through the video proxy endpoint. The problem was an IDOR vulnerability (insecure direct object reference, a flaw where the system doesn't check if a user owns the data they're requesting), caused by a function that checked only the video ID without verifying the user owned it.

Fix: Update to version 0.11.4-alpha.2 or later, which contains a patch addressing this vulnerability.

NVD/CVE Database
05

Faster attacks and ‘recovery denial’ ransomware reshape threat landscape

securityindustry
Mar 23, 2026

A 2026 Mandiant security report shows that attackers are operating faster and more collaboratively, with hand-offs between threat groups now happening in 22 seconds instead of 8+ hours. Attackers are shifting tactics away from email phishing (6% of attacks) toward voice phishing (11%) and other interactive social engineering, while increasingly targeting recovery systems through 'recovery denial' ransomware to prevent organizations from restoring after breaches.

CSO Online
06

Varonis Atlas: Securing AI and the Data That Powers It

securityindustry
Mar 23, 2026

Varonis Atlas is an AI security platform that helps organizations discover, monitor, and protect AI systems across their enterprise, from custom AI models to chatbots and AI agents. The platform addresses a major security gap: most organizations don't know which AI systems they have, what data those systems can access, or whether they're compliant with regulations, creating risks since AI agents can read and modify data at machine speed. Atlas covers the entire AI security lifecycle through features like continuous AI discovery, posture management (vulnerability and misconfiguration assessment), runtime protection, and compliance reporting.

BleepingComputer
07

Confronting the CEO of the AI company that impersonated me

safetyprivacy
Mar 23, 2026

Grammarly (now part of Superhuman) launched a feature called Expert Review in August that used AI to create cloned versions of real journalists and writers, including the interviewer, without their permission to provide writing suggestions. The company faced backlash and legal action, ultimately killing the feature entirely and offering an opt-out option.

Fix: Superhuman responded by first offering an email-based opt out and then killing the feature entirely.

The Verge (AI)
08

You Built the Brain. Now Protect It.

securityindustry
Mar 23, 2026

As companies convert traditional data centers into AI factories (facilities that produce and run large language models, or LLMs) to generate revenue and gain competitive advantages, they face new security risks. Check Point has created a blueprint architecture (a detailed security design plan) to help enterprises protect these AI data centers as the market grows significantly from $236 billion in 2025 to $934 billion by 2030.

Check Point Research
09

Check Point at RSAC – How We’re Helping Our Customers Secure their AI Transformation

securitypolicy
Mar 23, 2026

Companies are quickly adopting AI tools to improve productivity and gain business advantages, but this creates new security risks. AI tools often access sensitive company data like customer records and emails, and employees may use LLMs (large language models, AI systems trained on huge amounts of text) without approval, risking accidental leaks of confidential information.

Check Point Research
10

The Download: animal welfare gets AGI-pilled, and the White House unveils its AI policy

policyindustry
Mar 23, 2026

This newsletter covers multiple AI-related developments, including animal welfare advocates exploring how artificial general intelligence (AGI, a theoretical AI system that can learn and perform any intellectual task) might reduce animal suffering, the White House unveiling a light-touch AI regulation framework, and various corporate moves like OpenAI adding ads to free ChatGPT and the Pentagon adopting Palantir's AI for military targeting. The article also discusses Elon Musk being found liable for misleading Twitter investors and a case where an Australian woman's experimental brain implant was removed against her wishes despite significantly improving her quality of life.

MIT Technology Review
Prev1...1314151617...265Next
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO OnlineMar 27, 2026
Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521CISA Known Exploited VulnerabilitiesMar 26, 2026
Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputerMar 26, 2026
Mar 26, 2026
critical

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE

CVE-2026-33696GitHub Advisory DatabaseMar 26, 2026
Mar 26, 2026