aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,649 · Last 24 hours: 0 · Last 7 days: 157
Daily Briefing · Saturday, March 28, 2026

OpenAI Shuts Down Sora Video App Over Profitability Concerns: OpenAI discontinued its Sora video-generation app and canceled a $1 billion Disney partnership because the service consumed too many computational resources without generating enough revenue to justify costs as the company prioritizes profitability.


Critical Injection Vulnerability in localGPT LLM Tool: CVE-2026-5002 allows remote injection attacks (inserting malicious code into input) through the LLM Prompt Handler in PromtEngineer localGPT's backend. The exploit code is publicly available, and the vendor has not responded to disclosure attempts.


Latest Intel

01

Google Search is now using AI to replace headlines

safety
Mar 20, 2026

Google Search is now using AI to generate its own headlines in search results instead of showing the original headlines from websites. This changes Google's traditional approach of displaying exact content from websites, and in some cases the AI-generated headlines alter the meaning of the original stories.

Critical This Week · 5 issues

critical

CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis

CVE-2026-33873 · NVD/CVE Database · Mar 27, 2026

Political Deepfakes Gain Influence Despite Public Awareness: AI researchers found that creators use generative AI (technology that creates images or videos from text descriptions) to produce fake media of political figures for propaganda and profit, and these deepfakes shape public perception even when viewers know the content is fake.


TikTok's AI Ad Labels Failing in Practice: Major companies like Samsung are posting AI-generated ads on TikTok without the required disclosure labels, preventing users from identifying whether advertisements were created by AI or humans despite platform policies requiring transparency.

The Verge (AI)
02

Amazon is making an Alexa phone

industry
Mar 20, 2026

Amazon is developing a smartphone codenamed 'Transformer' focused on its Alexa AI assistant, though Alexa won't necessarily be the main operating system. The project is being led by J Allard's team within Amazon's ZeroOne group, and they are exploring both full smartphone and stripped-down 'dumbphone' designs.

The Verge (AI)
03

The Download: OpenAI is building a fully automated researcher, and a psychedelic trial blind spot

industry
Mar 20, 2026

This technology news roundup covers OpenAI's plan to build an autonomous AI researcher (a fully automated agent-based system that can solve complex problems independently), with an AI research intern prototype expected by September 2026 and a full multi-agent system by 2028. The article also covers various AI-related developments including regulatory actions, security concerns, energy challenges, and corporate investments in AI technology across multiple sectors.

MIT Technology Review
04

OpenAI is throwing everything into building a fully automated researcher

industry · research
Mar 20, 2026

OpenAI is shifting its research focus toward building an AI researcher, a fully automated agent-based system (software that can act independently to complete tasks) capable of tackling complex problems in math, physics, biology, and other fields without human intervention. The company plans to release an autonomous AI research intern by September 2026, with a more advanced multi-agent system (multiple AI agents working together) by 2028. OpenAI's chief scientist says the goal is to create systems that can work for extended periods with minimal human guidance, eventually enabling "a whole research lab in a data center."

MIT Technology Review
05

CVE-2026-33081: PinchTab is a standalone HTTP server that gives AI agents direct control over a Chrome browser. Versions 0.8.2 and below

security
Mar 20, 2026

PinchTab is an HTTP server (a program that handles web requests) that lets AI agents control a Chrome web browser. Versions 0.8.2 and earlier have a blind SSRF vulnerability (a flaw where an attacker tricks the server into making requests to internal networks that should be off-limits) in the /download endpoint, because the server only checks the URL once but the browser can follow hidden redirects to reach internal addresses. The risk is limited because the vulnerable feature is disabled by default.

Fix: The issue has been patched in version 0.8.3.
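The flaw is a classic time-of-check/time-of-use gap: the server validates the URL once, but the browser then follows redirects it never sees. A minimal sketch (our illustration, not PinchTab's actual code; `is_internal` and `check_redirect_chain` are hypothetical names) shows why every hop in a redirect chain must be re-validated:

```python
# Illustrative sketch of redirect-based blind SSRF, not PinchTab's code.
# A single pre-fetch URL check is insufficient when the fetching client
# follows redirects: each hop must be re-checked against internal ranges.
import ipaddress
import socket
from urllib.parse import urlparse

def is_internal(url: str) -> bool:
    """True if the URL's host resolves to a private/loopback/link-local address."""
    host = urlparse(url).hostname or ""
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return True  # fail closed: treat unresolvable hosts as off-limits
    return addr.is_private or addr.is_loopback or addr.is_link_local

def check_redirect_chain(hops: list[str]) -> bool:
    """Approve a fetch only if EVERY hop is external.
    Checking only hops[0] reproduces the blind-SSRF pattern described above."""
    return all(not is_internal(u) for u in hops)
```

In practice this means disabling automatic redirect following in the fetching client and re-running the check before each hop, rather than validating the initial URL alone.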

NVD/CVE Database
06

Who's most optimistic about AI — and who isn't, according to Anthropic

industry · research
Mar 20, 2026

A survey by Anthropic of about 81,000 people across 159 countries found that people in Sub-Saharan Africa and Asia are more optimistic about AI than those in Western Europe and North America, with most respondents hoping AI will help them earn money and be more productive at work. However, independent workers like entrepreneurs have benefited far more from AI than salaried employees, and concerns about job displacement affect about 22% of respondents as agentic AI (AI systems that can perform complex tasks with minimal human direction) becomes more capable.

CNBC Technology
07

The Importance of Behavioral Analytics in AI-Enabled Cyber Attacks

security · safety
Mar 20, 2026

Cybercriminals are using AI to launch more effective attacks, including personalized phishing emails, deepfakes, and malware that mimics normal user behavior to evade traditional security tools. Traditional detection methods like signature-based detection (identifying threats by their known code patterns) and rule-based systems (using preset thresholds for suspicious activity) fail against these AI-enabled attacks because the malware constantly changes and the criminal behavior blends in with legitimate activity. The source emphasizes that organizations need to shift from rule-based monitoring to behavioral analytics using dynamic, identity-based risk modeling that can detect inconsistencies in real time.
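The core idea behind behavioral baselining can be shown in a few lines. This is a generic sketch of the technique (our illustration, not any vendor's detection logic): instead of a fixed rule like "more than N downloads per day," activity is compared against each user's own history.

```python
# Minimal behavioral-baselining sketch: flag activity that deviates
# strongly from a user's OWN historical mean, rather than matching a
# fixed, global rule that polymorphic or AI-assisted attacks can evade.
from statistics import mean, stdev

def is_anomalous(history: list[float], observed: float,
                 threshold: float = 3.0) -> bool:
    """Flag `observed` if it lies more than `threshold` standard
    deviations from this user's historical baseline."""
    if len(history) < 2:
        return False  # not enough data to model "normal" behavior yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# e.g. a user who normally downloads ~10 files/day suddenly pulling 500
# is flagged, while 10 is not -- without any preset global threshold.
```

Real systems layer many such per-identity signals (login times, hosts touched, data volumes) into a dynamic risk score, but the contrast with static thresholds is the same.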

The Hacker News
08

CVE-2026-33075: FastGPT is an AI Agent building platform. In versions 4.14.8.3 and below, the fastgpt-preview-image.yml workflow is vuln

security
Mar 20, 2026

FastGPT (an AI platform for building AI agents) versions 4.14.8.3 and below have a critical security flaw where the fastgpt-preview-image.yml workflow uses pull_request_target (a GitHub feature that runs code with access to repository secrets) but executes code from an external contributor's fork, allowing attackers to run arbitrary code (commands on systems they don't own), steal secrets, and potentially compromise the production container registry (the central storage system for packaged software).
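The misconfiguration described here is a well-known GitHub Actions anti-pattern. A minimal illustrative workflow (our sketch, not FastGPT's actual fastgpt-preview-image.yml) shows the dangerous combination:

```yaml
# Illustrative anti-pattern, not FastGPT's actual workflow file.
# DANGEROUS: pull_request_target runs in the base repo's context with
# access to secrets, but checking out the PR head then executes
# untrusted fork code inside that privileged context.
on: pull_request_target
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}  # untrusted code
      - run: ./build.sh  # fork-controlled script, secrets.* in scope
```

The usual remediations are to use the unprivileged `pull_request` trigger for anything that builds contributor code, or, with `pull_request_target`, to never check out or execute the PR head.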

NVD/CVE Database
09

Meta AI agent’s instruction causes large sensitive data leak to employees

security · safety
Mar 20, 2026

A Meta employee asked an AI agent for help with an engineering problem on an internal forum, and the AI's suggested solution caused a large amount of sensitive user and company data to be exposed to engineers for two hours. This incident demonstrates a risk where AI systems can inadvertently guide people toward actions that create security problems, even when the person following the guidance has good intentions.

The Guardian Technology
10

CVE-2026-32950: SQLBot is an intelligent data query system based on a large language model and RAG. Versions prior to 1.7.0 contain a cr

security
Mar 20, 2026

SQLBot, an intelligent data query system that uses a large language model and RAG (retrieval-augmented generation, where an AI pulls in external documents to answer questions), has a critical SQL injection vulnerability (a bug where an attacker tricks the system into running unintended database commands) in versions before 1.7.0 that allows authenticated users to execute arbitrary code on the backend server. The vulnerability exists because Excel sheet names are directly inserted into database commands without proper sanitization (cleaning/validation), and attackers can exploit this by uploading specially crafted files to gain complete control of the system.

Fix: Update to version 1.7.0 or later, where this issue has been fixed.
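The root cause is a general one: SQL identifiers (table or sheet names) cannot be passed as bound parameters the way values can, so they must be validated before interpolation. A generic sketch (our illustration, not SQLBot's code; `safe_identifier` is a hypothetical helper) using SQLite:

```python
# Illustrative sketch, not SQLBot's code: identifiers such as sheet
# names cannot be bound as SQL parameters, so they must be validated
# (allow-listed) before being interpolated into a statement.
import re
import sqlite3

def safe_identifier(name: str) -> str:
    """Allow only letter/underscore-led alphanumeric names; reject the rest."""
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        raise ValueError(f"unsafe identifier: {name!r}")
    return name

conn = sqlite3.connect(":memory:")
sheet = safe_identifier("Sales_2026")             # validated identifier
conn.execute(f'CREATE TABLE "{sheet}" (v INT)')   # safe to interpolate now
conn.execute(f'INSERT INTO "{sheet}" (v) VALUES (?)', (1,))  # values stay bound
# A crafted sheet name like 'x"); DROP TABLE users;--' is rejected up front.
```

Interpolating the raw sheet name from an uploaded Excel file, as described above, is exactly what this validation step prevents.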

NVD/CVE Database
critical

Attackers exploit critical Langflow RCE within hours as CISA sounds alarm

CSO Online · Mar 27, 2026
critical

CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability

CVE-2025-53521 · CISA Known Exploited Vulnerabilities · Mar 26, 2026
critical

CISA: New Langflow flaw actively exploited to hijack AI workflows

BleepingComputer · Mar 26, 2026
critical

GHSA-mxrg-77hm-89hv: n8n: Prototype Pollution in XML and GSuiteAdmin node parameters lead to RCE

CVE-2026-33696 · GitHub Advisory Database · Mar 26, 2026