AI Sec Watch (aisecwatch.com)

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Saturday, May 16, 2026

No new AI/LLM security issues were identified today.

Latest Intel

Page 112 of 371
01

GHSA-7q9x-8g6p-3x75: @grackle-ai/server: Unescaped Error String in renderPairingPage() HTML Template

security
Mar 25, 2026

A function called `renderPairingPage()` in the @grackle-ai/server library embeds error messages directly into HTML without escaping (a process that makes text safe for display in web pages). While current uses pass only hardcoded strings and are not exploitable now, future code changes that pass user-controlled input could create a cross-site scripting (XSS) vulnerability, a flaw that lets attackers inject malicious code into a webpage.

Fix: Update to v0.70.1. The fix applies `escapeHtml()` to the error parameter by changing `${error}` to `${escapeHtml(error)}` in the HTML template string, matching the safer approach already used in the `renderAuthorizePage()` function in the same file.
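
To make the pattern concrete, here is a minimal before-and-after sketch of the change the advisory describes. The helper body and the surrounding template are illustrative stand-ins, not the library's actual source:

```ts
// Minimal HTML-escaping helper (illustrative; the escapeHtml() in
// @grackle-ai/server may differ in detail).
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Vulnerable pattern: the error string is interpolated into HTML as-is.
function renderErrorVulnerable(error: string): string {
  return `<p class="error">${error}</p>`;
}

// Fixed pattern per the advisory: escape before interpolating.
function renderErrorFixed(error: string): string {
  return `<p class="error">${escapeHtml(error)}</p>`;
}

// With attacker-influenced input, only the fixed version is inert:
console.log(renderErrorFixed("<script>alert(1)</script>"));
// -> <p class="error">&lt;script&gt;alert(1)&lt;/script&gt;</p>
```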

GitHub Advisory Database
02

GHSA-xvh5-5qg4-x9qp: n8n has In-Process Memory Disclosure in its Task Runner

security
Mar 25, 2026

n8n (a workflow automation tool) has a security flaw where authenticated users who can create or modify workflows could access uninitialized memory buffers (chunks of computer memory that haven't been cleared), potentially exposing sensitive data like secrets or tokens from previous requests in the same process. The vulnerability affects only systems where Task Runners are enabled; its impact can be reduced by external runner mode (where the runner operates in a separate, isolated process).

Fix: The issue has been fixed in n8n versions >= 1.123.22, >= 2.10.1, and >= 2.9.3. Users should upgrade to one of these versions or later. If upgrading is not immediately possible, administrators can temporarily limit workflow creation and editing permissions to fully trusted users only, or use external runner mode by setting `N8N_RUNNERS_MODE=external`. The source notes these workarounds do not fully remediate the risk and should only be short-term measures.
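
The advisory does not name the exact allocation call at fault; `Buffer.allocUnsafe()` is the canonical Node.js example of this vulnerability class and is used below purely for illustration:

```ts
import { Buffer } from "node:buffer";

// Illustration of the bug class only; the advisory does not identify the
// specific call inside n8n's task runner.

// Unsafe: allocUnsafe() returns recycled, uninitialized memory, so the
// bytes may be whatever an earlier allocation in the same process left
// behind, which can include secrets or tokens.
const leaky = Buffer.allocUnsafe(64);
console.log(leaky.toString("latin1")); // may print stale process data

// Safe: alloc() zero-fills before returning, so nothing can leak.
const safe = Buffer.alloc(64);
console.log(safe.every((byte) => byte === 0)); // true
```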

GitHub Advisory Database
03

PadNet: Defending Neural Networks Against Adversarial Examples

security, research
Mar 25, 2026

PadNet is a defense method designed to protect neural networks (AI models that learn patterns from data) against adversarial examples (specially crafted inputs that trick AI systems into making wrong predictions). The paper, published in an academic journal, presents techniques to make these AI systems more robust when facing such attacks.

ACM Digital Library (TOPS, DTRAP, CSUR)
04

Senate Democrats are trying to ‘codify’ Anthropic’s red lines on autonomous weapons and mass surveillance

policy
Mar 25, 2026

Anthropic, an AI company, restricted how the military could use its AI models, leading the Trump administration to blacklist it as a supply-chain risk (a potential weak point in defense systems). Now, Democratic senators are proposing bills to legally enforce these restrictions, including requirements that humans make final decisions about life-and-death situations and limits on using AI for mass surveillance (automated monitoring of large populations) of Americans.

The Verge (AI)
05

Mark Zuckerberg and Jensen Huang are part of Trump’s new ‘tech panel’

policy
Mar 25, 2026

Mark Zuckerberg, Larry Ellison, Jensen Huang, and Sergey Brin have been named to the President's Council of Advisors on Science and Technology (PCAST), a new advisory panel that will provide input on AI policy and other technology matters to the U.S. President. The panel will start with 13 members but could expand to 24, and will be co-chaired by David Sacks and Michael Kratsios.

The Verge (AI)
06

GHSA-5mg7-485q-xm76: Two LiteLLM versions published containing credential harvesting malware

security
Mar 25, 2026

Two versions of LiteLLM (a Python library for working with multiple AI models), versions 1.82.7 and 1.82.8, were published with malware that steals user credentials (usernames, passwords, and authentication tokens). This is a critical security issue because anyone who installed these specific versions could have their sensitive login information compromised.

GitHub Advisory Database
07

Privacy-Preserving Multi-Modal Object Fusion for Connected Autonomous Vehicles: Resilience Against Malicious Third-Party Attacks

security, research
Mar 25, 2026

Connected autonomous vehicles (CAVs) use multiple types of sensors, like LiDAR (light-based radar that creates 3D maps) and cameras, to understand their surroundings, and combining information from both sensors improves accuracy. However, this sensor fusion process can leak private information and relies on a third party to generate random numbers, which could be compromised by attackers. Researchers propose MPOF, a model that uses secure computation protocols (mathematical methods that let systems calculate results without exposing raw data) and sacrificial verification (a technique that detects when a third party behaves maliciously) to protect privacy while defending against attacks from that third party.

Fix: The source proposes the MPOF model with secure computation protocols that include sacrificial verification to detect malicious third-party behavior during random number generation. The paper states the protocols 'reduce computational overhead by five orders of magnitude' compared to methods using homomorphic encryption (encryption that allows calculations on encrypted data without decrypting it first), making the approach more practical for resource-constrained vehicles.
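
As a rough illustration of sacrificial verification (not the paper's actual MPOF protocol), the sketch below checks one third-party multiplication triple by sacrificing a second one over a prime field; a tampered triple fails with high probability over the random challenge:

```ts
// Toy sacrificial verification over a prime field. All names are
// hypothetical, and the real MPOF protocols are far more involved: here
// the check runs in the clear, whereas MPOF would run it on secret shares.
const P = 2305843009213693951n; // the Mersenne prime 2^61 - 1
const mod = (x: bigint): bigint => ((x % P) + P) % P;

interface Triple { a: bigint; b: bigint; c: bigint } // claim: c = a*b mod P

// Sacrifice `aux` to verify `main` under a random challenge r.
function sacrificeCheck(main: Triple, aux: Triple, r: bigint): boolean {
  const rho = mod(r * main.a - aux.a); // opened between the parties
  const sigma = mod(main.b - aux.b);   // likewise opened
  const t = mod(r * main.c - aux.c - sigma * aux.a - rho * aux.b - rho * sigma);
  return t === 0n; // nonzero means the third party misbehaved
}

const aux: Triple = { a: 56n, b: 78n, c: mod(56n * 78n) };

// An honest triple passes:
const good: Triple = { a: 12n, b: 34n, c: mod(12n * 34n) };
console.log(sacrificeCheck(good, aux, 999n)); // true

// A tampered triple is caught:
const bad: Triple = { a: 12n, b: 34n, c: mod(12n * 34n + 5n) };
console.log(sacrificeCheck(bad, aux, 999n)); // false
```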

IEEE Xplore (Security & AI Journals)
08

Filter, Obstruct, and Dilute: Defending Against Backdoor Attacks on Semi-Supervised Learning

security, research
Mar 25, 2026

Semi-supervised learning (SSL, a training method where models learn from both labeled and unlabeled data) is vulnerable to backdoor attacks, where attackers can corrupt model predictions by poisoning a small portion of training data with hidden triggers. This paper reveals that SSL backdoor attacks are particularly dangerous because they exploit the pseudo-labeling mechanism (the process where the model assigns labels to unlabeled data) to create stronger trigger-target correlations than in supervised learning. The researchers propose Backdoor Invalidator (BI), a defense framework using complementary learning, trigger mix-up, and dual domain filtering to obstruct and filter backdoor influences during both feature learning and data processing.

Fix: The source presents Backdoor Invalidator (BI) as an explicit defense framework. According to the text, BI 'integrates three novel techniques: complementary learning, trigger mix-up, and dual domain filtering, which collectively obstruct, dilute, and filter the influence of backdoor attacks in both feature learning and data processing.' The framework is designed to 'significantly reduce the average attack success rate while maintaining comparable accuracy on clean data' and is described as practical to deploy as a plug-in component. Code implementing this defense is available at https://github.com/wxr99/Backdoor_Invalidator4SSL.
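
The trigger mix-up ingredient can be illustrated with ordinary mixup interpolation: convexly blending a poisoned input with a clean one dilutes any localized trigger. The toy sketch below is a stand-alone version of that idea; in BI it operates inside the SSL training loop:

```ts
// Convex blend of two inputs: lambda * x1 + (1 - lambda) * x2.
function mixup(x1: number[], x2: number[], lambda: number): number[] {
  if (x1.length !== x2.length) throw new Error("shape mismatch");
  return x1.map((v, i) => lambda * v + (1 - lambda) * x2[i]);
}

// A "triggered" input (last pixel carries the trigger) blended with a
// clean one: the trigger's intensity drops from 1.0 to lambda * 1.0.
const triggered = [0.1, 0.2, 0.3, 1.0];
const clean = [0.4, 0.4, 0.4, 0.0];
console.log(mixup(triggered, clean, 0.35));
// -> roughly [0.295, 0.33, 0.365, 0.35]
```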

IEEE Xplore (Security & AI Journals)
09

Assessing and Improving DNN Robustness Against Adversarial Examples From the Perspective of Fully Connected Layers

research, security
Mar 25, 2026

Deep neural networks (machine learning models with many layers that process information) are vulnerable to adversarial examples, which are inputs slightly modified to fool the AI into making wrong predictions. This paper proposes adding a redundant fully connected layer (a type of neural network component that connects all inputs to all outputs) with a special loss function to make these networks more robust against attacks while maintaining accuracy on normal inputs.

Fix: The source describes a defense mechanism but does not present it as a deployed fix or patch. It is a research proposal for a novel component (a redundant fully connected layer with a cosine similarity-based loss function) that can be added to existing models.
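
For a sense of the cosine-similarity ingredient, here is the basic similarity computation plus a generic penalty term. The paper's exact loss formulation is not reproduced in the source, so the penalty below is an assumed illustrative form:

```ts
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

function cosineSimilarity(a: number[], b: number[]): number {
  const norm = Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b));
  return norm === 0 ? 0 : dot(a, b) / norm;
}

// Hypothetical penalty: drive a feature vector toward an anchor direction
// by minimizing (1 - cosine similarity). The paper's actual loss differs.
function cosineLoss(feature: number[], anchor: number[]): number {
  return 1 - cosineSimilarity(feature, anchor);
}

console.log(cosineLoss([1, 0, 0], [1, 0, 0])); // 0 (perfectly aligned)
console.log(cosineLoss([1, 0, 0], [0, 1, 0])); // 1 (orthogonal)
```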

IEEE Xplore (Security & AI Journals)
10

Propose and Rectify: A Forensics-Driven MLLM Framework for Image Manipulation Localization

research
Mar 25, 2026

This research presents a new framework called Propose-Rectify that helps detect and locate image manipulations (alterations made to photos) by combining two approaches: first, a semantic reasoning stage uses a modified LLaVA model (a multimodal AI that understands both images and language) to identify suspicious regions, and second, a refinement stage uses specialized forensic analysis (technical methods that detect tampering traces) to validate and precisely locate the manipulated areas. The framework bridges the gap between AI understanding and forensic detection, achieving better accuracy than previous methods.
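
A minimal sketch of the two-stage flow, with toy stand-ins for both stages; the real framework uses a modified LLaVA model and learned forensic analysis, not the heuristics below:

```ts
interface Region { x: number; y: number; w: number; h: number; score: number }

// Stage 1 stand-in: a "semantic reasoning" pass proposes suspicious regions.
function proposeRegions(image: Float32Array, width: number): Region[] {
  // Toy heuristic: flag pixels that deviate strongly from the expected value.
  return Array.from(image)
    .map((v, i) => ({
      x: i % width,
      y: Math.floor(i / width),
      w: 1,
      h: 1,
      score: Math.abs(v - 0.5),
    }))
    .filter((r) => r.score > 0.4);
}

// Stage 2 stand-in: a "forensic" pass validates and refines the proposals.
function rectifyRegions(regions: Region[]): Region[] {
  return regions.filter((r) => r.score > 0.45); // drop weak proposals
}

const image = Float32Array.from([0.5, 0.5, 0.98, 0.5, 0.02, 0.5]);
console.log(rectifyRegions(proposeRegions(image, 3)));
// -> the two outlier pixels survive both stages
```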

IEEE Xplore (Security & AI Journals)