aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDatasetFor devs
Subscribe
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

[TOTAL_TRACKED]
3,710
[LAST_24H]
1
[LAST_7D]
1
Daily BriefingSunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

page 227/371
VIEW ALL
01

Nonparametric Estimation of a Factorizable Density using Diffusion Models

research
Dec 31, 2025

This research paper studies diffusion models, a type of AI used to generate images and audio, as a statistical method for density estimation (learning the probability distribution of data). The authors show that when data has a factorizable structure (meaning it can be broken into independent low-dimensional components, like in Bayesian networks), diffusion models can efficiently learn this structure and achieve optimal performance using a specially designed sparse neural network architecture (one where most connections between neurons are inactive).

JMLR (Journal of Machine Learning Research)
02

CVE-2025-62154: Missing Authorization vulnerability in Recorp AI Content Writing Assistant (Content Writer, ChatGPT, Image Generator) Al

security
Dec 31, 2025

A missing authorization vulnerability (CWE-862, a weakness where the system fails to check if a user has permission to access something) was found in the Recorp AI Content Writing Assistant plugin for WordPress, affecting versions up to 1.1.7. This flaw allows attackers to exploit incorrectly configured access control, meaning they could potentially access features or data they shouldn't be able to reach.

NVD/CVE Database
03

Adoption of ChatGPT in Organizations: Technology Affordance and Constraints Theory Perspective

research
Dec 31, 2025

This research studied what makes knowledge workers (people whose jobs involve handling information) want to use ChatGPT at work, using technology affordance and constraints theory (a framework explaining how tools enable certain actions while limiting others). The study found that ChatGPT's benefits like automation, information quality, and productivity boost adoption, but concerns about risk and lack of regulation reduce it. Personal innovativeness (how open someone is to new ideas) and supportive workplace culture help workers embrace ChatGPT despite their concerns.

AIS eLibrary (Journal of AIS, CAIS, etc.)
04

CVE-2025-62116: Missing Authorization vulnerability in Quadlayers AI Copilot allows Exploiting Incorrectly Configured Access Control Sec

security
Dec 31, 2025

CVE-2025-62116 is a missing authorization vulnerability (a security flaw where the software fails to check if a user has permission to perform an action) in Quadlayers AI Copilot that affects versions up to 1.4.7. The vulnerability allows attackers to exploit incorrectly configured access control security levels, meaning they may be able to access or perform actions they shouldn't be allowed to.

NVD/CVE Database
05

Agentic ProbLLMs: Exploiting AI Computer-Use And Coding Agents (39C3 Video + Slides)

securityresearch
Dec 31, 2025

This presentation covers security vulnerabilities found in agentic systems, which are AI agents (systems that can take actions autonomously) that can use computers and write code. The talk includes demonstrations of exploits discovered during the Month of AI Bugs, a security research initiative focused on finding bugs in AI systems.

Embrace The Red
06

v0.14.12

security
Dec 29, 2025

This is a release of llama-index v0.14.12, a framework for building AI applications, containing various updates across multiple components including bug fixes, new features for asynchronous tool support, and improvements to integrations with services like OpenAI, Google, Anthropic, and various vector stores (databases that store numerical representations of data for AI searching). Key fixes address issues like crashes in logging, missing parameters in tool handling, and compatibility improvements for newer Python versions.

LlamaIndex Security Releases
07

CVE-2025-67729: LMDeploy is a toolkit for compressing, deploying, and serving LLMs. Prior to version 0.11.1, an insecure deserialization

security
Dec 26, 2025

LMDeploy is a toolkit for compressing, deploying, and serving large language models (LLMs). Prior to version 0.11.1, the software had an insecure deserialization vulnerability (unsafe conversion of data back into executable code) where it used torch.load() without the weights_only=True parameter when opening model checkpoint files, allowing attackers to run arbitrary code on a victim's machine by tricking them into loading a malicious .bin or .pt model file.

Fix: This issue has been patched in version 0.11.1.

NVD/CVE Database
08

HGNN Shield: Defending Hypergraph Neural Networks Against High-Order Structure Attack

securityresearch
Dec 26, 2025

Hypergraph Neural Networks (HGNNs, which are AI models that learn from data where connections can link multiple items together instead of just pairs) can be weakened by structural attacks that corrupt their connections and reduce accuracy. HGNN Shield is a defense framework with two main components: Hyperedge-Dependent Estimation (which assesses how important each connection is within the network) and High-Order Shield (which detects and removes harmful connections before the AI processes data). Experiments show the framework improves performance by an average of 9.33% compared to existing defenses.

Fix: The HGNN Shield defense framework addresses the vulnerability through two modules: (1) Hyperedge-Dependent Estimation (HDE) that 'prioritizes vertex dependencies within hyperedges and adapts traditional connectivity measures to hypergraphs, facilitating precise structural modifications,' and (2) High-Order Shield (HOS) positioned before convolutional layers, which 'consists of three submodules: Hyperpath Cut, Hyperpath Link, and Hyperpath Refine' that 'collectively detect, disconnect, and refine adversarial connections, ensuring robust message propagation.'

IEEE Xplore (Security & AI Journals)
09

CVE-2025-68665: LangChain is a framework for building LLM-powered applications. Prior to @langchain/core versions 0.3.80 and 1.1.8, and

security
Dec 23, 2025

LangChain, a framework for building applications powered by LLMs (large language models), had a serialization injection vulnerability (a flaw where specially crafted data can be misinterpreted as legitimate code during the conversion of objects to JSON format) in its toJSON() method. The vulnerability occurred because the method failed to properly escape objects containing 'lc' keys, which LangChain uses internally to mark serialized objects, allowing attackers to trick the system into treating malicious user data as legitimate LangChain objects when deserializing (converting back from JSON format).

Fix: Update @langchain/core to version 0.3.80 or 1.1.8, and update langchain to version 0.3.37 or 1.2.3. According to the source: 'This issue has been patched in @langchain/core versions 0.3.80 and 1.1.8, and langchain versions 0.3.37 and 1.2.3.'

NVD/CVE Database
10

CVE-2025-68664: LangChain is a framework for building agents and LLM-powered applications. Prior to versions 0.3.81 and 1.2.5, a seriali

security
Dec 23, 2025

LangChain, a framework for building AI agents and applications powered by large language models, had a serialization injection vulnerability (a flaw in how it converts data to stored formats) in its dumps() and dumpd() functions before versions 0.3.81 and 1.2.5. The functions failed to properly escape dictionaries containing 'lc' keys, which LangChain uses internally to mark serialized objects, allowing attackers to trick the system into treating user-supplied data as legitimate LangChain objects during deserialization (converting stored data back into usable form).

Fix: Update to LangChain version 0.3.81 or version 1.2.5, where this issue has been patched.

NVD/CVE Database
Prev1...225226227228229...371Next