aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. AI Sec Watch was built by an Information Systems security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

01

CVE-2025-66960: An issue in ollama v.0.12.10 allows a remote attacker to cause a denial of service via the readGGUFV1String function in fs/ggml/gguf.go

security
Jan 21, 2026

CVE-2025-66960 is a vulnerability in Ollama v.0.12.10 in which a remote attacker can cause a denial of service (rendering the service unavailable) by sending malicious GGUF metadata (GGUF is a binary file format for storing machine learning models). The flaw lies in the readGGUFV1String function in fs/ggml/gguf.go, which reads string-length values from untrusted input without validating them.

NVD/CVE Database
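The advisory points at readGGUFV1String reading a declared string length from untrusted data. A minimal sketch of the safe pattern, in Python rather than Ollama's Go, with an illustrative 1 MiB cap that is not from the GGUF spec:

```python
import struct

MAX_STRING_LEN = 1 << 20  # illustrative cap, not from the GGUF spec


def read_length_prefixed_string(buf: bytes, offset: int) -> tuple[str, int]:
    """Read a u64 length-prefixed UTF-8 string, validating the declared
    length against the remaining buffer before allocating anything."""
    if offset + 8 > len(buf):
        raise ValueError("truncated length field")
    (length,) = struct.unpack_from("<Q", buf, offset)
    offset += 8
    # The vulnerable pattern trusts `length` as-is: a hostile file can
    # declare a huge length and drive a massive allocation or over-read.
    if length > MAX_STRING_LEN or offset + length > len(buf):
        raise ValueError(f"declared string length {length} exceeds bounds")
    value = buf[offset:offset + length].decode("utf-8")
    return value, offset + length
```

The key point is that the bounds check happens before any memory proportional to the declared length is touched.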
02

CVE-2025-66959: An issue in ollama v.0.12.10 allows a remote attacker to cause a denial of service via the GGUF decoder

security
Jan 21, 2026

CVE-2025-66959 is a vulnerability in ollama v.0.12.10 that allows a remote attacker to cause a denial of service (making a service unavailable by overwhelming it) through the GGUF decoder (the part of the software that reads GGUF format files). The vulnerability stems from improper input validation and uncontrolled resource consumption in how the decoder processes data.

NVD/CVE Database
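Uncontrolled resource consumption in a decoder is commonly mitigated by charging every declared size against a fixed budget before allocating. A hypothetical sketch (the `ByteBudget` class and its limit are illustrative, not taken from the ollama patch):

```python
class ByteBudget:
    """Track cumulative bytes consumed while decoding an untrusted file,
    failing fast instead of letting declared sizes drive allocation."""

    def __init__(self, limit: int):
        self.limit = limit
        self.used = 0

    def charge(self, n: int) -> None:
        # Reject the charge before allocating; self.used only grows on success.
        if n < 0 or self.used + n > self.limit:
            raise ValueError(
                f"decode budget exceeded: {self.used + n} > {self.limit}"
            )
        self.used += n
```

A decoder would call `charge()` with each declared element count or string length before allocating, so the total work is bounded regardless of what the file claims.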
03

Copyright Kills Competition

policy
Jan 21, 2026

The article argues that stronger copyright laws, often promoted as protecting creators from big tech, actually concentrate power among large corporations and create barriers that prevent competition and innovation. In the AI context specifically, requiring developers to license training data would be so expensive that only the largest companies could afford to build AI models, reducing competition and ultimately harming consumers through higher costs and worse services.

EFF Deeplinks Blog
04

CVE-2025-69285: SQLBot is an intelligent data query system based on a large language model and RAG. Versions prior to 1.5.0 contain a missing authentication vulnerability in a file upload endpoint

security
Jan 21, 2026

SQLBot is a data query system that uses a large language model and RAG (retrieval-augmented generation, where an AI pulls in external documents to answer questions) to help users query databases. Versions before 1.5.0 contain a missing-authentication vulnerability in a file upload endpoint: because the endpoint was added to a whitelist that skips security checks, attackers without login credentials can upload Excel or CSV files and insert data directly into the database.

Fix: Update to version 1.5.0 or later, where the vulnerability has been fixed.

NVD/CVE Database
05

LlamaIndex v0.14.13

security
Jan 21, 2026

LlamaIndex version 0.14.13 is a release that includes multiple updates across its core library and integrations, featuring new capabilities like early stopping in agent workflows, token-based code splitting, and distributed data ingestion via RayIngestionPipeline. The release also includes several bug fixes, such as correcting error handling in aggregation functions and fixing async integration issues, plus security improvements that removed exposed API keys from notebook outputs.

LlamaIndex Security Releases
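One of the fixes mentioned, removing exposed API keys from notebook outputs, can be approximated by scanning cell outputs for key-shaped strings. A hedged sketch (the `sk-` prefix regex is illustrative, real key formats vary by provider, and this is not LlamaIndex's actual tooling):

```python
import json
import re

# Illustrative pattern only; adjust per provider's key format.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9]{20,}")


def scrub_notebook(nb_json: str) -> str:
    """Redact API-key-like strings from a Jupyter notebook's cell outputs."""
    nb = json.loads(nb_json)
    for cell in nb.get("cells", []):
        for out in cell.get("outputs", []):
            text = out.get("text")
            if isinstance(text, list):
                out["text"] = [KEY_PATTERN.sub("[REDACTED]", t) for t in text]
            elif isinstance(text, str):
                out["text"] = KEY_PATTERN.sub("[REDACTED]", text)
    return json.dumps(nb)
```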
06

Generative Artificial Intelligence for Knowledge-Driven Industries: Leveraging Collective Intelligence to Address Discourse Patterns and Sectoral Diffusion

research
Jan 21, 2026

This research analyzes how discussions about Generative AI spread across different industries (like media, healthcare, and finance) in the six months after ChatGPT's release, using social media data and innovation theory. The study found that different industries had different concerns: media and marketing focused on content generation with positive views, while healthcare and finance were more cautious and focused on analysis. Misinformation was the biggest concern overall, and the research showed that emotional reactions (sentiment) were the main factor driving how quickly information about AI spread between people.

AIS eLibrary (Journal of AIS, CAIS, etc.)
07

Generative Artificial Intelligence in Information Systems Education: Benefits, Challenges and Recommendations

research
Jan 21, 2026

Generative artificial intelligence (GAI, AI systems that create new text, images, or code) is significantly changing how information systems are taught in universities. IS educators are discussing both the benefits and risks of GAI, including concerns about academic integrity (students using AI to cheat), and they are developing recommendations for how to responsibly teach with and about GAI in the classroom.

AIS eLibrary (Journal of AIS, CAIS, etc.)
08

CVE-2025-33233: NVIDIA Merlin Transformers4Rec for all platforms contains a vulnerability where an attacker could cause code injection.

security
Jan 20, 2026

NVIDIA Merlin Transformers4Rec contains a code injection vulnerability (CWE-94, a weakness where attackers can trick software into running malicious code) that could let attackers execute arbitrary code, gain elevated permissions, steal information, or modify data. The vulnerability affects all platforms running this software. A CVSS severity score has not yet been assigned by NIST.

NVD/CVE Database
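NVIDIA has not published the injection vector, but CWE-94 typically arises from evaluating untrusted text as code. A generic illustration of the flawed and safe patterns (not Transformers4Rec's actual code):

```python
import ast


def parse_param_unsafe(raw: str):
    # Vulnerable pattern (CWE-94): evaluating untrusted text as code.
    # An attacker-supplied string can run arbitrary Python here.
    return eval(raw)  # shown only to illustrate the flaw


def parse_param_safe(raw: str):
    # ast.literal_eval accepts only Python literals (numbers, strings,
    # lists, dicts, ...); embedded expressions or calls raise instead
    # of executing.
    return ast.literal_eval(raw)
```

When inputs are more than literals, a schema-validated parser (JSON plus explicit validation) is the usual replacement for `eval`.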
09

The Impact of Digital Technology Intensity on Greenhouse Gas Emissions and Natural Resources Consumption

research
Jan 20, 2026

This research paper analyzes how companies that invest in digital technologies, including AI, affect their greenhouse gas emissions and natural resource use. The study found that companies investing in these technologies tend to reduce their emissions and consume fewer natural resources, suggesting that digital tools can help address environmental challenges.

AIS eLibrary (Journal of AIS, CAIS, etc.)
10

CVE-2026-23842: ChatterBot is a machine learning, conversational dialog engine for creating chat bots. ChatterBot versions up to 1.2.10 are vulnerable to a denial of service triggered by concurrent calls to get_response()

security
Jan 19, 2026

ChatterBot versions up to 1.2.10 have a vulnerability that causes denial-of-service (when a service becomes unavailable due to being overwhelmed), triggered when multiple concurrent calls to the get_response() method exhaust the SQLAlchemy connection pool (a group of reusable database connections). The service becomes unavailable and requires manual restart to recover.

Fix: Version 1.2.11 fixes the issue.

NVD/CVE Database
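Pool exhaustion of this kind is typically mitigated by bounding checkouts with a timeout so requests fail fast instead of hanging until a manual restart. A minimal, framework-free sketch (the `BoundedPool` class is illustrative, not ChatterBot's or SQLAlchemy's API):

```python
import threading
from contextlib import contextmanager


class BoundedPool:
    """Minimal sketch of a connection pool whose checkouts time out
    instead of blocking forever when every slot is in use."""

    def __init__(self, size: int, timeout: float):
        self._slots = threading.Semaphore(size)
        self._timeout = timeout

    @contextmanager
    def connection(self):
        if not self._slots.acquire(timeout=self._timeout):
            raise TimeoutError("pool exhausted; failing fast instead of hanging")
        try:
            yield object()  # stand-in for a real DB connection
        finally:
            self._slots.release()  # always return the slot, even on error
```

The timeout converts silent exhaustion into a visible error the caller can retry or report, and the `finally` block guarantees slots are returned even when a request raises.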