aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

Page 230 of 371
01

CVE-2025-63389: A critical authentication bypass vulnerability exists in the Ollama platform's API endpoints in versions prior to and including v0.12.3.

security
Dec 18, 2025

CVE-2025-63389 is a critical vulnerability in Ollama (an AI platform) versions up to v0.12.3 where API endpoints (connection points for software communication) are exposed without authentication (verification of identity), allowing attackers to remotely perform unauthorized model management operations. The vulnerability stems from missing authentication checks on critical functions.
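Until a fixed release is in place, the practical mitigation for this class of exposure is to keep the API off the network and put any remote access behind an authenticating reverse proxy. A deployment sketch, assuming a standard install (OLLAMA_HOST is Ollama's documented listen-address variable; the loopback value shown is its default, made explicit):

```shell
# Bind Ollama to loopback only, so its unauthenticated API is not
# reachable from other hosts.
export OLLAMA_HOST=127.0.0.1:11434

# Equivalent systemd override for a service install
# (e.g. /etc/systemd/system/ollama.service.d/override.conf):
# [Service]
# Environment="OLLAMA_HOST=127.0.0.1:11434"
```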

NVD/CVE Database
02

CVE-2025-62998: Insertion of Sensitive Information Into Sent Data vulnerability in WP Messiah WP AI CoPilot allows Retrieve Embedded Sensitive Data.

security
Dec 18, 2025

CVE-2025-62998 is a vulnerability in WP AI CoPilot (a WordPress plugin that adds AI features) versions 1.2.7 and earlier, where sensitive information can be unintentionally included in data sent from the plugin. This is classified as CWE-201 (insertion of sensitive information into sent data), meaning the plugin may leak private or confidential data to unintended recipients.
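The standard defense for CWE-201 is to scrub secrets from any payload before it leaves the application. A minimal sketch in Python (WP AI CoPilot itself is PHP, and the field names below are hypothetical; this only illustrates the pattern):

```python
# Sketch of a CWE-201 mitigation: mask sensitive fields before a payload
# is serialized and sent. The key names are illustrative placeholders.

SENSITIVE_KEYS = {"api_key", "secret", "password", "token"}

def redact(payload: dict) -> dict:
    """Return a copy of payload with sensitive values masked."""
    clean = {}
    for key, value in payload.items():
        if key.lower() in SENSITIVE_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, dict):
            clean[key] = redact(value)  # recurse into nested objects
        else:
            clean[key] = value
    return clean

out = redact({"prompt": "hi", "api_key": "sk-123", "cfg": {"token": "t"}})
```

The masking happens at the boundary where data is sent, so internal code can keep using the full payload while nothing confidential reaches an unintended recipient.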

NVD/CVE Database
03

CVE-2025-63390: An authentication bypass vulnerability exists in AnythingLLM v1.8.5 via the /api/workspaces endpoint. The endpoint fails to enforce authentication.

security
Dec 18, 2025

AnythingLLM v1.8.5 has a vulnerability in its /api/workspaces endpoint (a web address used to access workspace data) that skips authentication checks, allowing attackers without permission to see detailed information about all workspaces, including AI model settings, system prompts (instructions given to the AI), and other configuration details. This means someone could potentially discover sensitive workspace configurations without needing to log in.
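The missing control here is an authentication check before the handler returns workspace data. A minimal sketch, assuming a bearer-token scheme (the token store and return codes are illustrative, not AnythingLLM's actual middleware):

```python
# Sketch of the control the endpoint skips: reject requests that lack a
# valid bearer token before any workspace data is serialized.

import hmac

VALID_TOKENS = {"example-session-token"}  # hypothetical session store

def authorize(headers: dict) -> int:
    """Return an HTTP status: 200 if a known bearer token is presented,
    401 otherwise. Uses hmac.compare_digest to avoid timing leaks."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return 401
    token = auth[len("Bearer "):]
    if any(hmac.compare_digest(token, t) for t in VALID_TOKENS):
        return 200
    return 401
```

A request with no header or an unknown token gets a 401 before the workspace query ever runs, which is exactly the step the vulnerable endpoint omits.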

NVD/CVE Database
04

AI Safety Newsletter #67: Trump’s preemption executive order

policy
Dec 17, 2025

President Trump issued an executive order to prevent states from regulating AI by using federal tools like funding withholding and legal challenges, aiming to replace varied state rules with a single federal framework. The order directs federal agencies, including the Attorney General and Commerce Secretary, to challenge state AI laws they view as problematic, while the FTC and FCC will issue guidance on how existing federal laws apply to AI. This action follows a year where ambitious state AI safety proposals, like New York's RAISE Act (which would require AI labs to publish safety practices and report serious incidents), were either weakened or blocked.

CAIS AI Safety Newsletter
05

Model Steganography During Model Compression

security, research
Dec 17, 2025

Researchers have developed a steganographic method (hiding secret data inside another medium) that embeds hidden messages into compressed neural network models (AI systems made smaller through techniques like quantization, pruning, or distillation). The approach allows a receiver with the correct extraction network to recover the hidden data while ordinary users remain unaware it exists, and the method maintains the model's performance in size, speed, and accuracy.
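The capacity the paper exploits can be illustrated with a much cruder trick than its learned extraction network: hiding payload bits in the least-significant bits of int8 quantized weights. A toy sketch (not the paper's method):

```python
# Toy LSB steganography over quantized weights: each int8 weight donates
# its least-significant bit as covert capacity.

def embed(weights: list[int], bits: list[int]) -> list[int]:
    """Overwrite the LSB of the first len(bits) weights with payload bits."""
    assert len(bits) <= len(weights)
    out = list(weights)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract(weights: list[int], n: int) -> list[int]:
    """Read back the first n hidden bits."""
    return [w & 1 for w in weights[:n]]

w = [12, -7, 33, 126, -128, 5]
stego = embed(w, [1, 0, 1, 1])
assert extract(stego, 4) == [1, 0, 1, 1]
```

Each weight shifts by at most one quantization step, so size, speed, and accuracy are essentially unchanged, which is what makes such channels hard to notice and why the paper's more sophisticated variant is a real supply-chain concern.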

IEEE Xplore (Security & AI Journals)
06

Trap: Mitigating Poisoning-Based Backdoor Attacks by Treating Poison With Poison

security, research
Dec 15, 2025

This research addresses backdoor attacks, where poisoned training data (maliciously altered samples inserted into a dataset) causes neural networks to behave incorrectly on specific inputs. The authors propose a defense method called Trap that detects poisoned samples early in training by recognizing they cluster separately from legitimate data, then removes the backdoor by retraining part of the model on relabeled poisoned samples, achieving very high attack detection rates with minimal accuracy loss.

Fix: The authors report that Trap reduced the average attack success rate to 0.07% while decreasing average accuracy by only 0.33% across twelve attacks on four datasets.
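The clustering observation behind the defense can be pictured with a toy outlier test: if poisoned samples form their own cluster, a distance-from-the-mean rule on some scalar feature separates them from clean data. The feature values and threshold below are invented for illustration; the paper's detector is more involved:

```python
# Toy version of "poison clusters apart from clean data": flag 1-D
# feature values far from the population mean. Real detectors work on
# high-dimensional early-training representations.

def flag_outliers(features: list[float], k: float = 1.5) -> list[bool]:
    """Flag features more than k standard deviations from the mean."""
    n = len(features)
    mean = sum(features) / n
    var = sum((x - mean) ** 2 for x in features) / n
    std = var ** 0.5 or 1e-9
    return [abs(x - mean) > k * std for x in features]

clean = [0.1, 0.2, 0.15, 0.12, 0.18, 0.11, 0.16, 0.14]
poison = [2.9, 3.1]  # a separate cluster, as the paper observes
flags = flag_outliers(clean + poison)
```

Once flagged, the suspect samples can be relabeled and used to retrain the affected part of the model, which is the second half of Trap's pipeline.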

IEEE Xplore (Security & AI Journals)
07

Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models

security, research
Dec 15, 2025

Researchers found that text-to-image diffusion models (AI systems that generate images from text descriptions) can be attacked using backdoors, which are hidden triggers in text that make the model produce unwanted outputs. This paper proposes Dynamic Attention Analysis (DAA), a new detection method that tracks how the model's attention mechanisms (the parts of the AI that focus on relevant information) change over time, since backdoor attacks create different patterns than normal operation. The method achieved strong detection results, correctly identifying backdoored samples about 79% of the time.
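One way to picture the temporal signal DAA relies on is to compare how a token's attention weight moves across diffusion timesteps. The numbers and the simple variance threshold below are invented; the paper's detector extracts much richer dynamic features:

```python
# Toy illustration of dynamic attention analysis: a trigger token that
# dominates generation can show abnormally static attention across
# timesteps, while normal tokens' attention drifts as the image forms.
# Series values and threshold are made up for illustration.

def temporal_variance(series: list[float]) -> float:
    mean = sum(series) / len(series)
    return sum((x - mean) ** 2 for x in series) / len(series)

def looks_backdoored(series: list[float], threshold: float = 0.002) -> bool:
    """Flag attention trajectories that barely move over timesteps."""
    return temporal_variance(series) < threshold

normal = [0.30, 0.18, 0.25, 0.10, 0.22]   # attention drifts as image forms
trigger = [0.41, 0.40, 0.42, 0.41, 0.40]  # attention stays locked on
```

The point is only that backdoored and benign generations trace different attention dynamics over time, which is the property DAA turns into its ~79% detection rate.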

IEEE Xplore (Security & AI Journals)
08

CVE-2025-67819: An issue was discovered in Weaviate OSS before 1.33.4. Due to a lack of validation of the fileName field in the transfer logic.

security
Dec 12, 2025

Weaviate OSS (open-source software) versions before 1.33.4 have a vulnerability where the fileName field is not properly validated in the transfer logic. An attacker who can call the GetFile method while a shard (a part of a database) is paused and the FileReplicationService (the system that copies files) is accessible could read any files that the service has permission to access.

Fix: Upgrade to Weaviate OSS version 1.33.4 or later.
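The general defense for this bug class is to canonicalize the requested file name against the permitted base directory and reject anything that escapes it. A sketch in Python (Weaviate is written in Go; this is not its actual patch):

```python
# Generic fileName validation: resolve the requested name under the
# allowed base directory and refuse '../' escapes and absolute paths.

import os

def safe_join(base: str, filename: str) -> str:
    """Join filename onto base, raising ValueError on directory escape."""
    base = os.path.realpath(base)
    target = os.path.realpath(os.path.join(base, filename))
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path escapes base directory: {filename!r}")
    return target

safe_join("/var/lib/weaviate", "shard1/segment.db")   # resolves normally
# safe_join("/var/lib/weaviate", "../../etc/passwd")  # raises ValueError
```

Resolving with realpath before the containment check matters: a naive string prefix test can be fooled by `..` segments or symlinks that the resolved path exposes.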

NVD/CVE Database
09

CVE-2025-67818: An issue was discovered in Weaviate OSS before 1.33.4. An attacker with access to insert data into the database can craft input that escapes the backup restore location.

security
Dec 12, 2025

Weaviate OSS (an open-source vector database) before version 1.33.4 has a path traversal vulnerability (a bug where an attacker can access files outside the intended directory using tricks like ../../..) that allows attackers with database write access to escape the backup restore location and create or overwrite files elsewhere on the system. This could let attackers modify critical files within the application's permissions.

Fix: Upgrade Weaviate OSS to version 1.33.4 or later.

NVD/CVE Database
10

Exploring the Agentic Metaverse’s Potential for Transforming Cybersecurity Workforce Development

research, policy
Dec 12, 2025

Researchers studied an AI-driven metaverse prototype (a 3D virtual environment enhanced with multi-agent systems, or software that can act independently) designed to train cybersecurity professionals, gathering feedback from 53 experts. The study found that this technology could create personalized, scalable training experiences but identified implementation challenges and proposed six recommendations for organizations considering adopting it.

AIS eLibrary (Journal of AIS, CAIS, etc.)