aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3223 items

CVE-2025-65098: Typebot is an open-source chatbot builder. In versions prior to 3.13.2, client-side script execution in Typebot allows s…

high · vulnerability
security · privacy
Jan 22, 2026
CVE-2025-65098

Typebot, an open-source chatbot builder, has a vulnerability in versions before 3.13.2 where malicious chatbots can execute JavaScript (code that runs in a user's browser) to steal stored credentials like OpenAI API keys and passwords. The vulnerability exists because an API endpoint returns plaintext credentials without checking if the person requesting them actually owns them.

Fix: Update to Typebot version 3.13.2, which fixes the issue.

NVD/CVE Database
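The flaw class here — an endpoint that returns stored secrets without checking ownership — can be sketched in a few lines. This is an illustrative Python model of the pattern, not Typebot's actual TypeScript code; every name is hypothetical:

```python
# Hypothetical credential store; "secret" stands in for an OpenAI API key.
CREDENTIALS = {
    "cred-1": {"owner": "alice", "secret": "sk-alice-openai-key"},
}

def get_credential_insecure(cred_id, requester):
    # Vulnerable pattern: look up the credential by ID and return it in
    # plaintext without checking who is asking.
    return CREDENTIALS[cred_id]["secret"]

def get_credential_secure(cred_id, requester):
    # Fixed pattern: verify ownership before releasing the secret
    # (a real API would respond 403/404 rather than returning None).
    record = CREDENTIALS.get(cred_id)
    if record is None or record["owner"] != requester:
        return None
    return record["secret"]
```

Any requester (here, "mallory") can pull "alice"'s key from the insecure variant; the ownership check closes that off.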

The Next Frontier of Runtime Assembly Attacks: Leveraging LLMs to Generate Phishing JavaScript in Real Time

info · news
security · research

CVE-2026-24055: Langfuse is an open source large language model engineering platform. In versions 3.146.0 and below, the /api/public/sla…

high · vulnerability
security
Jan 21, 2026
CVE-2026-24055

Langfuse versions 3.146.0 and earlier have a security flaw in the Slack integration endpoint that doesn't properly verify users before connecting their Slack workspace to a project. An attacker can exploit this to connect their own Slack workspace to any project without permission, potentially gaining access to prompt changes or replacing automation integrations (configurations that automatically perform tasks when triggered). This vulnerability affects the Prompt Management feature, which stores AI prompts that can be modified.

CVE-2026-22807: vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, …

high · vulnerability
security
Jan 21, 2026
CVE-2026-22807

vLLM (a system for running and serving large language models) had a security flaw in versions 0.10.1 through 0.13.x where it automatically loaded code from model repositories without checking if that code was trustworthy, allowing attackers to run malicious Python commands on the server when a model loads. This vulnerability doesn't require the attacker to have access to the API or send requests; they just need to control which model repository vLLM tries to load from.
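The underlying pattern — executing code shipped inside a model repository unless the operator explicitly opts in — can be illustrated with a toy loader. This is a sketch of the flaw class and its fix shape, not vLLM's actual loading path:

```python
def load_model_insecure(repo, env):
    # Vulnerable pattern: whatever Python the repo ships runs at load time.
    exec(repo["modeling_code"], env)
    return True

def load_model_secure(repo, env, trust_remote_code=False):
    # Fixed pattern: refuse repo-supplied code unless explicitly trusted.
    if repo.get("modeling_code") and not trust_remote_code:
        return False  # load rejected
    if repo.get("modeling_code"):
        exec(repo["modeling_code"], env)
    return True
```

This mirrors the Hugging Face convention of defaulting `trust_remote_code` to off, so a malicious repository cannot run code merely by being selected for loading.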

CVE-2026-21852: Claude Code is an agentic coding tool. Prior to version 2.0.65, a vulnerability in Claude Code's project-load flow allowed …

high · vulnerability
security
Jan 21, 2026
CVE-2026-21852

Claude Code (an agentic coding tool, meaning an AI that can write and modify code) had a vulnerability before version 2.0.65 where malicious code repositories could steal users' API keys (secret authentication tokens). An attacker could hide a settings file in a repository that redirects API requests to their own server, and Claude Code would send the user's API key there before showing a trust confirmation prompt.
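The bug is an ordering problem: repo-local settings were honored before the user confirmed trust. A minimal sketch of the fixed ordering — names and URLs are illustrative, not Claude Code's internals:

```python
def resolve_api_base(repo_settings, trusted):
    # Until the user has confirmed trust in the repository, ignore any
    # API-endpoint override its settings file declares, so no request
    # (carrying the API key) can be redirected to an attacker's server.
    default = "https://api.example.com"  # stand-in for the real endpoint
    if not trusted:
        return default
    return repo_settings.get("api_base", default)
```

With the check in place, a malicious `api_base` in a cloned repository has no effect until the trust prompt has been answered.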

CVE-2025-66960: An issue in ollama v.0.12.10 allows a remote attacker to cause a denial of service via the readGGUFV1String function in fs/ggml/gguf.go.

high · vulnerability
security
Jan 21, 2026
CVE-2025-66960

CVE-2025-66960 is a vulnerability in Ollama v.0.12.10 where a remote attacker can cause a denial of service (making a service unavailable by overwhelming it) by sending malicious GGUF metadata (a file format used in machine learning). The issue is in the readGGUFV1String function, which reads string length data from untrusted sources without properly validating it.
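The fix shape for this bug class is to validate an attacker-supplied length before using it. Below is a simplified Python reader for GGUF-style length-prefixed strings (the real code is Go, in fs/ggml/gguf.go; the cap value is arbitrary):

```python
import struct

MAX_STRING_LEN = 1 << 20  # 1 MiB sanity cap; illustrative

def read_len_prefixed_string(buf, offset):
    # GGUF strings are a little-endian uint64 length followed by the bytes.
    # Checking the length against the cap and the remaining buffer prevents
    # huge allocations or out-of-range reads from an attacker-chosen value.
    (n,) = struct.unpack_from("<Q", buf, offset)
    offset += 8
    if n > MAX_STRING_LEN or n > len(buf) - offset:
        return None, offset  # invalid; a real parser would raise an error
    return buf[offset:offset + n].decode("utf-8"), offset + n
```

A crafted file declaring a multi-gigabyte string is rejected up front instead of driving the process out of memory.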

CVE-2025-66959: An issue in ollama v.0.12.10 allows a remote attacker to cause a denial of service via the GGUF decoder

high · vulnerability
security
Jan 21, 2026
CVE-2025-66959

CVE-2025-66959 is a vulnerability in ollama v.0.12.10 that allows a remote attacker to cause a denial of service (making a service unavailable by overwhelming it) through the GGUF decoder (the part of the software that reads GGUF format files). The vulnerability stems from improper input validation and uncontrolled resource consumption in how the decoder processes data.

Copyright Kills Competition

info · regulatory
policy
Jan 21, 2026

The article argues that stronger copyright laws, often promoted as protecting creators from big tech, actually concentrate power among large corporations and create barriers that prevent competition and innovation. In the AI context specifically, requiring developers to license training data would be so expensive that only the largest companies could afford to build AI models, reducing competition and ultimately harming consumers through higher costs and worse services.

CVE-2025-69285: SQLBot is an intelligent data query system based on a large language model and RAG. Versions prior to 1.5.0 contain a missing authentication vulnerability …

medium · vulnerability
security
Jan 21, 2026
CVE-2025-69285

SQLBot is a data query system that uses a large language model and RAG (retrieval-augmented generation, where an AI pulls in external documents to answer questions) to help users query databases. Versions before 1.5.0 have a missing authentication vulnerability in a file upload endpoint that allows attackers without login credentials to upload Excel or CSV files and insert data directly into the database, because the endpoint was added to a whitelist that skips security checks.
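The root cause — an endpoint added to an authentication exemption list — is a common middleware pitfall. An illustrative sketch (paths and names are hypothetical, not SQLBot's code):

```python
# Endpoints the auth middleware skips; the upload route is the mistake.
AUTH_EXEMPT = {"/health", "/login", "/datasource/upload"}

def handle(path, user=None):
    # Simplified middleware: exempt paths bypass the authentication check
    # entirely, so whatever handler sits behind them runs unauthenticated.
    if path not in AUTH_EXEMPT and user is None:
        return 401  # unauthenticated request rejected
    return 200      # request handled
```

Removing the route from the exemption set restores the check, which is the effect of the 1.5.0 fix.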

v0.14.13

low · news
security
Jan 21, 2026

LlamaIndex version 0.14.13 is a release that includes multiple updates across its core library and integrations, featuring new capabilities like early stopping in agent workflows, token-based code splitting, and distributed data ingestion via RayIngestionPipeline. The release also includes several bug fixes, such as correcting error handling in aggregation functions and fixing async integration issues, plus security improvements that removed exposed API keys from notebook outputs.

Generative Artificial Intelligence for Knowledge-Driven Industries: Leveraging Collective Intelligence to Address Discourse Patterns and Sectoral Diffusion

info · research · Peer-Reviewed
research

Generative Artificial Intelligence in Information Systems Education: Benefits, Challenges and Recommendations

info · research · Peer-Reviewed
research

CVE-2025-33233: NVIDIA Merlin Transformers4Rec for all platforms contains a vulnerability where an attacker could cause code injection.

highvulnerability
security
Jan 20, 2026
CVE-2025-33233

NVIDIA Merlin Transformers4Rec contains a code injection vulnerability (CWE-94, a weakness where attackers can trick software into running malicious code) that could let attackers execute arbitrary code, gain elevated permissions, steal information, or modify data. The vulnerability affects all platforms running this software. A CVSS severity score has not yet been assigned by NIST.

The Impact of Digital Technology Intensity on Greenhouse Gas Emissions and Natural Resources Consumption

info · research · Peer-Reviewed
research

CVE-2026-23842: ChatterBot is a machine learning, conversational dialog engine for creating chat bots. ChatterBot versions up to 1.2.10 …

high · vulnerability
security
Jan 19, 2026
CVE-2026-23842

ChatterBot versions up to 1.2.10 have a vulnerability that causes denial-of-service (when a service becomes unavailable due to being overwhelmed), triggered when multiple concurrent calls to the get_response() method exhaust the SQLAlchemy connection pool (a group of reusable database connections). The service becomes unavailable and requires manual restart to recover.
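The mitigation shape for pool-exhaustion bugs is a bounded pool that fails fast instead of hanging. Here is a stdlib-only sketch of that shape (ChatterBot's real storage layer uses SQLAlchemy, where the equivalent knobs are `pool_size`, `max_overflow`, and `pool_timeout` on `create_engine`):

```python
import queue

class BoundedPool:
    # Toy connection pool: a fixed number of "connections" and a checkout
    # timeout, so concurrent callers get a quick failure rather than
    # blocking forever once the pool is exhausted.
    def __init__(self, size, timeout):
        self._q = queue.Queue()
        self._timeout = timeout
        for i in range(size):
            self._q.put(f"conn-{i}")

    def acquire(self):
        try:
            return self._q.get(timeout=self._timeout)
        except queue.Empty:
            return None  # pool exhausted; caller can retry or surface an error

    def release(self, conn):
        self._q.put(conn)
```

The essential property is that `acquire` always returns, so a burst of concurrent `get_response()`-style calls degrades into fast failures rather than a service that needs a manual restart.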

Securing Symmetric Encryption Based on Substitution-Permutation Network Against White-Box Attacks

info · research · Peer-Reviewed
security

Minting Next.js Authentication Cookies

info · news
security
Jan 15, 2026

An attacker who exploits a React2Shell vulnerability (a deserialization flaw allowing arbitrary code execution) in a Next.js application can steal the NEXTAUTH_SECRET environment variable and use it to mint forged authentication cookies, gaining persistent access as any user. The attacker only needs this one secret value to create valid session tokens because next-auth uses HKDF (HMAC-based Key Derivation Function, which derives encryption keys from a master secret) with predictable salt values based on cookie names.
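Why does one secret suffice? HKDF (RFC 5869) is deterministic: the same master secret, salt, and info always yield the same key, so an attacker who knows the secret and the predictable, cookie-name-based salt can derive the session key offline. A stdlib sketch of HKDF-SHA256 (the parameter layout is simplified relative to next-auth's actual usage):

```python
import hashlib, hmac

def hkdf_sha256(master_secret, salt, info, length=32):
    # RFC 5869: extract a pseudorandom key, then expand it to `length` bytes.
    prk = hmac.new(salt, master_secret, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]
```

Deriving twice with the same inputs gives an identical key — exactly what lets a stolen NEXTAUTH_SECRET mint valid cookies — while rotating the secret invalidates every derived key at once.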

A 0-click exploit chain for the Pixel 9 Part 2: Cracking the Sandbox with a Big Wave

info · news
security
Jan 14, 2026

A researcher discovered three bugs in the BigWave driver on Pixel 9 phones, including one that allows escaping the mediacodec sandbox (a restricted environment where apps run with limited permissions) to gain kernel arbitrary read/write access. The most dangerous bug is a use-after-free vulnerability (accessing memory that has already been freed), which occurs when a worker thread continues processing a job after the file descriptor managing it has been closed and its memory destroyed.

A 0-click exploit chain for the Pixel 9 Part 1: Decoding Dolby

info · news
security
Jan 14, 2026

Google's security team discovered a critical vulnerability (CVE-2025-54957) in the Dolby Unified Decoder, a library that processes audio formats on Android phones. The vulnerability is dangerous because AI features automatically decode incoming audio messages without user interaction, putting the decoder in the 0-click attack surface (meaning attackers can exploit it without users taking any action). Researchers demonstrated a complete exploit chain on the Pixel 9 that chains multiple vulnerabilities together to gain control of the device, highlighting how media decoder bugs can be practically weaponized on modern Android phones.

CVE-2026-22708: Cursor is a code editor built for programming with AI. Prior to 2.3, when the Cursor Agent is running in Auto-Run Mode with …

critical · vulnerability
security
Jan 14, 2026
CVE-2026-22708

Cursor is a code editor designed for programming with AI. Before version 2.3, when the Cursor Agent runs in Auto-Run Mode with Allowlist mode enabled (a security setting that restricts which commands can run), attackers could bypass this protection by using prompt injection (tricking the AI by hiding instructions in its input) to execute shell built-ins (basic operating system commands) and modify environment variables (settings that affect how programs behave). This vulnerability allows attackers to compromise the shell environment without user approval.
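Why do leading environment assignments defeat a command allowlist? Below is a sketch of a naive checker that, like a shell, skips leading NAME=value words before checking the command word, versus a strict one that refuses them outright (illustrative logic, not Cursor's implementation):

```python
import shlex

ALLOWLIST = {"ls", "cat", "echo"}

def naive_allow(command):
    # Mimics shell semantics: skip leading NAME=value env assignments,
    # then check the remaining command word against the allowlist.
    tokens = shlex.split(command)
    while tokens and "=" in tokens[0] and tokens[0].split("=", 1)[0].isidentifier():
        tokens.pop(0)
    return bool(tokens) and tokens[0] in ALLOWLIST

def strict_allow(command):
    # Safer: the first raw token must itself be allowlisted, and no token
    # may smuggle in an environment assignment.
    tokens = shlex.split(command)
    if not tokens or tokens[0] not in ALLOWLIST:
        return False
    return not any("=" in t and t.split("=", 1)[0].isidentifier() for t in tokens)
```

The naive check approves `LD_PRELOAD=/tmp/evil.so ls` because the command word is the allowlisted `ls`, even though the injected assignment changes what actually executes.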

The Next Frontier of Runtime Assembly Attacks (Jan 22, 2026):

Attackers can use large language models (LLMs, AI systems trained on vast amounts of text to generate human-like responses) to create phishing pages that appear safe at first but transform into malicious sites after a victim visits them. The attack works by having a webpage secretly request the LLM to generate malicious JavaScript (code that runs in web browsers) using carefully crafted prompts that trick the AI into ignoring its safety rules, then assembling and running this code inside the victim's browser in real time. Because the malicious code is generated fresh each time and comes from trusted AI services, it bypasses traditional network security checks.

Fix: The source explicitly recommends runtime behavioral analysis to detect and block malicious activity at the point of execution within the browser. Palo Alto Networks customers are advised to use Advanced URL Filtering, Prisma AIRS, and Prisma Browser with Advanced Web Protection. Organizations are also encouraged to use the Unit 42 AI Security Assessment to help ensure safe AI use and development.

Palo Alto Unit 42

Fix (Langfuse, CVE-2026-24055): This issue has been fixed in version 3.147.0.

NVD/CVE Database

Fix (vLLM, CVE-2026-22807): Upgrade to vLLM version 0.14.0, which fixes this issue.

NVD/CVE Database

Fix (Claude Code, CVE-2026-21852): Update Claude Code to version 2.0.65 or later. The source states: 'Users on standard Claude Code auto-update have received this fix already. Users performing manual updates are advised to update to version 2.0.65, which contains a patch, or to the latest version.'

NVD/CVE Database
CVE-2025-66960 and CVE-2025-66959: NVD/CVE Database

Copyright Kills Competition: EFF Deeplinks Blog

Fix (SQLBot, CVE-2025-69285): Update to version 1.5.0 or later, where the vulnerability has been fixed.

NVD/CVE Database

LlamaIndex v0.14.13: LlamaIndex Security Releases
Generative Artificial Intelligence for Knowledge-Driven Industries (Jan 21, 2026):

This research analyzes how discussions about Generative AI spread across different industries (such as media, healthcare, and finance) in the six months after ChatGPT's release, using social media data and innovation theory. Different industries had different concerns: media and marketing focused on content generation with positive views, while healthcare and finance were more cautious and focused on analysis. Misinformation was the biggest concern overall, and emotional reactions (sentiment) were the main factor driving how quickly information about AI spread between people.

AIS eLibrary (Journal of AIS, CAIS, etc.)
Generative Artificial Intelligence in Information Systems Education (Jan 21, 2026):

Generative artificial intelligence (GAI, AI systems that create new text, images, or code) is significantly changing how information systems are taught in universities. IS educators are weighing both the benefits and risks of GAI, including concerns about academic integrity (students using AI to cheat), and are developing recommendations for how to responsibly teach with and about GAI in the classroom.

AIS eLibrary (Journal of AIS, CAIS, etc.)
CVE-2025-33233: NVD/CVE Database
The Impact of Digital Technology Intensity on Greenhouse Gas Emissions and Natural Resources Consumption (Jan 20, 2026):

This research paper analyzes how investment in digital technologies, including AI, affects companies' greenhouse gas emissions and natural resource use. Companies investing in these technologies tend to reduce their emissions and consume fewer natural resources, suggesting that digital tools can help address environmental challenges.

AIS eLibrary (Journal of AIS, CAIS, etc.)

Fix (ChatterBot, CVE-2026-23842): Version 1.2.11 fixes the issue.

NVD/CVE Database
Securing Symmetric Encryption Based on Substitution-Permutation Network Against White-Box Attacks (Jan 15, 2026):

This paper addresses white-box attacks (scenarios where attackers can see all the inner workings of an encryption system and control the computer it runs on), which are harder to defend against than black-box attacks (where attackers cannot see the implementation). The authors propose a method to protect symmetric encryption algorithms built on substitution-permutation networks (a common encryption structure that substitutes and rearranges data) by adding secret components to lookup tables, making the encryption stronger without changing the final encrypted message.

IEEE Xplore (Security & AI Journals)
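The lookup-table idea behind such schemes can be shown with a toy two-round construction: compose the first table with a secret random bijection g and the second with g⁻¹, so the published tables reveal neither S-box while the composed mapping is unchanged. This is a toy sketch of the general white-box encoding idea, not the paper's exact scheme:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

S1 = list(range(256)); random.shuffle(S1)  # first-round S-box (secret)
S2 = list(range(256)); random.shuffle(S2)  # second-round S-box (secret)

g = list(range(256)); random.shuffle(g)    # secret internal encoding
g_inv = [0] * 256
for x, y in enumerate(g):
    g_inv[y] = x

T1 = [g[S1[x]] for x in range(256)]        # published table: g ∘ S1
T2 = [S2[g_inv[x]] for x in range(256)]    # published table: S2 ∘ g⁻¹

def encrypt_plain(x):
    return S2[S1[x]]

def encrypt_encoded(x):
    # The secret encodings cancel out, so ciphertexts are identical.
    return T2[T1[x]]
```

An attacker inspecting T1 sees g ∘ S1 rather than S1 itself, yet the composed cipher output never changes — the property the summary describes as "making the encryption stronger without changing the final encrypted message."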

Fix (Minting Next.js Authentication Cookies): Ensure all secrets are rotated regularly, including the NEXTAUTH_SECRET or the newer AUTH_SECRET. The source also recommends these detection approaches: log the JWT ID on every session and alert on duplicates from different IP addresses; identify impossible travel by users; monitor for sessions without corresponding login events in auth logs; and watch for off-hours access or unusual user-agent strings.

Embrace The Red

Fix (Pixel 9 0-click chain, Part 2): Fixes were made available for all three bugs on January 5, 2026.

Google Project Zero

Fix (Pixel 9 0-click chain, Part 1): The vulnerabilities discussed in these posts were fixed as of January 5, 2026.

Google Project Zero

Fix (Cursor, CVE-2026-22708): This vulnerability is fixed in Cursor 2.3.

NVD/CVE Database