All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
Typebot, an open-source chatbot builder, has a vulnerability in versions before 3.13.2 where malicious chatbots can execute JavaScript (code that runs in a user's browser) to steal stored credentials like OpenAI API keys and passwords. The vulnerability exists because an API endpoint returns plaintext credentials without checking if the person requesting them actually owns them.
Fix: Update to Typebot version 3.13.2, which fixes the issue.
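The root cause is a classic missing ownership check: the endpoint looks up a credential by ID and returns it without asking who is requesting it. Below is a minimal sketch of the flawed pattern and its fix; this is not Typebot's actual code, and the names (`CREDENTIALS`, `get_credential_*`) are hypothetical.

```python
# Hypothetical sketch of the bug class: an endpoint that returns stored
# credentials by ID without verifying that the requester owns them.
CREDENTIALS = {
    "cred-1": {"owner": "alice", "secret": "sk-alice-openai-key"},
}

def get_credential_vulnerable(cred_id, requester):
    # Missing ownership check: any requester can read any credential.
    return CREDENTIALS[cred_id]["secret"]

def get_credential_fixed(cred_id, requester):
    cred = CREDENTIALS[cred_id]
    # Authorization check: only the owner may read the plaintext secret.
    if cred["owner"] != requester:
        raise PermissionError("requester does not own this credential")
    return cred["secret"]
```

In the vulnerable variant, a malicious chatbot that can reach the endpoint with a known or guessable credential ID obtains the plaintext secret directly.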
NVD/CVE Database
Langfuse versions 3.146.0 and earlier have a security flaw in the Slack integration endpoint that doesn't properly verify users before connecting their Slack workspace to a project. An attacker can exploit this to connect their own Slack workspace to any project without permission, potentially gaining access to prompt changes or replacing automation integrations (configurations that automatically perform tasks when triggered). This vulnerability affects the Prompt Management feature, which stores AI prompts that can be modified.
vLLM (a system for running and serving large language models) had a security flaw in versions 0.10.1 through 0.13.x where it automatically loaded code from model repositories without checking if that code was trustworthy, allowing attackers to run malicious Python commands on the server when a model loads. This vulnerability doesn't require the attacker to have access to the API or send requests; they just need to control which model repository vLLM tries to load from.
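The general failure mode can be illustrated with a toy loader; this is an illustrative sketch only, not vLLM's actual implementation, and the file name `modeling_custom.py` and the `trust_remote_code` flag are assumptions used for the example. The vulnerable behavior corresponds to executing repo-supplied code unconditionally; the mitigation is an explicit opt-in.

```python
# Illustrative sketch: a model loader that may execute Python shipped
# inside a model repository. Auto-executing repo code is the failure
# mode; requiring an explicit opt-in flag is the mitigation.
def load_model(repo_files, trust_remote_code=False):
    custom = repo_files.get("modeling_custom.py")
    if custom is not None:
        if not trust_remote_code:
            raise RuntimeError(
                "repository ships custom code; refusing to execute it "
                "without trust_remote_code=True"
            )
        namespace = {}
        exec(custom, namespace)  # attacker-controlled if the repo is malicious
        return namespace["build_model"]()
    return "standard model"
```

Anyone who controls the repository contents controls what `exec` runs, which is why the opt-in must default to off.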
Claude Code (an agentic coding tool, meaning an AI that can write and modify code) had a vulnerability before version 2.0.65 where malicious code repositories could steal users' API keys (secret authentication tokens). An attacker could hide a settings file in a repository that redirects API requests to their own server, and Claude Code would send the user's API key there before showing a trust confirmation prompt.
CVE-2025-66960 is a vulnerability in Ollama v0.12.10 where a remote attacker can cause a denial of service (making a service unavailable by overwhelming it) by sending malicious GGUF metadata (GGUF is a file format used in machine learning). The issue is in the readGGUFV1String function, which reads string length data from untrusted sources without properly validating it.
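The bug class is trusting a length prefix from an untrusted file. Below is a hedged sketch of safe decoding for a GGUF-like string field (a u64 little-endian length followed by that many bytes); the function name mirrors the one in the advisory, but the code is a simplified illustration, not Ollama's Go implementation.

```python
import struct

def read_gguf_v1_string(buf, offset):
    """Sketch of safe decoding for a GGUF-like string: a u64 length
    prefix followed by that many bytes. The vulnerability class is
    trusting the length field without bounds-checking it first."""
    if offset + 8 > len(buf):
        raise ValueError("truncated length prefix")
    (length,) = struct.unpack_from("<Q", buf, offset)
    # Validate before allocating or reading: a hostile file can claim a
    # multi-gigabyte string to exhaust memory (denial of service).
    if length > len(buf) - offset - 8:
        raise ValueError("declared string length exceeds remaining data")
    start = offset + 8
    return buf[start:start + length].decode("utf-8"), start + length
```

Without the bounds check, the declared length drives an allocation or read that the actual file contents never justify.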
CVE-2025-66959 is a vulnerability in Ollama v0.12.10 that allows a remote attacker to cause a denial of service (making a service unavailable by overwhelming it) through the GGUF decoder (the part of the software that reads GGUF format files). The vulnerability stems from improper input validation and uncontrolled resource consumption in how the decoder processes data.
The article argues that stronger copyright laws, often promoted as protecting creators from big tech, actually concentrate power among large corporations and create barriers that prevent competition and innovation. In the AI context specifically, requiring developers to license training data would be so expensive that only the largest companies could afford to build AI models, reducing competition and ultimately harming consumers through higher costs and worse services.
SQLBot is a data query system that uses a large language model and RAG (retrieval-augmented generation, where an AI pulls in external documents to answer questions) to help users query databases. Versions before 1.5.0 have a missing authentication vulnerability in a file upload endpoint that allows attackers without login credentials to upload Excel or CSV files and insert data directly into the database, because the endpoint was added to a whitelist that skips security checks.
LlamaIndex version 0.14.13 is a release that includes multiple updates across its core library and integrations, featuring new capabilities like early stopping in agent workflows, token-based code splitting, and distributed data ingestion via RayIngestionPipeline. The release also includes several bug fixes, such as correcting error handling in aggregation functions and fixing async integration issues, plus security improvements that removed exposed API keys from notebook outputs.
NVIDIA Merlin Transformers4Rec contains a code injection vulnerability (CWE-94, a weakness where attackers can trick software into running malicious code) that could let attackers execute arbitrary code, gain elevated permissions, steal information, or modify data. The vulnerability affects all platforms running this software. A CVSS severity score has not yet been assigned by NIST.
ChatterBot versions up to 1.2.10 have a vulnerability that causes denial-of-service (when a service becomes unavailable due to being overwhelmed), triggered when multiple concurrent calls to the get_response() method exhaust the SQLAlchemy connection pool (a group of reusable database connections). The service becomes unavailable and requires manual restart to recover.
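The mechanism is easy to reproduce with a toy pool; this is a simplified stand-in, not SQLAlchemy's actual pool implementation. If callers acquire connections and never release them (as the concurrent `get_response()` calls effectively do), every later caller blocks until its timeout and the service stops responding.

```python
import threading

class TinyConnectionPool:
    """Toy stand-in for a SQLAlchemy-style connection pool: a fixed
    number of slots, acquired with a timeout. Callers that never
    release a slot exhaust the pool, and later calls time out."""

    def __init__(self, size):
        self._slots = threading.Semaphore(size)

    def acquire(self, timeout):
        # threading.Semaphore.acquire returns False when the timeout
        # expires without a slot becoming available.
        if not self._slots.acquire(timeout=timeout):
            raise TimeoutError("connection pool exhausted")

    def release(self):
        self._slots.release()
```

With a pool of size N, the (N+1)-th unreleased acquisition fails, which is why a connection leak under concurrency manifests as a denial of service requiring a restart.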
An attacker who exploits a React2Shell vulnerability (a deserialization flaw allowing arbitrary code execution) in a Next.js application can steal the NEXTAUTH_SECRET environment variable and use it to mint forged authentication cookies, gaining persistent access as any user. The attacker only needs this one secret value to create valid session tokens because next-auth uses HKDF (HMAC-based Key Derivation Function, which derives encryption keys from a master secret) with predictable salt values based on cookie names.
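HKDF's determinism is the crux: the same secret, salt, and info always yield the same derived key, so a predictable salt means the master secret alone is sufficient. Below is a minimal RFC 5869 HKDF-SHA256 in standard-library Python; the specific salt and info byte strings are illustrative assumptions, not next-auth's exact parameters.

```python
import hashlib
import hmac

def hkdf_sha256(secret, salt, info, length=32):
    """Minimal RFC 5869 HKDF-SHA256 (extract-then-expand)."""
    # Extract: PRK = HMAC-SHA256(salt, secret)
    prk = hmac.new(salt, secret, hashlib.sha256).digest()
    # Expand: T(i) = HMAC-SHA256(PRK, T(i-1) | info | i)
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Because the salt and info are predictable (derived from the cookie
# name rather than a per-user random value), anyone holding the stolen
# master secret can re-derive the same encryption key offline and mint
# valid session cookies.
```

The offline nature of this derivation is what makes rotation (see the fix below in the digest) the only real remediation once the secret leaks.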
A researcher discovered three bugs in the BigWave driver on Pixel 9 phones, including one that allows escaping the mediacodec sandbox (a restricted environment where apps run with limited permissions) to gain kernel arbitrary read/write access. The most dangerous bug is a use-after-free vulnerability (accessing memory that has already been freed), which occurs when a worker thread continues processing a job after the file descriptor managing it has been closed and its memory destroyed.
Google's security team discovered a critical vulnerability (CVE-2025-54957) in the Dolby Unified Decoder, a library that processes audio formats on Android phones. The vulnerability is dangerous because AI features automatically decode incoming audio messages without user interaction, putting the decoder in the 0-click attack surface (meaning attackers can exploit it without users taking any action). Researchers demonstrated a complete exploit chain on the Pixel 9 that chains multiple vulnerabilities together to gain control of the device, highlighting how media decoder bugs can be practically weaponized on modern Android phones.
Cursor is a code editor designed for programming with AI. Before version 2.3, when the Cursor Agent runs in Auto-Run Mode with Allowlist mode enabled (a security setting that restricts which commands can run), attackers could bypass this protection by using prompt injection (tricking the AI by hiding instructions in its input) to execute shell built-ins (basic operating system commands) and modify environment variables (settings that affect how programs behave). This vulnerability allows attackers to compromise the shell environment without user approval.
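A first-token allowlist illustrates why shell built-ins slip through; this is a hypothetical sketch of the bug class, not Cursor's actual allowlist logic. A check that only inspects the command name approves a compound command whose trailing part mutates the environment the next "allowed" command runs in.

```python
ALLOWLIST = {"ls", "cat", "git"}

def naive_is_allowed(command):
    """Hypothetical first-token allowlist check: matching only the
    command name misses shell built-ins and compound commands."""
    return command.split()[0] in ALLOWLIST

def stricter_is_allowed(command):
    """Still a sketch, not a complete policy: reject shell
    metacharacters and assignments before checking the name."""
    if any(ch in command for ch in ";|&$`") or "=" in command:
        return False
    return command.split()[0] in ALLOWLIST
```

The naive check approves `ls -la && export PATH=/tmp/evil` because the first word is allowlisted, even though the built-in after `&&` rewrites an environment variable.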
Attackers can use large language models (LLMs, AI systems trained on vast amounts of text to generate human-like responses) to create phishing pages that appear safe at first but transform into malicious sites after a victim visits them. The attack works by having a webpage secretly request the LLM to generate malicious JavaScript (code that runs in web browsers) using carefully crafted prompts that trick the AI into ignoring its safety rules, then assembling and running this code inside the victim's browser in real time. Because the malicious code is generated fresh each time and comes from trusted AI services, it bypasses traditional network security checks.
Fix: The source explicitly recommends runtime behavioral analysis to detect and block malicious activity at the point of execution within the browser. Palo Alto Networks customers are advised to use Advanced URL Filtering, Prisma AIRS, and Prisma Browser with Advanced Web Protection. Organizations are also encouraged to use the Unit 42 AI Security Assessment to help ensure safe AI use and development.
Palo Alto Unit 42
Fix: This issue has been fixed in version 3.147.0.
NVD/CVE Database
Fix: Upgrade to vLLM version 0.14.0, which fixes this issue.
NVD/CVE Database
Fix: Update Claude Code to version 2.0.65 or later. The source states: 'Users on standard Claude Code auto-update have received this fix already. Users performing manual updates are advised to update to version 2.0.65, which contains a patch, or to the latest version.'
NVD/CVE Database
Fix: Update to version 1.5.0 or later, where the vulnerability has been fixed.
NVD/CVE Database
This research analyzes how discussions about Generative AI spread across different industries (like media, healthcare, and finance) in the six months after ChatGPT's release, using social media data and innovation theory. The study found that industries differed in their concerns: media and marketing focused on content generation with positive views, while healthcare and finance were more cautious and focused on analysis. Misinformation was the biggest concern overall, and the research showed that emotional reactions (sentiment) were the main factor driving how quickly information about AI spread between people.
Generative artificial intelligence (GAI, AI systems that create new text, images, or code) is significantly changing how information systems are taught in universities. IS educators are discussing both the benefits and risks of GAI, including concerns about academic integrity (students using AI to cheat), and they are developing recommendations for how to responsibly teach with and about GAI in the classroom.
This research paper analyzes how companies that invest in digital technologies, including AI, affect their greenhouse gas emissions and natural resource use. The study found that companies investing in these technologies tend to reduce their emissions and consume fewer natural resources, suggesting that digital tools can help address environmental challenges.
Fix: Version 1.2.11 fixes the issue.
NVD/CVE Database
This paper addresses white-box attacks (scenarios where attackers can see all the inner workings of an encryption system and control the computer it runs on), which are harder to defend against than black-box attacks (where attackers cannot see the implementation). The authors propose a new method to protect symmetric encryption algorithms that use substitution-permutation networks (a common encryption structure that substitutes and rearranges data) by adding secret components to lookup tables, making the encryption stronger without changing the final encrypted message.
Fix: Ensure all secrets are rotated regularly, including the NEXTAUTH_SECRET or the newer AUTH_SECRET. The source also recommends these detection approaches: log the JWT ID on every session and alert on duplicates from different IP addresses; identify impossible travel by users; monitor for sessions without corresponding login events in auth logs; and watch for off-hours access or unusual user-agent strings.
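The first detection approach above (alert on duplicate JWT IDs from different IP addresses) can be sketched as a simple log-scanning routine; the event shape `(jti, ip)` is an assumption for illustration, not a specific product's log format.

```python
from collections import defaultdict

def find_reused_session_ids(events):
    """Given (jti, ip) session-log events, flag any JWT ID observed
    from more than one source IP, which suggests a stolen or forged
    session cookie being replayed elsewhere."""
    ips_by_jti = defaultdict(set)
    for jti, ip in events:
        ips_by_jti[jti].add(ip)
    return {jti for jti, ips in ips_by_jti.items() if len(ips) > 1}
```

Repeated sightings from the same IP are normal session reuse; only a second distinct IP for the same token ID triggers the alert.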
Embrace The Red
Fix: Patches for all three bugs were made available on January 5, 2026.
Google Project Zero
Fix: The vulnerabilities discussed in these posts were fixed as of January 5, 2026.
Google Project Zero
Fix: This vulnerability is fixed in version 2.3.
NVD/CVE Database