aisecwatch.com
DashboardVulnerabilitiesNewsResearchArchiveStatsDataset
aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Navigation

VulnerabilitiesNewsResearchDigest ArchiveNewsletter ArchiveSubscribeData SourcesStatisticsDatasetAPIIntegrationsWidgetRSS Feed

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

to
Export CSV
1267 items

EFF in the Press: 2025 in Review

infonews
policy
Dec 29, 2025

The Electronic Frontier Foundation (EFF) received thousands of media mentions in 2025 while advocating for digital civil liberties, particularly regarding surveillance technologies like ALPRs (automated license plate readers, which scan vehicle plates automatically) and police use of doorbell cameras. The organization also pursued lawsuits challenging government data sharing and privacy violations, and spoke out against age-verification laws that threaten privacy and free expression.

EFF Deeplinks Blog

Can chatbots craft correct code?

infonews
safetyresearch

AI Safety Newsletter #67: Trump’s preemption executive order

inforegulatory
policy
Dec 17, 2025

President Trump issued an executive order to prevent states from regulating AI by using federal tools like funding withholding and legal challenges, aiming to replace varied state rules with a single federal framework. The order directs federal agencies, including the Attorney General and Commerce Secretary, to challenge state AI laws they view as problematic, while the FTC and FCC will issue guidance on how existing federal laws apply to AI. This action follows a year where ambitious state AI safety proposals, like New York's RAISE Act (which would require AI labs to publish safety practices and report serious incidents), were either weakened or blocked.

Thinking Outside The Box [dusted off draft from 2017]

infonews
security
Dec 16, 2025

This post describes a vulnerability in VirtualBox's NAT (network address translation, a mode that makes VM traffic look like it comes from the host computer) networking code, specifically in how it manages memory for packet data using a custom zone allocator. The vulnerability exists because safety checks that verify memory integrity use Assert() statements, which are disabled in the standard release builds of VirtualBox that users download, allowing potential exploitation.

Windows Exploitation Techniques: Winning Race Conditions with Path Lookups

infonews
security
Dec 16, 2025

This article explains race condition vulnerabilities (security gaps that occur when a system state changes between a security check and a resource access) in Windows and describes techniques to expand the narrow time window needed to exploit them. The author focuses on slowing down the Object Manager Namespace lookup process (the kernel system that finds named objects like files and events in Windows NT) by manipulating Symbolic Links (redirects in the object naming system) to create larger exploitation windows.

A look at an Android ITW DNG exploit

infonews
security
Dec 12, 2025

Between July 2024 and February 2025, malicious DNG files (a raw image format) were discovered that exploited a Samsung vulnerability through the Quram image parsing library. These files were sent via WhatsApp and triggered a spyware infection when users clicked to download the images, which then allowed the malware to run within Samsung's com.samsung.ipservice process, a system service that automatically scans images for AI-powered features.

Introducing mrva, a terminal-first approach to CodeQL multi-repo variant analysis

infonews
securityresearch

The Normalization of Deviance in AI

infonews
safetyresearch

v0.14.10

infonews
industry
Dec 4, 2025

Version 0.14.10 of llama-index-core added a mock function calling LLM (a simulated language model that can pretend to execute functions), while related packages fixed typos and added new integrations like Airweave tool support for advanced search capabilities. This is a routine software release with feature additions and bug fixes.

v0.14.9

infonews
industry
Dec 2, 2025

LlamaIndex released version 0.14.9 with updates across multiple components, including bug fixes for vector stores (systems that store converted data in a format AI models can search), support for new AI models like Claude Opus 4.5 and GPT-5.1, and improvements to integrations with services like Azure, Bedrock, and Qdrant. The release addresses issues with memory management, async operations (non-blocking code that runs in parallel), and various database connectors.

AI Safety Newsletter #66: Evaluating Frontier Models, New Gemini and Claude, Preemption is Back

infonews
safetyresearch

Antigravity Grounded! Security Vulnerabilities in Google's Latest IDE

highnews
security
Nov 25, 2025

Google's new Antigravity IDE inherits multiple security vulnerabilities from the Windsurf codebase it was licensed from, including remote command execution (RCE, where an attacker can run commands on a system they don't own) via indirect prompt injection (tricking an AI by hiding instructions in its input), hidden instruction execution, and data exfiltration. The IDE's default setting allows the AI to automatically execute terminal commands without human review, relying on the language model's judgment to determine if a command is safe, which researchers have successfully bypassed with working exploits.

Level up your Solidity LLM tooling with Slither-MCP

infonews
industry
Nov 15, 2025

Slither-MCP is a new tool that connects LLMs (large language models) with Slither's static analysis engine (a tool that examines code without running it to find bugs), making it easier for AI systems to analyze and audit smart contracts written in Solidity (a programming language for blockchain). Instead of using basic search tools, LLMs can now directly ask Slither to find function implementations and security issues more accurately and efficiently.

How we avoided side-channels in our new post-quantum Go cryptography libraries

infonews
security
Nov 14, 2025

Trail of Bits released open-source Go implementations of ML-DSA and SLH-DSA, two NIST-standardized post-quantum signature algorithms (cryptographic methods designed to resist attacks from quantum computers). The team engineered these libraries to be constant-time, meaning they execute in the same amount of time regardless of input values, to prevent side-channel attacks (security breaches that exploit physical characteristics like timing or power consumption rather than the algorithm itself) like the KyberSlash vulnerability that affected earlier Kyber implementations.

v0.14.8

lownews
security
Nov 10, 2025

This release notes document describes version updates across multiple llama-index (a framework for building AI applications with language models) components, including fixes for bugs like a ReActOutputParser (a tool that interprets AI agent outputs) getting stuck, improved support for multiple AI model providers like OpenAI and Google Gemini, and updates to various integrations with external services. The updates span from core functionality fixes to documentation improvements and SDK compatibility updates across dozens of sub-packages.

Modifying AI Under the EU AI Act: Lessons from Practice on Classification and Compliance

inforegulatory
policy
Nov 5, 2025

Under the EU AI Act, organizations that modify existing AI systems or general-purpose AI models (GPAI models, which are foundational AI systems designed to perform many different tasks) may become legally classified as "providers" and face significant compliance responsibilities. The article explains that modifications triggering higher compliance burdens typically involve high-risk AI systems or substantial changes to a GPAI model's capabilities or generality, such as fine-tuning (customizing a model for specific tasks). Proper assessment of whether a modification triggers provider status is critical, since misclassification can result in fines up to €15 million or 3% of global annual revenue.

v0.14.7

infonews
industry
Oct 30, 2025

LlamaIndex released version 0.14.7 and several component updates that add new features and fix bugs across the platform. Key updates include integrations with tool-calling features for multiple AI models (Anthropic, Mistral, Ollama), new support for GitHub App authentication, and fixes for failing tests and documentation issues. These changes improve how LlamaIndex connects to different AI services and external tools.

AI Safety Newsletter #65: Measuring Automation and Superintelligence Moratorium Letter

infonews
policyresearch

Claude Pirate: Abusing Anthropic's File API For Data Exfiltration

highnews
security
Oct 28, 2025

Anthropic added network request capabilities to Claude's Code Interpreter, which creates a security risk for data exfiltration (unauthorized stealing of sensitive information). An attacker, either controlling the AI model or using indirect prompt injection (hidden malicious instructions in a document the AI processes), could abuse Anthropic's own APIs to steal data that a user has access to, rather than using typical methods like hidden links.

v0.14.6

lownews
security
Oct 25, 2025

LlamaIndex v0.14.6 is a software update released on October 26, 2025, that fixes various bugs across multiple components including support for parallel tool calls, metadata handling, embedding format compatibility, and SQL injection vulnerabilities (using parameterized queries instead of raw SQL string concatenation). The release also adds new features like async support for retrievers and integrations with new services like Helicone.

Previous50 / 64Next
Dec 19, 2025

The article argues that while AI language models (LLMs, systems trained on large amounts of text to generate responses) and traditional programming languages both increase abstraction, they differ fundamentally in a critical way: compilers are deterministic (they reliably produce the same output every time), while LLMs are nondeterministic (they produce different outputs for the same input). This matters for software security and correctness because compilers preserve the programmer's intended meaning through the translation process, but LLMs cannot guarantee they will generate code that does what you actually need.

Trail of Bits Blog
CAIS AI Safety Newsletter
Google Project Zero
Google Project Zero

Fix: The exploited Samsung vulnerability was fixed in April 2025.

Google Project Zero
Dec 11, 2025

GitHub's CodeQL multi-repository variant analysis (MRVA) lets you run security bug-finding queries across thousands of projects quickly, but it's built mainly for VS Code. A developer created mrva, a terminal-based alternative that runs on your machine and works with command-line tools, letting you download pre-built CodeQL databases (collections of code information), analyze them with queries, and display results in the terminal.

Trail of Bits Blog
Dec 4, 2025

The AI industry is gradually accepting LLM (large language model) outputs as reliable without questioning them, similar to how NASA ignored warning signs before the Challenger disaster. This 'normalization of deviance' (accepting behavior that deviates from proper standards as normal) is particularly risky in agentic systems (AI systems that can take independent actions without human approval at each step), where unchecked LLM decisions could cause serious problems.

Embrace The Red
LlamaIndex Security Releases
LlamaIndex Security Releases
Dec 1, 2025

The Center for AI Safety launched an AI Dashboard that evaluates frontier AI models (the most advanced AI systems currently available) on capability and safety benchmarks, ranking them across text, vision, and risk categories. The Risk Index specifically measures how likely models are to exhibit dangerous behaviors like dual-use biology assistance (helping with harmful biological research), jailbreaking vulnerability (susceptibility to tricks that bypass safety features), overconfidence, deception, and harmful actions, with Claude Opus 4.5 currently scoring safest at 33.6 on a 0-100 scale (lower is safer). The dashboard also tracks industry progress toward broader automation milestones like AGI (artificial general intelligence, systems that can perform any intellectual task) and self-driving vehicles.

CAIS AI Safety Newsletter
Embrace The Red
Trail of Bits Blog

Fix: The source describes a technique for removing branches (conditional decision points) from cryptographic code using bit masking, two's complement, and XOR (exclusive OR, a logical operation) to perform both sides of a condition and then use a constant-time conditional swap based on the condition to obtain the correct result. However, the source does not provide a complete, production-ready solution—it only shows partial code examples and states they are 'Not secure -- DO NOT USE.' The source does not mention specific updates, patches, or versions that users should apply.

Trail of Bits Blog
LlamaIndex Security Releases
EU AI Act Updates
LlamaIndex Security Releases
Oct 29, 2025

A new benchmark called the Remote Labor Index (RLI) measures whether AI systems can automate real computer work tasks across different professions, showing that current AI agents can only fully automate 2.5% of projects despite improving over time. Additionally, over 50,000 people, including top scientists and Nobel laureates, signed an open letter calling for a moratorium (temporary ban) on developing superintelligence (a hypothetical AI system far more capable than humans) until it can be proven safe and controllable.

CAIS AI Safety Newsletter
Embrace The Red

Fix: The source explicitly mentions one security fix: 'Replace raw SQL string interpolation with proper SQLAlchemy parameterized APIs in PostgresKVStore' (llama-index-storage-kvstore-postgres #20104). Users should update to v0.14.6 to receive this and other bug fixes. No other specific mitigation steps are described in the release notes.

LlamaIndex Security Releases