aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1267 items

OpenClaw AI Runs Wild in Business Environments

medium · news
security · safety
Jan 30, 2026

OpenClaw AI, a widely used open source AI assistant also known as ClawdBot or MoltBot, is raising security concerns because it operates with elevated privileges (special access rights that allow it to control more of a computer) and can act autonomously without waiting for user approval. The combination of unrestricted access and independent decision-making in business environments poses risks to system security and data safety.

Dark Reading

Celebrating our 2025 open-source contributions

info · news
industry
Jan 30, 2026

Trail of Bits engineers contributed over 375 pull requests to 90+ open-source projects in 2025, including work on cryptography libraries, the Rust compiler, and Ethereum tools. Rather than forking or locally patching dependencies when they encountered bugs or needed features, they contributed fixes upstream so the entire community could benefit. Key contributions included adding identity monitoring to Sigstore's Rekor (a transparency log for software signing), improving Rust's linting tools, developing a new ASN.1 API (a standard for encoding data structures) for Python's cryptography library, and optimizing the Ethereum Virtual Machine implementation.

Trail of Bits Blog

Breaking the Sound Barrier, Part II: Exploiting CVE-2024-54529

info · news
security
Jan 30, 2026

CVE-2024-54529 is a type confusion vulnerability (where the code incorrectly assumes an object is a certain type without checking) in Apple's CoreAudio framework that allows attackers to crash the coreaudiod system daemon and potentially hijack control flow by manipulating pointer chains in memory. The vulnerability exists in the com.apple.audio.audiohald Mach service (a macOS inter-process communication system) where message handlers fetch objects without validating their actual type before performing operations on them.

Google Project Zero
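The bug class in this entry — a handler that fetches an object by ID and operates on it without confirming its concrete type — can be sketched in a toy Python model (all class and function names here are illustrative stand-ins, not CoreAudio internals):

```python
class AudioObject:
    """Base class for objects in a toy object registry."""

class AudioDevice(AudioObject):
    def set_volume(self, level: float) -> None:
        self.level = level

class AudioStream(AudioObject):
    """A stream has no set_volume; calling it on one is a type confusion."""

REGISTRY = {1: AudioDevice(), 2: AudioStream()}

def handle_set_volume_unsafe(object_id: int, level: float) -> None:
    # Vulnerable pattern: fetch by ID and assume the type.
    obj = REGISTRY[object_id]
    obj.set_volume(level)  # fails for AudioStream; in C this would
                           # reinterpret memory of the wrong type

def handle_set_volume_safe(object_id: int, level: float) -> bool:
    # Fixed pattern: validate the concrete type before operating.
    obj = REGISTRY.get(object_id)
    if not isinstance(obj, AudioDevice):
        return False  # reject missing or mistyped objects
    obj.set_volume(level)
    return True
```

In Python the unsafe path merely raises an AttributeError; in C/C++ the same fetch-without-check pattern treats memory of one type as another, which is what makes this class of bug exploitable.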

Big tech results show investor demand for payoffs from heavy AI spending

info · news
industry
Jan 29, 2026

Big tech companies are under pressure from investors to show that their heavy spending on AI is producing real financial results and business growth. Meta's stock rose after demonstrating AI improvements in advertising, while Microsoft's stock fell despite its large AI investments, showing that investors will reward companies with strong returns but punish those that don't deliver clear benefits from their AI spending.

The Guardian Technology

'Semantic Chaining' Jailbreak Dupes Gemini Nano Banana, Grok 4

low · news
security · safety
Jan 29, 2026

Researchers discovered a jailbreak technique called semantic chaining that tricks certain LLMs (AI models trained on massive amounts of text) by breaking malicious requests into small, separate chunks that the model processes without understanding the overall harmful intent. This vulnerability affected models like Gemini Nano and Grok 4, which failed to recognize the dangerous purpose when instructions were split across multiple parts.

Dark Reading

From Quantum to AI Risks: Preparing for Cybersecurity's Future

info · news
security · policy
Jan 29, 2026

Journalists highlight three major cybersecurity priorities: fixing known weaknesses in software, getting ready for quantum computing threats (powerful computers that could break current encryption), and improving how AI systems are built and used. The piece emphasizes that the cybersecurity industry needs to focus on these areas to stay ahead of emerging risks.

Dark Reading

Tech Life

info · news
industry
Jan 27, 2026

China's DeepSeek AI tool, which caused significant market disruption when it launched a year ago, is now being adopted by an increasing number of US companies. The episode discusses this growing trend of Chinese AI technology being integrated into American business operations.

BBC Technology

Beware: Government Using Image Manipulation for Propaganda

info · news
safety · policy
Jan 27, 2026

The White House digitally altered a photograph of an activist's arrest by darkening her skin and distorting her facial features to make her appear more distraught than in the original image posted by the Department of Homeland Security. AI detection tools confirmed the manipulation, raising concerns about how generative AI (systems that create images from text descriptions) and image editing technology can be misused by government to spread false information and reinforce racial stereotypes. The incident highlights the danger of deepfakes (realistic-looking fake media created with AI) and the importance of protecting citizens' right to independently document government actions.

EFF Deeplinks Blog

EFF Statement on ICE and CBP Violence

info · news
policy
Jan 26, 2026

This statement describes how U.S. Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have conducted surveillance and violated constitutional rights, including facial recognition scanning and warrantless home searches. The document argues these violations are systemic problems, citing recent deaths during enforcement actions and a leaked memo allowing searches based on administrative warrants (warrants issued by agency officials rather than judges) without judicial review.

Fix: Congress must vote to reject any further funding of ICE and CBP, and rebuild the immigration enforcement system from the ground up to respect human rights and ensure real accountability for individual officers, their leadership, and the agency as a whole.

EFF Deeplinks Blog

Search Engines, AI, And The Long Fight Over Fair Use

info · regulatory
policy · research
Jan 23, 2026

This article argues that training AI models on copyrighted works should be protected as fair use (the legal right to use copyrighted material without permission for certain purposes like research or analysis), just as courts have previously allowed for search engines and other information technologies. The article contends that AI training is transformative because it extracts patterns from works rather than replacing them, and that expanding copyright restrictions on AI training could harm legitimate research practices in science and medicine.

EFF Deeplinks Blog

The Next Frontier of Runtime Assembly Attacks: Leveraging LLMs to Generate Phishing JavaScript in Real Time

info · news
security · research
Jan 22, 2026

Attackers can use large language models (LLMs, AI systems trained on vast amounts of text to generate human-like responses) to create phishing pages that appear safe at first but transform into malicious sites after a victim visits them. The attack works by having a webpage secretly request the LLM to generate malicious JavaScript (code that runs in web browsers) using carefully crafted prompts that trick the AI into ignoring its safety rules, then assembling and running this code inside the victim's browser in real time. Because the malicious code is generated fresh each time and comes from trusted AI services, it bypasses traditional network security checks.

Fix: The source explicitly recommends runtime behavioral analysis to detect and block malicious activity at the point of execution within the browser. Palo Alto Networks customers are advised to use Advanced URL Filtering, Prisma AIRS, and Prisma Browser with Advanced Web Protection. Organizations are also encouraged to use the Unit 42 AI Security Assessment to help ensure safe AI use and development.

Palo Alto Unit 42

Copyright Kills Competition

info · regulatory
policy
Jan 21, 2026

The article argues that stronger copyright laws, often promoted as protecting creators from big tech, actually concentrate power among large corporations and create barriers that prevent competition and innovation. In the AI context specifically, requiring developers to license training data would be so expensive that only the largest companies could afford to build AI models, reducing competition and ultimately harming consumers through higher costs and worse services.

EFF Deeplinks Blog

v0.14.13

low · news
security
Jan 21, 2026

LlamaIndex 0.14.13 includes updates across its core library and integrations, featuring new capabilities like early stopping in agent workflows, token-based code splitting, and distributed data ingestion via RayIngestionPipeline. The release also includes several bug fixes, such as corrected error handling in aggregation functions and fixed async integration issues, plus security improvements that removed exposed API keys from notebook outputs.

LlamaIndex Security Releases

Minting Next.js Authentication Cookies

info · news
security
Jan 15, 2026

An attacker who exploits a React2Shell vulnerability (a deserialization flaw allowing arbitrary code execution) in a Next.js application can steal the NEXTAUTH_SECRET environment variable and use it to mint forged authentication cookies, gaining persistent access as any user. The attacker only needs this one secret value to create valid session tokens because next-auth uses HKDF (HMAC-based Key Derivation Function, which derives encryption keys from a master secret) with predictable salt values based on cookie names.

Fix: Ensure all secrets are rotated regularly, including the NEXTAUTH_SECRET or the newer AUTH_SECRET. The source also recommends these detection approaches: log the JWT ID on every session and alert on duplicates from different IP addresses; identify impossible travel by users; monitor for sessions without corresponding login events in auth logs; and watch for off-hours access or unusual user-agent strings.

Embrace The Red
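To illustrate why one leaked master secret suffices when the KDF salt is predictable, here is a minimal RFC 5869 HKDF-SHA256 in plain Python. The salt and info strings below are assumptions for illustration, not next-auth's exact derivation inputs:

```python
import hashlib
import hmac

def hkdf(master: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal RFC 5869 HKDF-SHA256 (extract-then-expand)."""
    prk = hmac.new(salt, master, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                               # expand step
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# If the salt is a deterministic constant (e.g. derived from the cookie
# name), anyone holding the master secret derives the identical key the
# server uses, and can therefore mint tokens the server will accept.
SECRET = b"leaked-master-secret"
server_key = hkdf(SECRET, salt=b"__Secure-next-auth.session-token",
                  info=b"encryption key")
attacker_key = hkdf(SECRET, salt=b"__Secure-next-auth.session-token",
                    info=b"encryption key")
assert server_key == attacker_key
```

The takeaway is that HKDF is doing its job here; the exposure comes entirely from the master secret leaking, which is why the Fix above centers on secret rotation rather than on the KDF itself.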

A 0-click exploit chain for the Pixel 9 Part 2: Cracking the Sandbox with a Big Wave

info · news
security
Jan 14, 2026

A researcher discovered three bugs in the BigWave driver on Pixel 9 phones, including one that allows escaping the mediacodec sandbox (a restricted environment where apps run with limited permissions) to gain kernel arbitrary read/write access. The most dangerous bug is a use-after-free vulnerability (accessing memory that has already been freed), which occurs when a worker thread continues processing a job after the file descriptor managing it has been closed and its memory destroyed.

Fix: Fixes were made available for all three bugs on January 5, 2026.

Google Project Zero

A 0-click exploit chain for the Pixel 9 Part 1: Decoding Dolby

info · news
security
Jan 14, 2026

Google's security team discovered a critical vulnerability (CVE-2025-54957) in the Dolby Unified Decoder, a library that processes audio formats on Android phones. The vulnerability is dangerous because AI features automatically decode incoming audio messages without user interaction, putting the decoder in the 0-click attack surface (meaning attackers can exploit it without users taking any action). Researchers demonstrated a complete exploit chain on the Pixel 9 that chains multiple vulnerabilities together to gain control of the device, highlighting how media decoder bugs can be practically weaponized on modern Android phones.

Fix: The vulnerabilities discussed in these posts were fixed as of January 5, 2026.

Google Project Zero

Lack of isolation in agentic browsers resurfaces old vulnerabilities

high · news
security · safety
Jan 13, 2026

Agentic browsers (web browsers with embedded AI agents) lack proper isolation mechanisms, allowing attackers to exploit them in ways similar to cross-site scripting (XSS, where malicious code runs on websites you visit) and cross-site request forgery (CSRF, where attackers trick your browser into making unwanted requests). Because AI agents have access to the same sensitive data that users trust browsers with, like bank accounts and passwords, inadequate isolation between the AI agent and websites creates old security vulnerabilities that the web community thought it had solved decades ago.

Fix: The key recommendation for developers of agentic browsers is to extend the Same-Origin Policy (a security rule that keeps different websites' data separate in browsers) to AI agents, building on proven principles that successfully secured the web.

Trail of Bits Blog
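The recommended direction — applying a Same-Origin Policy-style rule to agent actions — can be sketched as follows. This is a minimal illustration; `allow_agent_request` and its semantics are assumptions for the sketch, not the post's actual design:

```python
from urllib.parse import urlsplit

def origin(url: str) -> tuple:
    """Scheme + host + port triple, as in the browser Same-Origin Policy."""
    parts = urlsplit(url)
    return (parts.scheme, parts.hostname or "", parts.port)

def allow_agent_request(triggering_page: str, target: str) -> bool:
    # Block cross-origin actions initiated by untrusted page content --
    # the same rule that stops classic XSS/CSRF-style data exfiltration
    # when applied to scripts instead of agents.
    return origin(triggering_page) == origin(target)

# An agent acting on a banking page may call that bank's own endpoints,
# but instructions injected by another site must not reach the bank.
assert allow_agent_request("https://bank.example/home",
                           "https://bank.example/api/balance")
assert not allow_agent_request("https://evil.example/page",
                               "https://bank.example/api/transfer")
```

Real agentic browsers would need more than this (per-origin credential scoping, provenance tracking for instructions), but the origin comparison above is the core invariant the post argues should carry over from the classic web security model.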

Agentic ProbLLMs: Exploiting AI Computer-Use And Coding Agents (39C3 Video + Slides)

info · news
security · research
Dec 31, 2025

This presentation covers security vulnerabilities found in agentic systems: AI agents (systems that can act autonomously) that use computers and write code. The talk includes demonstrations of exploits discovered during the Month of AI Bugs, a security research initiative focused on finding bugs in AI systems.

Embrace The Red

Fighting Renewed Attempts to Make ISPs Copyright Cops: 2025 in Review

info · news
policy
Dec 30, 2025

A major copyright case is now before the Supreme Court, asking whether internet service providers (ISPs) must act as copyright enforcers by cutting off users' internet access based on accusations alone. A lower court ruled that ISPs could be held liable for copyright infringement by their customers, which could lead to entire households, schools, and libraries losing internet access due to one person's alleged infringement, especially harming low-income and underserved communities.

EFF Deeplinks Blog

v0.14.12

low · news
security
Dec 29, 2025

llama-index v0.14.12, a framework for building AI applications, contains updates across multiple components, including bug fixes, new features for asynchronous tool support, and improvements to integrations with services like OpenAI, Google, Anthropic, and various vector stores (databases that store numerical representations of data for AI searching). Key fixes address crashes in logging, missing parameters in tool handling, and compatibility with newer Python versions.

LlamaIndex Security Releases
