aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3179 items

OpenClaw founder Peter Steinberger is joining OpenAI

info · news
industry
Feb 15, 2026

Peter Steinberger, founder of OpenClaw (an AI agent: a system designed to complete tasks autonomously), has joined OpenAI. Sam Altman said that Steinberger's expertise in getting multiple AI agents to work together will be important to OpenAI's future products, as the company believes the future will involve many agents collaborating.

The Verge (AI)

Starmer to extend online safety rules to AI chatbots after Grok scandal

info · news
policy · safety
Feb 15, 2026

The UK government plans to extend online safety rules to AI chatbots, with makers of systems that endanger children facing fines or service blocks. This follows a scandal involving Elon Musk's Grok tool (an AI chatbot), which was stopped from generating sexualized images of real people in the UK after public pressure.

The Guardian Technology

The AI trade has entered a puzzling phase. Do we know who the winners are anymore?

info · news
industry
Feb 15, 2026

N/A -- The provided content is a footer/navigation page from CNBC with no substantive information about AI or LLM-related topics. It contains only website links, legal notices, and subscription prompts, making it impossible to extract meaningful technical content to summarize.

CNBC Technology

I hate my AI pet with every fiber of my being

info · news
industry
Feb 15, 2026

A reviewer describes their negative experience with Moflin, Casio's AI-powered robotic pet, finding its constant noises and movements irritating despite its cute appearance and its design for people who cannot own real pets. The article suggests that AI pet companions, while intended to provide companionship, may create frustration rather than the comfort they promise.

The Verge (AI)

AI can’t make good video game worlds yet, and it might never be able to

info · news
industry
Feb 15, 2026

Video game developers have long created games that generate their own worlds from programmed rules and parameters, as in Minecraft and Rogue, but the article suggests that generative AI (machine-learning models that create new content) may struggle to replicate this capability effectively. The piece implies fundamental limitations in how AI can approach world-building compared with human developers' intentional design methods.

The Verge (AI)

langchain-openrouter==0.0.2

info · news
security
Feb 15, 2026

This appears to be a navigation or header section from a GitHub page related to AI coding tools like GitHub Copilot and Spark, rather than a security issue or technical problem with the langchain-openrouter package.

LangChain Security Releases

langchain-anthropic==1.3.3

info · news
security
Feb 15, 2026

LangChain-Anthropic version 1.3.3 is a software release that includes several updates to how the library works with Anthropic's AI models. The updates add support for an "effort=max" parameter (which tells the AI to use maximum computational effort), fix an issue where extra spaces were being left at the end of AI responses, and introduce a new ContextOverflowError (an error that triggers when an AI receives too much text to process at once).

Fix: Update to langchain-anthropic version 1.3.3, which includes fixes for trailing whitespace in assistant messages and support for the effort="max" parameter.

LangChain Security Releases

langchain-openai==1.1.9

low · news
security
Feb 15, 2026

LangChain's OpenAI integration released version 1.1.9, which fixes a bug where URLs in images weren't being properly cleaned up when the system counted how many tokens (units of text that an AI processes) were being used. The update also adds better error handling for when a prompt (input text to an AI) becomes too long to process.

Fix: Update to langchain-openai version 1.1.9 or later. The fix for URL sanitization when counting image tokens is included in this release.

LangChain Security Releases

langchain-core==1.2.13

info · news
security
Feb 15, 2026

This is a release announcement for langchain-core version 1.2.13, a software package that provides core functionality for building applications with language models. The release includes documentation improvements, a new OpenRouter provider package, and a code style update.

LangChain Security Releases

langchain-openrouter==0.0.1: feat(openrouter): add `langchain-openrouter` provider package (#35211)

info · news
security
Feb 15, 2026

LangChain added a new official package called langchain-openrouter that wraps the OpenRouter Python SDK (a library for accessing different AI models through one interface). This package, which includes a ChatOpenRouter component, handles capabilities that the existing ChatOpenAI component intentionally does not support.

LangChain Security Releases

No swiping involved: the AI dating apps promising to find your soulmate

info · news
industry
Feb 15, 2026

New AI-powered dating apps like Fate are emerging that use agentic AI (AI systems that can take actions and make decisions autonomously) and LLMs (large language models, the technology behind systems like ChatGPT) to match users based on personality similarity rather than superficial rankings, and some offer AI coaching to help users have better conversations. These startups aim to address problems with existing dating apps that use algorithmic ranking systems like Elo scores (ratings originally designed for chess) and are criticized for profiting by keeping users on the platform longer.

The Guardian Technology

How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt

info · news
research · safety
Feb 15, 2026

Cognitive debt (the loss of shared understanding in developers' minds about how a system works) is becoming a bigger problem than technical debt (poorly written code) when using generative AI and agentic AI (AI systems that can take actions autonomously). Even if AI produces clean code, developers may lose track of why design decisions were made or how different parts connect, making it impossible to understand or modify the system confidently.

Simon Willison's Weblog

CVE-2026-23194: In the Linux kernel, the following vulnerability has been resolved: rust_binder: correctly handle FDA objects of length

info · vulnerability
security
Feb 14, 2026
CVE-2026-23194

A bug in the Linux kernel's Rust implementation of Binder (a system for communication between processes) caused an out-of-bounds error when handling empty FDA objects (arrays of file descriptors with zero entries). The code incorrectly used a special value to mark certain operations, which conflicted with the valid case of an empty array, potentially allowing writes beyond the allocated memory buffer.

Fix: The bug was fixed by replacing the pattern of using `skip == 0` as a special marker value with a Rust enum instead. This change eliminates the ambiguity between a special marker value and the legitimate case of an empty FDA with zero-length skip.

NVD/CVE Database

US military used Anthropic’s AI model Claude in Venezuela raid, report says

info · news
security · policy
Feb 14, 2026

According to the Wall Street Journal, Claude (an AI model made by Anthropic) was used by the US military in an operation in Venezuela involving airstrikes and resulting in 83 deaths. This violates Anthropic's terms of use, which explicitly forbid Claude from being used for violence, weapons development, or surveillance.

The Guardian Technology

It's been a big — but rocky — week for AI models from China. Here's what's happened

info · news
industry
Feb 14, 2026

Chinese tech companies Alibaba, ByteDance, and Kuaishou released new AI models this week that compete with Western AI tools in robotics and video generation. Alibaba's RynnBrain helps robots understand and interact with physical objects by tracking time and location, while ByteDance's Seedance 2.0 generates realistic videos from text prompts. However, ByteDance suspended Seedance's voice generation feature after concerns emerged that it was creating voices without the consent of the people whose images were used.

CNBC Technology

Anthropic's public benefit mission

info · news
policy
Feb 13, 2026

Anthropic is a public benefit corporation (a company legally structured to serve the public interest, not just shareholders) that has stated its mission as developing AI responsibly for humanity's benefit. The company's official incorporation documents show this mission statement has remained consistent from 2021 to 2024, with only minor wording updates.

Simon Willison's Weblog

The evolution of OpenAI's mission statement

info · news
policy · industry
Feb 13, 2026

This article tracks how OpenAI's official mission statement, filed annually with the IRS (the U.S. tax authority), changed between 2016 and 2024. Over time, OpenAI removed mentions of openly sharing capabilities, dropped the phrase "as a whole" from "benefit humanity," and shifted from wanting to "help" build safe AI to committing to "develop and responsibly deploy" it themselves. The mission was eventually cut down to a single sentence focused on ensuring artificial general intelligence (AI systems designed to handle any task a human can) benefits all of humanity, while notably removing any mention of safety.

Simon Willison's Weblog

Anthropic got an 11% user boost from its OpenAI-bashing Super Bowl ad, data shows

info · news
industry
Feb 13, 2026

Anthropic's Super Bowl advertisement criticizing OpenAI's decision to add ads to ChatGPT resulted in an 11% increase in daily active users for Claude (Anthropic's chatbot), outperforming competing AI chatbots from OpenAI, Google, and Meta. The ad campaign reflects growing competition between AI companies as they vie for users and enterprise customers ahead of potential future public offerings.

CNBC Technology

GHSA-w5cr-2qhr-jqc5: Cloudflare Agents has a Reflected Cross-Site Scripting (XSS) vulnerability in AI Playground site

medium · vulnerability
security
Feb 13, 2026

A reflected cross-site scripting (XSS) vulnerability, in which malicious code injected through a URL parameter is executed in the victim's browser, was found in Cloudflare Agents' AI Playground OAuth callback handler. An attacker could craft a malicious link that, when clicked, steals the victim's chat history and LLM interactions, and could control connected MCP Servers (tools that extend what an AI can do) on the victim's behalf.

Fix: Agents-sdk users should upgrade to agents@0.3.10. Developers using configureOAuthCallback with custom error handling should ensure all user-controlled input is escaped (converted to safe text that won't be interpreted as code) before being inserted into HTML. See PR: https://github.com/cloudflare/agents/pull/841

GitHub Advisory Database

Claude LLM artifacts abused to push Mac infostealers in ClickFix attack

high · news
security
Feb 13, 2026

Threat actors are abusing Claude artifacts (AI-generated content shared publicly on claude.ai) and Google Ads to trick macOS users into running malicious commands that install the MacSync infostealer (malware that steals sensitive data such as passwords and crypto wallets). Over 10,000 users have viewed the fake guides, which are disguised as legitimate tools like DNS resolvers or the Homebrew package manager.

Fix: Users are advised to exercise caution and avoid running Terminal commands they don't fully understand. As Kaspersky researchers note, asking the chatbot in the same conversation whether the provided commands are safe is a straightforward way to check them.

BleepingComputer
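The CVE-2026-23194 entry above describes replacing a sentinel value (`skip == 0`) with an enum, so that "no skip requested" can no longer be confused with a legitimate zero-length skip. The kernel fix is in Rust; the sketch below illustrates the same pattern in Python, with illustrative names that are not taken from the kernel code:

```python
from dataclasses import dataclass
from typing import Union

# Buggy pattern: the integer 0 doubles as "no skip requested" AND as a
# real zero-length skip, so the two cases cannot be told apart.
def parse_skip_buggy(skip: int) -> str:
    if skip == 0:  # sentinel collides with a legitimate value
        return "no skip"
    return f"skip {skip} entries"

# Fixed pattern: a dedicated variant per case, analogous to the Rust
# enum in the kernel fix. No integer value is overloaded as a marker.
@dataclass
class NoSkip:
    pass

@dataclass
class Skip:
    count: int  # 0 is now a valid, unambiguous count

SkipSpec = Union[NoSkip, Skip]

def parse_skip(spec: SkipSpec) -> str:
    if isinstance(spec, NoSkip):
        return "no skip"
    return f"skip {spec.count} entries"
```

With the buggy version, a zero-length skip is silently misread as "no skip"; with the explicit variants, `Skip(0)` and `NoSkip()` remain distinct, which is the ambiguity the kernel patch eliminates.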
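The GHSA-w5cr-2qhr-jqc5 fix above hinges on escaping user-controlled input before inserting it into HTML. A minimal sketch of that mitigation using Python's standard-library `html.escape` (the function and parameter names here are illustrative, not Cloudflare's code):

```python
import html

def render_error(user_message: str) -> str:
    """Build an error-page fragment from a user-supplied string.

    html.escape converts characters with special meaning in HTML
    (<, >, &, and with quote=True also " and ') into entities, so a
    payload such as <script>...</script> is rendered as visible text
    instead of being executed by the browser.
    """
    safe = html.escape(user_message, quote=True)
    return f"<p>OAuth error: {safe}</p>"
```

A reflected-XSS payload passed through this function comes back inert: `render_error('<script>steal()</script>')` produces `<p>OAuth error: &lt;script&gt;steal()&lt;/script&gt;</p>`, which the browser displays rather than runs.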