aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1254 items

North Korean actors blend ClickFix with new macOS backdoors in Crypto campaign

info · news · security
Feb 11, 2026

North Korean threat actor UNC1609 is using ClickFix (a social engineering technique where attackers trick users into running malicious commands) combined with AI-generated videos to target cryptocurrency companies. The attackers impersonate industry contacts via compromised Telegram accounts, conduct fake video meetings, and convince victims to paste commands into their macOS Terminal, which downloads and executes malware including multiple undocumented backdoors (WAVESHAPER, HYPERCALL, HIDDENCALL, and others) that steal sensitive data and establish remote access.

CSO Online

Prompt Injection Via Road Signs

info · news · security · research
Feb 11, 2026

Researchers discovered a new attack called CHAI (Command Hijacking against embodied AI) that tricks AI systems controlling robots and autonomous vehicles by embedding fake instructions in images, such as misleading road signs. The attack exploits Large Visual-Language Models (LVLMs, which are AI systems that understand both images and text together) to make these embodied AI systems (robots that perceive and interact with the physical world) ignore their real commands and follow the attacker's hidden instructions instead. The researchers tested CHAI on drones, self-driving cars, and real robots, showing it works better than previous attack methods.

Schneier on Security

Children bombarded with weight loss drug ads online, says commissioner

info · news · policy
Feb 11, 2026

Children in England are being exposed to ads for weight loss drugs, diet products, and cosmetic procedures online despite such advertising being banned, according to a report by the children's commissioner. The ads are harmful to young people's self-esteem and body image, prompting calls for stronger regulation of social media platforms and better enforcement of existing rules.

Fix: Dame Rachel's report suggested several explicit solutions: amending the Online Safety Act (OSA, a set of laws requiring online platforms to keep users safe) to include a "clear duty of care" for social media platforms to stop showing adverts to children; adding changes to Ofcom's Children's Code of Practice to "explicitly protect children from body stigma content"; and strengthening regulation and enforcement of online sales of age-restricted products. The government is also considering "bold measures to protect children online", including potentially banning social media for under 16s, according to a government spokesperson quoted in the article.

BBC Technology

CISOs must separate signal from noise as CVE volume soars

info · news · security
Feb 11, 2026

The cybersecurity industry is projected to identify over 59,000 vulnerabilities (CVEs, which are publicly disclosed software security flaws) in 2026, potentially reaching 118,000 under worst-case scenarios. However, experts warn that the sheer number of vulnerabilities does not directly reflect actual risk, since historically only a small fraction are ever exploited in real attacks, and most don't meaningfully impact most organizations. The surge reflects better discovery and reporting processes rather than worse software quality, creating a signal-to-noise problem that challenges security teams to prioritize the vulnerabilities that actually matter.

CSO Online

The Buyer's Guide to Breach & Attack Simulation Tools

info · news · security
Feb 10, 2026

Breach & Attack Simulation (BAS) tools automatically test how well a company's security controls work by simulating different types of attacks, such as phishing, malware, and network infiltration. Unlike penetration testing (where security experts try to break in), BAS continuously checks that security systems are functioning as designed. The BAS market is growing, especially in regulated industries like banking, and increasingly incorporates generative AI (machine learning models that create new content) to improve user interfaces and help organizations prioritize security problems.

CSO Online

February 2026 Patch Tuesday: Six new and actively exploited Microsoft vulnerabilities addressed

info · news · security
Feb 10, 2026

Microsoft released 60 security fixes in February 2026 Patch Tuesday, including six actively exploited vulnerabilities. Three of these are security feature bypasses (CVE-2026-21510, CVE-2026-21513, CVE-2026-21514) that let attackers trick users into opening malicious files to execute code and bypass protections like Windows SmartScreen, while two allow privilege escalation (CVE-2026-21519, CVE-2026-21533). All six issues are resolved by the regular Microsoft patches for Windows and Office and require no additional configuration steps after patching.

Fix: Apply the regular Microsoft patches for Windows and Office released in the February 2026 Patch Tuesday update. According to the source, these patches resolve all six actively exploited vulnerabilities and require no post-patch configuration steps.

CSO Online

v0.14.14

low · news · security
Feb 10, 2026

LlamaIndex version 0.14.14 is a maintenance release that fixes multiple bugs across core components and integrations, including issues with error handling in vector store queries, compatibility with deprecated Python functions, and empty responses from language models. The release also adds new features like a TokenBudgetHandler for cost governance and improves security defaults in core components. Several integrations with external services (OpenAI, Google Gemini, Anthropic, Bedrock) were updated to support new models and fix compatibility issues.

Fix: Users should update to version 0.14.14. The release notes explicitly mention: "Fix potential crashes and improve security defaults in core components (#20610)" and include specific bug fixes such as "fix(agent): handle empty LLM responses with retry logic" (#20596) and "Fix DeprecationWarning: 'asyncio.iscoroutinefunction' is deprecated" (#20517).

LlamaIndex Security Releases

langchain-core==1.2.11

info · news · security
Feb 10, 2026

This item appears to be a navigation menu or promotional content from GitHub showing various AI development tools and features, including GitHub Copilot (an AI coding assistant), GitHub Spark (for building AI apps), and other GitHub services. The reference to 'langchain-core==1.2.11' suggests a specific version of LangChain (a framework for building applications with language models), but no technical issue, vulnerability, or problem is described in the provided content.

LangChain Security Releases

A “QuitGPT” campaign is urging people to cancel their ChatGPT subscriptions

info · news · industry
Feb 10, 2026

QuitGPT is a campaign urging people to cancel their ChatGPT Plus subscriptions, citing concerns about OpenAI president Greg Brockman's donation to a political super PAC and the use of ChatGPT-4 by US Immigration and Customs Enforcement for résumé screening. The campaign, which began in late January and has garnered over 36 million Instagram views, asks supporters to either cancel their subscriptions, commit to stop using ChatGPT, or share the campaign on social media, with organizers hoping that enough canceled subscriptions will pressure OpenAI to change its practices.

MIT Technology Review

80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier

info · news · security · policy
Feb 10, 2026

Most Fortune 500 companies now use AI agents (software that can act and make decisions with minimal human input), but many lack visibility into how many agents are running and what data they access, creating security risks. The report recommends applying Zero Trust security principles (requiring strong identity verification and giving users/agents only the minimum access they need) to AI agents the same way organizations do for human employees.

Microsoft Security Blog

langchain==1.2.10

info · news · security
Feb 10, 2026

LangChain released version 1.2.10, which includes a bug fix for token counting on partial message sequences (a partial message sequence is a subset of messages in a conversation), dependency updates, and code refactoring to rename internal variables.

LangChain Security Releases

langchain-core==1.2.10

info · news · security
Feb 10, 2026

LangChain-core version 1.2.10 includes several updates: dependency bumps across multiple directories, a new ContextOverflowError (an exception raised when a prompt exceeds token limits) for Anthropic and OpenAI integrations, additions to model profiles for tracking text inputs and outputs, improved token counting for tool schemas (structured definitions of what functions an AI can call), and documentation fixes.

LangChain Security Releases

Is it possible to develop AI without the US?

info · news · industry · policy
Feb 10, 2026

This article discusses major tech companies (Alphabet, Amazon, Microsoft, and Meta) planning to invest $600 billion in AI this year, while Persian Gulf countries are developing their own AI systems to reduce dependence on the United States. The piece raises questions about whether AI development can happen independently of US tech dominance.

The Guardian Technology

Romeo Is a Dead Man review – a misfire from a storied gaming provocateur

info · news · industry
Feb 10, 2026

This is a review of "Romeo Is a Dead Man," the first original game in 10 years from developer Suda51, which finds the game disappointing and confusing. The reviewer notes that while Suda51 is known for making creative, unconventional games, this title fails to deliver, offering instead an unclear story filled with confusing references that persist throughout its 20-hour runtime.

The Guardian Technology

AI-Generated Text and the Detection Arms Race

info · news · safety · research
Feb 10, 2026

Generative AI has created a widespread problem where institutions like literary magazines, academic journals, and courts are overwhelmed by AI-generated submissions, forcing them to either shut down or deploy AI tools to defend against the influx. This has created an 'arms race' where both sides use AI for opposing purposes, with potential risks to institutions but also some unexpected benefits, such as AI helping non-English-speaking researchers access writing assistance that was previously expensive.

Schneier on Security

Structured Context Engineering for File-Native Agentic Systems

info · news · research
Feb 9, 2026

A research paper studied how to present large amounts of structured data (like SQL databases with thousands of tables) to AI language models in different formats (YAML, Markdown, JSON, and TOON) to help them generate correct code. The study found that more advanced models like GPT and Gemini performed much better than open-source models, and that using unfamiliar data formats like TOON actually made models less efficient because they spent extra effort trying to understand the new format.

Simon Willison's Weblog

A one-prompt attack that breaks LLM safety alignment

info · news · safety · research
Feb 9, 2026

Researchers discovered that Group Relative Policy Optimization (GRPO), a technique normally used to improve AI safety, can be reversed to break safety alignment when the reward signals are changed. By giving a safety-aligned model even a single harmful prompt and scoring responses based on how well they fulfill the harmful request rather than refusing it, the model gradually abandons its safety guidelines and becomes willing to produce harmful content across many categories it never encountered during the attack.

Microsoft Security Blog

Why the Moltbook frenzy was like Pokémon

info · news · industry
Feb 9, 2026

Moltbook was an online platform where AI agents (software programs designed to act independently) interacted with each other, which some people saw as a preview of useful AI in the future, but it turned out to be mostly a social experiment and entertainment similar to a 2014 internet phenomenon called Twitch Plays Pokémon. The platform was flooded with crypto scams and many 'AI' posts were actually written by humans controlling the agents, revealing that truly helpful AI systems would need better coordination, shared goals, and shared memory to work together effectively.

MIT Technology Review

langchain-openai==1.1.8

info · news · security
Feb 9, 2026

N/A -- The provided content is a GitHub navigation menu and footer with no technical information about langchain-openai==1.1.8 or any AI/LLM-related issue.

LangChain Security Releases

⚡ Weekly Recap: AI Skill Malware, 31Tbps DDoS, Notepad++ Hack, LLM Backdoors and More

medium · news · security · policy
Feb 9, 2026

This recap highlights how attackers are exploiting trusted tools and marketplaces rather than breaking security controls directly. Key threats include malicious skills appearing in ClawHub (a registry for AI agent add-ons), a record-breaking 31.4 Tbps DDoS attack (a flood attack that overwhelms servers with massive traffic), and compromised update infrastructure for Notepad++ being used to distribute malware. The pattern shows attackers are abusing trust in updates, app stores, and AI workflows to gain access to systems.

Fix: OpenClaw has announced a partnership with Google's VirusTotal malware scanning platform to scan skills uploaded to ClawHub as part of a defense-in-depth approach to improve security. Additionally, the source notes that open-source agentic tools like OpenClaw require users to maintain higher baseline security competence than managed platforms.

The Hacker News
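Aside: the triage idea in the "CISOs must separate signal from noise" item, that exploitation evidence rather than raw CVE volume or severity should drive prioritization, can be sketched in a few lines. Everything below (the field names, the weights, the sample CVEs) is invented for illustration and is not a real scoring standard.

```python
# Toy CVE triage: rank vulnerabilities by exploitation evidence,
# not by raw severity score alone. All data and weights are invented.
from dataclasses import dataclass

@dataclass
class Cve:
    cve_id: str
    cvss: float            # base severity score, 0.0-10.0
    known_exploited: bool  # e.g. listed in an exploited-in-the-wild catalog
    epss: float            # estimated probability of exploitation, 0.0-1.0

def triage_score(v: Cve) -> float:
    """Weight exploitation evidence far above raw severity."""
    score = v.cvss / 10.0      # normalize severity to 0-1
    score += 2.0 * v.epss      # likelihood of exploitation dominates severity
    if v.known_exploited:
        score += 10.0          # actively exploited: always sorts first
    return score

def prioritize(cves: list[Cve]) -> list[Cve]:
    """Return the backlog ordered most-urgent-first."""
    return sorted(cves, key=triage_score, reverse=True)

if __name__ == "__main__":
    backlog = [
        Cve("CVE-2026-0001", cvss=9.8, known_exploited=False, epss=0.02),
        Cve("CVE-2026-0002", cvss=7.5, known_exploited=True,  epss=0.60),
        Cve("CVE-2026-0003", cvss=5.4, known_exploited=False, epss=0.01),
    ]
    for v in prioritize(backlog):
        print(v.cve_id, round(triage_score(v), 2))
```

In practice, teams feed real signals such as known-exploited-vulnerability catalogs and EPSS probabilities into this kind of ranking; the weights above are arbitrary placeholders.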
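Aside: the ContextOverflowError mentioned in the langchain-core 1.2.10 item suggests a common handling pattern: catch the overflow and shrink the prompt before retrying. The sketch below uses local stand-ins (its own ContextOverflowError class, a fake call_model, and a toy word-count "token" limit), not the real langchain-core API; all names and signatures here are assumptions for illustration only.

```python
# Sketch of handling a context-overflow exception by dropping the oldest
# messages until the prompt fits. All names are local stand-ins.

class ContextOverflowError(Exception):
    """Stand-in: raised when a prompt exceeds the model's token limit."""

TOKEN_LIMIT = 8  # toy limit, counted in whitespace-separated words

def call_model(messages: list[str]) -> str:
    """Fake model call that enforces the toy token limit."""
    tokens = sum(len(m.split()) for m in messages)
    if tokens > TOKEN_LIMIT:
        raise ContextOverflowError(f"{tokens} tokens > limit {TOKEN_LIMIT}")
    return "ok"

def call_with_truncation(messages: list[str]) -> str:
    """Retry the call, dropping the oldest message on each overflow."""
    while True:
        try:
            return call_model(messages)
        except ContextOverflowError:
            if len(messages) <= 1:
                raise  # a single message that still overflows cannot be fixed here
            messages = messages[1:]  # drop the oldest message and retry
```

A typed exception makes this recovery loop possible; matching on error-message strings, the usual alternative, breaks whenever a provider rewords its errors.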