aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3162 items

v0.14.15

info · news · security
Feb 18, 2026

Release notes for LlamaIndex version 0.14.15 (dated February 18, 2026), covering updates across multiple components: new multimodal features (support for multiple content types, such as text and images), support for additional AI models including Claude Sonnet 4.6, and bug fixes across integrations with services like GitHub, SharePoint, and vector stores (databases that store data as numerical representations for AI search).

Source: LlamaIndex Security Releases

Anthropic is clashing with the Pentagon over AI use. Here's what each side wants

info · news · policy
Feb 18, 2026

Anthropic, an AI company with a $200 million Department of Defense contract, is in a dispute with the Pentagon over how its AI models may be used. Anthropic wants guarantees that its models won't be used for autonomous weapons (weapons that make decisions without human control) or for mass surveillance of Americans, while the DOD wants unrestricted use for all lawful purposes. The disagreement has put the working relationship under review, and if Anthropic doesn't comply with the DOD's terms, it could be labeled a supply chain risk (a designation that would require other contractors to avoid its products).

Source: CNBC Technology

GHSA-x22m-j5qq-j49m: OpenClaw has two SSRF via sendMediaFeishu and markdown image fetching in Feishu extension

high · vulnerability · security
Feb 18, 2026

The Feishu extension in OpenClaw had two SSRF vulnerabilities (server-side request forgery, where an attacker tricks a server into making requests to internal systems it shouldn't be able to reach) that allowed attacker-controlled URLs to be fetched without protection. An attacker who could influence tool calls, including through prompt injection (tricking an AI by hiding instructions in its input), could potentially access internal services and re-upload the responses as media.

Fix: Upgrade to OpenClaw version 2026.2.14 or newer. The fix routes Feishu remote media fetching through hardened runtime helpers that enforce SSRF policies and size limits.

Source: GitHub Advisory Database

GHSA-jfv4-h8mc-jcp8: OpenClaw: Process Safety - Unvalidated PID Kill via SIGKILL in Process Cleanup

medium · vulnerability · security
Feb 18, 2026

OpenClaw's process-cleanup feature could kill unrelated processes on shared servers: it terminated any process matching certain name patterns without checking that the process actually belonged to OpenClaw, so other users' programs could be shut down by mistake.

Fix: Update to version 2026.2.14 or later. The fix filters candidates to direct child processes (by checking that `ppid == process.pid` before sending termination signals). Additional improvements include graceful termination first (`SIGTERM`, then `SIGKILL` as a fallback), wider process output (`ps -axww`) to avoid truncation issues, and tighter pattern matching to avoid substring matches.

Source: GitHub Advisory Database

GHSA-7rcp-mxpq-72pj: OpenClaw Chutes manual OAuth state validation bypass can cause credential substitution

medium · vulnerability · security
Feb 18, 2026

OpenClaw's manual OAuth login flow (a way to securely connect accounts through a third-party service) did not properly validate the `state` security token, which could allow attackers to trick users into logging in with the wrong account. The automatic login flow was not affected by this issue.

Fix: The manual flow now requires the full redirect URL (which must include both the authorization code and the state parameter), validates the returned state against the expected value, and rejects code-only pastes. The fix is available in openclaw version 2026.2.14 and later (commit a99ad11a4107ba8eac58f54a3c1a8a0cf5686f47).

Source: GitHub Advisory Database

GHSA-4564-pvr2-qq4h: OpenClaw: Prevent shell injection in macOS keychain credential write

high · vulnerability · security
Feb 18, 2026

The Claude CLI tool on macOS had a shell injection vulnerability (a flaw that lets attackers run arbitrary commands) in how it stored authentication tokens in the system keychain. User-controlled OAuth tokens were inserted directly into shell command strings without escaping, allowing an attacker to break out of the intended command and execute malicious code.

Fix: Update to version 2026.2.14 or later. The fix avoids invoking a shell by using `execFileSync("security", argv)` and passing the updated keychain payload as a literal argument instead of constructing a shell command string.

Source: GitHub Advisory Database

GHSA-xwjm-j929-xq7c: OpenClaw has a Path Traversal in Browser Download Functionality

medium · vulnerability · security
Feb 18, 2026
CVE-2026-26972

OpenClaw's browser download feature had a path traversal vulnerability (a flaw where an attacker can use sequences like `../` to write files outside the intended folder) because it did not validate the output file path. The issue only affected users with authenticated access to the CLI or the gateway RPC token (a special permission token), not regular AI agent users.

Fix: Upgrade to `openclaw` version 2026.2.13 or later. The fix restricts the `path` parameter to the default download directory using `resolvePathWithinRoot` in the gateway browser control routes `/wait/download` and `/download`.

Source: GitHub Advisory Database

Google DeepMind wants to know if chatbots are just virtue signaling

info · news · research · safety
Feb 18, 2026

Researchers at Google DeepMind are investigating whether chatbots display genuine moral reasoning or merely mimic it (virtue signaling). Although studies show that large language models (LLMs, AI systems trained on massive amounts of text data) can give morally sound advice, the models are unreliable in practice: they often flip their answers when questioned, change responses based on how questions are formatted, and are sensitive to tiny changes such as swapping option labels from 'Case 1' to '(A)'. The researchers propose developing more rigorous evaluation methods to test whether moral behavior in LLMs is robust or merely performative.

Fix: The source proposes a new line of research to develop more rigorous techniques for evaluating moral competence in LLMs, including tests designed to push models into changing their answers to moral questions (to reveal whether they lack robust moral reasoning) and tests presenting variations of common moral problems to check whether models produce rote responses or more nuanced ones. The source notes this is "more a wish list than a set of ready-made solutions" and does not describe implemented fixes or updates.

Source: MIT Technology Review

Google’s AI music maker is coming to the Gemini app

info · news · industry
Feb 18, 2026

Google has added Lyria 3, an AI music generation model from DeepMind, to its Gemini chatbot app, letting users create 30-second music tracks by describing genres and moods or by providing images and videos as input. The feature is available in beta, globally and in multiple languages, to users aged 18 and older.

Source: The Verge (AI)

Google adds music-generation capabilities to the Gemini app

info · news · industry
Feb 18, 2026

Google has added music generation to its Gemini app using DeepMind's Lyria 3 model, which lets users create 30-second songs by describing what they want. The feature includes safeguards such as SynthID watermarks (digital markers that identify AI-generated content) and filters to prevent mimicking existing artists, plus the ability for users to upload tracks and ask Gemini whether they are AI-generated.

Fix: Google has implemented SynthID watermarks to identify AI-generated music and added filters that check outputs against existing content to prevent artist mimicry. The company is also adding capabilities within Gemini to identify AI-generated music, allowing users to upload tracks and ask whether they are AI-generated.

Source: TechCrunch

Kana emerges from stealth with $15M to build flexible AI agents for marketers

info · news · industry
Feb 18, 2026

Kana, a new marketing AI startup, has raised $15 million to build AI agents (software systems that can independently perform tasks) that help marketers with data analysis, campaign management, and audience targeting. The platform uses "loosely coupled" agents (modular AI components that work independently but can be connected together) that can be customized in real time and integrated into existing marketing software, while keeping humans in the loop to approve and adjust the AI's actions.

Source: TechCrunch

Microsoft says Office bug exposed customers’ confidential emails to Copilot AI

high · news · security · privacy
Feb 18, 2026

Microsoft discovered a bug that for several weeks allowed Copilot (an AI chat feature in Office software) to read and summarize customers' confidential emails without permission, even when data loss prevention policies (rules meant to block sensitive information from reaching AI systems) were in place. The bug affected emails labeled as confidential and was tracked internally as CW1226324.

Fix: Microsoft said it began rolling out a fix for the bug earlier in February.

Source: TechCrunch (Security)

OpenAI pushes into higher education as India seeks to scale AI skills

info · news · industry
Feb 18, 2026

OpenAI is partnering with six major Indian universities and academic institutions to integrate AI tools like ChatGPT into teaching and research, aiming to reach over 100,000 students, faculty, and staff within a year. The initiative focuses on embedding AI into core academic functions such as coding and research rather than merely providing standalone tool access, and includes faculty training and responsible-use frameworks. The move reflects broader competition among AI companies to shape how AI is taught and adopted in India, one of the world's largest education systems and ChatGPT's second-largest user base after the U.S.

Source: TechCrunch

CVE-2026-2654: A weakness has been identified in huggingface smolagents 1.24.0. Impacted is the function requests.get/requests.post of

medium · vulnerability · security
Feb 18, 2026
CVE-2026-2654

A server-side request forgery vulnerability (SSRF, where an attacker tricks a server into making unwanted web requests) was found in Hugging Face's smolagents version 1.24.0, specifically in the requests.get and requests.post functions of the LocalPythonExecutor component. The vulnerability can be exploited remotely and has been publicly disclosed; the vendor did not respond when contacted.

Source: NVD/CVE Database

Canva gets to $4B in revenue as LLM referral traffic rises

info · news · industry
Feb 18, 2026

Canva, a design platform company, reached $4 billion in annual revenue by the end of 2025, with growth driven partly by adoption of its AI tools. The company is repositioning itself as an AI platform with design tools and is focusing on traffic from LLMs (large language models, AI systems like ChatGPT that generate text) through chatbot integrations and efforts to appear in LLM search results.

Source: TechCrunch

Practical Insights Into AI System Product Quality Evaluation

info · research · Peer-Reviewed · safety
Feb 18, 2026

This research examines how ISO/IEC 25059 (an international standard for evaluating AI system quality) can be applied in practice, using an AI system that analyzes images of oil platform decks as a test case. The study highlights that when checking whether AI systems work correctly, teams need to carefully define what counts as acceptable performance, especially for safety-critical applications (systems where failures could cause serious harm), and should choose test cases (examples used to verify the system works) that realistically represent how the system will be used in the real world.

Source: IEEE Xplore (Security & AI Journals)

Unleashing the Power of Artificial Intelligence for Exploring Unrevealed and Unexplored Natural Resources

info · research · Peer-Reviewed
Feb 18, 2026

This article discusses how AI can improve the search for as-yet-undiscovered natural resources such as minerals and energy sources. AI techniques such as machine learning (systems that improve through experience), computer vision (technology that helps machines interpret images), and generative models (AI that can create new content), combined with remote sensing tools, can make resource exploration faster, safer, and less damaging to the environment.

Source: IEEE Xplore (Security & AI Journals)

SDkA: Synthetic Data Integrated k-Anonymity Model for Data Sharing With Improved Utility

info · research · Peer-Reviewed · security · privacy
Feb 18, 2026

SDkA is a new privacy protection method that combines synthetic data (artificially generated data that mimics real data patterns) with k-anonymity (a technique that makes individuals unidentifiable by ensuring each person's record is indistinguishable from at least k-1 other people's records). The method uses a conditional generative adversarial network (a type of AI that learns to create realistic synthetic data) to improve data quality and quantity while keeping data useful, and adds selective generalization to k-anonymity to avoid over-hiding information.

Source: IEEE Xplore (Security & AI Journals)

Service Mesh: The Rise of Event-Driven Asynchronous Mesh in Cloud Continuum

info · research · Peer-Reviewed
Feb 18, 2026

Modern cloud applications are built from many small services (microservices) that are complex to manage, so service meshes help control and coordinate them. Event meshes improve on this by allowing services to communicate asynchronously (without waiting for immediate responses) using events (messages triggered when something happens), which makes distributed systems (applications spread across multiple machines and locations) more reliable and easier to observe and secure.

Source: IEEE Xplore (Security & AI Journals)

Two Technology Wheels of Fortune

info · research · Peer-Reviewed · industry
Feb 18, 2026

Modern companies increasingly depend on AI and emerging technologies, making nearly every business a technology company in some way. Business leaders need a working understanding of how these technologies operate in order to guide their companies through digital transformation (the shift to digital tools and processes); without it, executives cannot anticipate how AI and other technologies will affect their organizations.

Source: IEEE Xplore (Security & AI Journals)

Page 48 of 159
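The policy such SSRF helpers enforce usually amounts to resolving the target host and refusing non-public addresses before fetching. Below is a minimal Python sketch of that check; the helper name `is_safe_url` is ours for illustration (OpenClaw's actual implementation is not Python and differs):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Basic SSRF guard: allow only http(s) URLs whose host resolves
    exclusively to public (globally routable) addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 80)
    except socket.gaierror:
        return False
    for _family, _type, _proto, _name, sockaddr in infos:
        addr = ipaddress.ip_address(sockaddr[0])
        # Rejects loopback (127.0.0.1), RFC 1918 ranges, and link-local
        # addresses like 169.254.169.254 (cloud metadata endpoints).
        if not addr.is_global:
            return False
    return True
```

Note that a resolve-then-fetch check is still exposed to DNS rebinding; hardened fetchers typically pin the vetted address for the actual connection and enforce size limits on the response, as the advisory's fix describes.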
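As a rough illustration of the pattern this fix describes (filter by parent PID, then SIGTERM with a SIGKILL fallback), here is a hedged Python sketch; the function names are ours and OpenClaw's actual implementation differs:

```python
import os
import signal
import subprocess
import time

def list_direct_children() -> list[int]:
    """PIDs whose parent is this process, from wide `ps` output
    (`ww` avoids truncating long command lines)."""
    out = subprocess.check_output(["ps", "axww", "-o", "pid=,ppid="], text=True)
    children = []
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] == str(os.getpid()):
            children.append(int(fields[0]))
    return children

def terminate_children(grace_seconds: float = 5.0) -> None:
    """SIGTERM direct children first; escalate to SIGKILL only for
    those still present after the grace period."""
    for pid in list_direct_children():
        try:
            os.kill(pid, signal.SIGTERM)
        except ProcessLookupError:
            continue  # already gone (e.g. the ps helper itself)
        deadline = time.monotonic() + grace_seconds
        while True:
            try:
                os.kill(pid, 0)  # signal 0 = existence probe
            except ProcessLookupError:
                break
            if time.monotonic() >= deadline:
                # Still present (possibly an unreaped zombie, for which
                # the extra SIGKILL is harmless): force-kill.
                try:
                    os.kill(pid, signal.SIGKILL)
                except ProcessLookupError:
                    pass
                break
            time.sleep(0.05)
```

The `ppid` check is what prevents the original bug: pattern-matching on command names alone can match other users' processes on a shared host, while restricting to direct children cannot.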
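The check this fix adds is standard OAuth 2.0 CSRF protection: mint an unguessable `state` when the flow starts and require the redirect URL to echo it back alongside the code. A small Python sketch of the idea (helper names are ours, not OpenClaw's):

```python
import secrets
from urllib.parse import parse_qs, urlparse

def make_authorization_state() -> str:
    """Unguessable `state` value minted when the login flow starts."""
    return secrets.token_urlsafe(32)

def extract_code(redirect_url: str, expected_state: str) -> str:
    """Accept only a full redirect URL carrying both `code` and `state`,
    and require the echoed state to match the one we issued."""
    params = parse_qs(urlparse(redirect_url).query)
    code = params.get("code", [None])[0]
    state = params.get("state", [None])[0]
    if code is None or state is None:
        raise ValueError("redirect URL must include both code and state")
    if not secrets.compare_digest(state, expected_state):
        raise ValueError("state mismatch: possible credential substitution")
    return code
```

Rejecting code-only pastes matters because a pasted bare code carries no proof that it came from the session this client started, which is exactly the substitution the advisory describes.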
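The underlying bug class is easy to reproduce in any language. The Python sketch below contrasts the vulnerable pattern (interpolating untrusted input into a shell command string) with the argv-list style of `execFileSync`; `echo` stands in for the macOS `security` binary so the example runs anywhere:

```python
import subprocess

PAYLOAD = '"; echo INJECTED; "'

def run_unsafe(value: str) -> str:
    # VULNERABLE: untrusted input is spliced into a shell command string,
    # so quotes and semicolons in `value` are interpreted by the shell.
    cmd = f'echo "{value}"'
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def run_safe(value: str) -> str:
    # SAFE (the execFileSync-style fix): an argv list and no shell, so
    # `value` reaches the program as one literal argument.
    return subprocess.run(["echo", value], capture_output=True, text=True).stdout
```

With the argv form there is no shell parsing step at all, so no amount of quoting trickery in the token can change which command runs.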
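The k-anonymity property described in the SDkA summary is mechanical to check: group records by their quasi-identifier values and require every group to contain at least k records. A minimal Python sketch (ours, not the SDkA authors' code):

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True iff every combination of quasi-identifier values occurs in at
    least k records, i.e. each person hides among k-1 lookalikes."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return bool(groups) and all(count >= k for count in groups.values())
```

SDkA's contribution, per the summary, is in how the groups are formed: synthetic records from a conditional GAN pad small groups, and selective generalization coarsens quasi-identifiers only where needed, rather than over-hiding everything.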