aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,727
Last 24 hours: 38
Last 7 days: 173
Daily Briefing: Wednesday, April 1, 2026

Attack Surface Management Tools Now Using AI Agents: A new buying guide highlights that Cyber Asset Attack Surface Management (CAASM) and External Attack Surface Management (EASM) tools are increasingly using agentic AI (AI systems that can take independent actions) to automatically find and reduce security risks across a company's digital resources.

Latest Intel

01

U.S. court bars OpenAI from using ‘Cameo’

policy
Feb 18, 2026

A federal court ruled that OpenAI must stop using the name 'Cameo' for its AI video generation feature in Sora 2 (a tool that creates videos with digital likenesses of users), finding the name too similar to Cameo's existing celebrity video platform and likely to confuse users. OpenAI had already renamed the feature to 'Characters' after a temporary restraining order in November, and the company disputes the ruling, arguing no one can claim exclusive ownership of the word 'cameo.'

TechCrunch

Critical This Week (5 issues)

critical
CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/
NVD/CVE Database · Mar 31, 2026
02

More than 50% of enterprise software could switch to AI, Mistral CEO says

industry
Feb 18, 2026

Mistral AI's CEO argues that over 50% of enterprise software could be replaced by AI systems, particularly SaaS (software as a service, cloud-based programs that companies pay to use) products, as AI enables faster custom application development. However, he notes that 'system of record' software (programs that store and manage an organization's critical data) will likely remain important, working alongside AI rather than competing with it.

CNBC Technology
03

Tech billionaires fly in for Delhi AI expo as Modi jostles to lead in south

policy · industry
Feb 18, 2026

Tech billionaires from major AI companies like Google, Anthropic, and OpenAI are attending an AI summit in Delhi hosted by India's Prime Minister Narendra Modi, where leaders from developing countries are trying to gain influence over AI technology development. The week-long event brings together thousands of tech executives, government officials, and AI safety experts (people focused on making sure AI systems are safe and beneficial) from wealthy tech companies and poorer nations to discuss AI's future.

The Guardian Technology
04

Meta’s new deal with Nvidia buys up millions of AI chips

industry
Feb 17, 2026

Meta has signed a multiyear agreement with Nvidia to buy millions of processors (CPUs and GPUs, which are specialized chips for computing tasks) for its data centers that run AI systems. This deal includes Nvidia's Grace and Vera CPUs and Blackwell and Rubin GPUs, with plans to add next-generation Vera CPUs in 2027. Nvidia claims these chips will improve performance-per-watt (how much computing work gets done per unit of electricity used) in Meta's data centers.

The Verge (AI)
05

CVE-2021-22175: GitLab Server-Side Request Forgery (SSRF) Vulnerability

security
Feb 17, 2026

GitLab has a server-side request forgery vulnerability (SSRF, a flaw that allows attackers to make requests to internal networks on behalf of the server) that can be triggered when webhook functionality is enabled. This vulnerability is actively being exploited by attackers in the wild.

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.

CISA Known Exploited Vulnerabilities
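The SSRF class described in this entry is usually mitigated by validating outbound request targets before the server fetches them. Below is a generic, illustrative guard, not GitLab's actual patch; the function name and the exact address checks are assumptions for the sketch:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_webhook_url(url: str) -> bool:
    """Illustrative SSRF guard: reject webhook targets that resolve to
    loopback, private, link-local, or reserved addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve every address the hostname maps to, not just the first.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_loopback or addr.is_private or addr.is_link_local or addr.is_reserved:
            return False
    return True

print(is_safe_webhook_url("http://127.0.0.1/admin"))   # False: loopback
print(is_safe_webhook_url("http://169.254.169.254/"))  # False: link-local metadata range
```

Note that a check like this must run at request time (and ideally re-check on redirects), since DNS rebinding can change what a hostname resolves to between validation and use.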
06

Introducing Claude Sonnet 4.6

industry
Feb 17, 2026

Anthropic released Claude Sonnet 4.6, a new AI model that performs similarly to the more expensive Opus 4.5 while keeping Sonnet's cheaper pricing ($3 per million input tokens, $15 per million output tokens). The model has a knowledge cutoff (the date of information it was trained on) of August 2025 and supports up to 200,000 input tokens by default, with the option to use 1 million tokens in beta at higher cost.

Simon Willison's Weblog
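As a quick sanity check on the quoted rates, the per-request cost arithmetic works out as follows (the token counts in the example are invented for illustration):

```python
# Quoted Sonnet 4.6 rates: $3 per million input tokens, $15 per million output tokens.
INPUT_RATE = 3.00 / 1_000_000
OUTPUT_RATE = 15.00 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the quoted per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A full 200,000-token default context with a 2,000-token reply:
print(round(request_cost(200_000, 2_000), 2))  # 0.63
```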
07

Tesla adding Grok AI chatbot to its cars in the UK, Europe amid regulatory probes

safety · policy
Feb 17, 2026

Tesla is adding Grok, an AI chatbot from Elon Musk's company xAI, to its vehicle infotainment systems (the dashboard computers that control entertainment and information) in the U.K. and nine other European markets. However, Grok has faced multiple regulatory investigations across Europe and Asia because it lacks safety guardrails, allowing users to create explicit deepfake images (fake videos or photos that look real but are computer-generated) of real people without consent, generate hate speech, and interact inappropriately with minors. Safety researchers also worry that in-car chatbots add a "distraction layer" that could pull drivers' attention from the road.

CNBC Technology
08

GHSA-8jpq-5h99-ff5r: OpenClaw has a local file disclosure via sendMediaFeishu in Feishu extension

security
Feb 17, 2026

The Feishu extension in OpenClaw had a vulnerability where the `sendMediaFeishu` function treated attacker-controlled file paths as media input, allowing it to be tricked into reading files directly from the host's filesystem. An attacker who could influence how the tool behaves (either directly or through prompt injection, where malicious instructions are hidden in the AI's input) could steal sensitive files like `/etc/passwd`.

Fix: Upgrade to OpenClaw version 2026.2.14 or newer. The fix removes direct local file reads and routes media loading through hardened helpers that enforce local-root restrictions.

GitHub Advisory Database
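The "local-root restriction" mentioned in the fix can be sketched generically: resolve the requested path and refuse anything that escapes an allowed media directory. This is not OpenClaw's actual helper; the function name and root directory are illustrative:

```python
import os

def resolve_under_root(root: str, user_path: str) -> str:
    """Illustrative local-root restriction: resolve `user_path` relative to
    `root` and refuse any result (including via `..` or symlinks) that
    lands outside `root`."""
    root_real = os.path.realpath(root)
    # Strip a leading "/" so absolute paths are treated as relative to root.
    candidate = os.path.realpath(os.path.join(root_real, user_path.lstrip("/")))
    if os.path.commonpath([root_real, candidate]) != root_real:
        raise ValueError(f"path escapes media root: {user_path}")
    return candidate
```

A traversal payload like `../../etc/passwd` resolves to a path outside the root and is rejected, while ordinary relative paths resolve normally.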
09

GHSA-g27f-9qjv-22pm: OpenClaw log poisoning (indirect prompt injection) via WebSocket headers

security
Feb 17, 2026

OpenClaw versions before 2026.2.13 logged WebSocket request headers (like Origin and User-Agent) without cleaning them up, allowing attackers to inject malicious text into logs. If those logs are later read by an LLM (large language model, an AI system that processes text) for tasks like debugging, the attacker's injected text could trick the AI into doing something unintended (a technique called indirect prompt injection or log poisoning).

Fix: Upgrade to `openclaw@2026.2.13` or later. Alternatively, if you cannot upgrade immediately, the source mentions two workarounds: treat logs as untrusted input when using AI-assisted debugging by sanitizing and escaping them, and do not auto-execute instructions derived from logs; or restrict gateway network access and apply reverse-proxy limits on header size.

GitHub Advisory Database
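One way to implement the "sanitize and escape" workaround is to strip control characters and clamp header length before anything is written to the log. This is a generic sketch, not OpenClaw's actual patch; the names and the 256-byte limit are assumptions. Note it only prevents log-line forgery (injected CR/LF and control bytes); per the advisory, instructions derived from logs still should not be auto-executed:

```python
import re

MAX_HEADER_LEN = 256                       # illustrative clamp, not from the advisory
_CONTROL = re.compile(r"[\x00-\x1f\x7f]")  # CR/LF and other control characters

def sanitize_header(value: str) -> str:
    """Replace control characters with spaces and truncate, so a crafted
    Origin/User-Agent value cannot inject fake lines into the log."""
    return _CONTROL.sub(" ", value)[:MAX_HEADER_LEN]

# A CRLF-injected User-Agent collapses to a single log-safe line:
print(sanitize_header("Mozilla/5.0\r\nIGNORE ALL PREVIOUS INSTRUCTIONS"))
```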
10

Cyber attacks enabled by basic failings, Palo Alto analysis finds

security · industry
Feb 17, 2026

Cyberattacks are accelerating due to AI, with threat actors moving from initial system access to stealing data in as little as 72 minutes, but most successful attacks exploit basic security failures like weak authentication (verification of user identity), poor visibility into systems, and misconfigured security tools rather than sophisticated exploits. Identity management is a critical weakness, with excessive permissions affecting 99% of analyzed cloud accounts and identity-based attacks playing a role in 90% of incidents investigated.

Fix: Palo Alto Networks launched Unit 42 XSIAM 2.0 (an expanded managed SOC service, which is a Security Operations Center or team that monitors and responds to threats), which the company claims includes complete onboarding, threat hunting and response, and faster modeling of attack patterns compared to traditional SOCs.

CSO Online
critical
CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_
NVD/CVE Database · Mar 30, 2026

critical
CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis
NVD/CVE Database · Mar 27, 2026

critical
Attackers exploit critical Langflow RCE within hours as CISA sounds alarm
CSO Online · Mar 27, 2026

critical
CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability
CISA Known Exploited Vulnerabilities · Mar 26, 2026