aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

Browse All

All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.

3162 items

India’s Sarvam wants to bring its AI models to feature phones, cars and smart glasses

info · news
industry
Feb 18, 2026

Sarvam, an Indian AI company, is deploying lightweight AI models on feature phones, cars, and smart glasses by using edge AI (running AI directly on devices rather than sending data to remote servers). The company's models require only megabytes of storage, work on existing phone processors, and can function offline, with partnerships including Nokia phones through HMD and car integration with Bosch.

TechCrunch

Keenadu: Android malware that comes preinstalled and can’t be removed by users

info · news
security
Feb 18, 2026

Keenadu is an Android malware that arrives preinstalled on devices through compromised firmware (the core system software that runs before the operating system), giving attackers deep control before users even finish setup. Because it embeds itself at the firmware level with elevated privileges (high-level system access), standard removal methods don't work, and it can steal biometric data, messages, banking credentials, and monitor browser searches. The malware has infected over 13,000 devices across multiple countries and can also spread through seemingly harmless apps in app stores.

AI Found Twelve New Vulnerabilities in OpenSSL

info · news
research · security

Microsoft says bug causes Copilot to summarize confidential emails

high · news
security · privacy

Perplexity joins anti-ad camp as AI companies battle over trust and revenue 

info · news
industry
Feb 18, 2026

Perplexity, an AI search startup, is removing ads from its service because company leaders worry that users won't trust AI assistants that try to sell them things. This decision highlights a bigger challenge for the AI industry: major companies like OpenAI and Anthropic are trying different approaches to make money, with some adding ads while others avoid them completely.

A new approach for GenAI risk protection

info · news
security · policy

The new paradigm for raising up secure software engineers

info · news
security · policy

U.S. court bars OpenAI from using ‘Cameo’

info · news
policy
Feb 18, 2026

A federal court ruled that OpenAI must stop using the name 'Cameo' for its AI video generation feature in Sora 2 (a tool that creates videos with digital likenesses of users), finding the name too similar to Cameo's existing celebrity video platform and likely to confuse users. OpenAI had already renamed the feature to 'Characters' after a temporary restraining order in November, and the company disputes the ruling, arguing no one can claim exclusive ownership of the word 'cameo.'

More than 50% of enterprise software could switch to AI, Mistral CEO says

info · news
industry
Feb 18, 2026

Mistral AI's CEO argues that over 50% of enterprise software could be replaced by AI systems, particularly SaaS (software as a service, cloud-based programs that companies pay to use) products, as AI enables faster custom application development. However, he notes that 'systems of records' software (programs that store and manage an organization's critical data) will likely remain important, since they work alongside AI rather than compete with it.

Tech billionaires fly in for Delhi AI expo as Modi jostles to lead in south

info · news
policy · industry

Meta’s new deal with Nvidia buys up millions of AI chips

info · news
industry
Feb 17, 2026

Meta has signed a multiyear agreement with Nvidia to buy millions of processors (CPUs and GPUs, which are specialized chips for computing tasks) for its data centers that run AI systems. This deal includes Nvidia's Grace and Vera CPUs and Blackwell and Rubin GPUs, with plans to add next-generation Vera CPUs in 2027. Nvidia claims these chips will improve performance-per-watt (how much computing work gets done per unit of electricity used) in Meta's data centers.

CVE-2026-22769: Dell RecoverPoint for Virtual Machines (RP4VMs) Use of Hard-coded Credentials Vulnerability

info · vulnerability
security
Feb 17, 2026
CVE-2026-22769 · EPSS: 34.2%

CVE-2021-22175: GitLab Server-Side Request Forgery (SSRF) Vulnerability

high · vulnerability
security
Feb 17, 2026
CVE-2021-22175 · EPSS: 73.5% · 🔥 Actively Exploited

Introducing Claude Sonnet 4.6

info · news
industry
Feb 17, 2026

Anthropic released Claude Sonnet 4.6, a new AI model that performs similarly to the more expensive Opus 4.5 while keeping Sonnet's cheaper pricing ($3 per million input tokens, $15 per million output tokens). The model has a knowledge cutoff (the date of information it was trained on) of August 2025 and supports up to 200,000 input tokens by default, with the option to use 1 million tokens in beta at higher cost.
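At those list prices, per-request cost is straightforward arithmetic; a quick sketch, with the rates hard-coded from the figures above (which may change):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = 3.0, output_rate: float = 15.0) -> float:
    """Cost in USD, with rates in dollars per million tokens
    (defaults are the Sonnet 4.6 list prices quoted above)."""
    return input_tokens / 1e6 * input_rate + output_tokens / 1e6 * output_rate

# A 10,000-token prompt with a 2,000-token reply costs about $0.06.
```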

Tesla adding Grok AI chatbot to its cars in the UK, Europe amid regulatory probes

info · news
safety · policy

GHSA-8jpq-5h99-ff5r: OpenClaw has a local file disclosure via sendMediaFeishu in Feishu extension

high · vulnerability
security
Feb 17, 2026
CVE-2026-26321

The Feishu extension in OpenClaw had a vulnerability where the `sendMediaFeishu` function could be tricked into reading files directly from the computer's filesystem because it treated attacker-controlled file paths as trusted input. An attacker who could influence the tool's behavior (either directly or through prompt injection, where malicious instructions are hidden in the AI's input) could steal sensitive files such as `/etc/passwd`.
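The underlying bug class — attacker-controlled paths reaching a raw filesystem read — is commonly mitigated by resolving any requested path against an allowed root before reading it. A minimal sketch; the function name and media root are illustrative, not OpenClaw's actual API:

```python
from pathlib import Path

def safe_read_media(user_path: str, allowed_root: str = "/var/media") -> bytes:
    """Resolve the requested path and refuse anything outside allowed_root."""
    root = Path(allowed_root).resolve()
    target = (root / user_path).resolve()  # normalizes ../ sequences
    if not target.is_relative_to(root):    # Python 3.9+
        raise PermissionError(f"path escapes media root: {user_path}")
    return target.read_bytes()
```

Note that the check runs on the *resolved* path, so `../` tricks and absolute paths both fail the containment test.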

GHSA-g27f-9qjv-22pm: OpenClaw log poisoning (indirect prompt injection) via WebSocket headers

low · vulnerability
security
Feb 17, 2026

OpenClaw versions before 2026.2.13 logged WebSocket request headers (like Origin and User-Agent) without cleaning them up, allowing attackers to inject malicious text into logs. If those logs are later read by an LLM (large language model, an AI system that processes text) for tasks like debugging, the attacker's injected text could trick the AI into doing something unintended (a technique called indirect prompt injection or log poisoning).
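The general defense against this kind of log poisoning is to sanitize header values before they are written: strip control characters (so a value cannot forge new log lines) and cap the length. A minimal sketch with illustrative names, not OpenClaw's actual code:

```python
import re

# Control characters (including CR/LF) let a header value forge extra
# log lines or smuggle text into anything that later reads the logs.
_CONTROL_CHARS = re.compile(r"[\x00-\x1f\x7f]")

def sanitize_header(value: str, max_len: int = 256) -> str:
    """Replace control characters with spaces and truncate long values."""
    return _CONTROL_CHARS.sub(" ", value)[:max_len]
```

The other half of the advisory's advice sits on the consumer side: anything that feeds logs to an LLM should treat them as untrusted input, never as instructions.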

Cyber attacks enabled by basic failings, Palo Alto analysis finds

info · news
security · industry

Google announces dates for I/O 2026

info · news
industry
Feb 17, 2026

Google has announced that Google I/O 2026, its annual developer conference, will be held May 19-20 in Mountain View, California, with both in-person and online attendance options. The company plans to showcase AI advances and product updates across its services, including Gemini (Google's AI assistant) and Android, through keynotes, demos, and interactive sessions.

Tech Life

info · news
industry
Feb 17, 2026

A BBC program discusses engaging chatbots and interviews NVIDIA about AI chat technology, exploring how to make AI conversations sound more human and examining emotional connections between people and AI systems. The program also covers how new technology is assisting stroke survivors.

CSO Online
Feb 18, 2026

An AI system called AISLE discovered twelve previously unknown vulnerabilities (zero-day vulnerabilities, or security flaws unknown to software maintainers before disclosure) in OpenSSL, a widely-used cryptography library, with the findings announced in January 2026. The vulnerabilities were serious, including one with a CVSS score (a 0-10 severity rating) of 9.8 out of 10, and some had existed undetected for over 25 years despite extensive testing and audits. In five cases, the AI system also directly proposed patches that were accepted into the official OpenSSL release.

Schneier on Security
Feb 18, 2026

Microsoft discovered a bug in Microsoft 365 Copilot (an AI assistant integrated into Office apps) that caused it to summarize confidential emails starting in late January, even though those emails carried sensitivity labels (tags marking them as restricted) and data loss prevention (DLP, security rules that prevent sensitive data from leaving an organization) policies were configured to block this. A code error allowed emails in the Sent Items and Drafts folders to be processed by Copilot despite the confidentiality protections.

Fix: Microsoft began rolling out a fix in early February and continued monitoring the deployment as of the article date, reaching out to affected users to verify the fix was working.

BleepingComputer
The Verge (AI)
Feb 18, 2026

Organizations face new security risks from generative AI (GenAI, AI systems that create text, images, and other content) tools like ChatGPT, Gemini, and Claude, where employees might accidentally upload sensitive data like personally identifiable information (PII, private details about individuals), protected health information (PHI, medical records), or company secrets. Traditional data loss prevention (DLP, tools that monitor and block sensitive data from leaving a company) solutions are expensive and difficult to manage, so most organizations have GenAI policies but lack the technology to enforce them.

Fix: The source describes two approaches. Solution 1 is to implement enterprise licenses for approved GenAI solutions (such as ChatGPT Enterprise or Microsoft 365 Copilot), which include built-in security and DLP controls, while blocking non-approved GenAI tools with internet content filtering tools like Cisco Umbrella, iboss, DNSFilter, or WebTitan. Solution 2 is to build GenAI DLP controls into an XDR/MDR (extended detection and response / managed detection and response, security platforms that combine endpoint, network, and threat intelligence monitoring) solution to detect, analyze, and respond to sensitive data loss risks.
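As an illustration of the kind of check a GenAI DLP control performs, here is a deliberately simplified pattern scan run on text before it leaves the organization. Real DLP engines use far richer pattern sets plus validation (e.g. Luhn checks) to cut false positives; these toy rules are only a sketch:

```python
import re

# Toy detectors for three common PII categories; patterns are simplified.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(text: str) -> list[str]:
    """Return the PII categories detected; an empty list means the
    text is (by these toy rules) clear to send to an external model."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]
```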

CSO Online
Feb 18, 2026

As AI coding assistants rapidly increase developer productivity (with usage expected to jump from 14% to 90% by 2028), security teams face a growing challenge: more code is being produced faster, with less time for review. Traditional developer security training focused on catching common code-level flaws like SQL injection (inserting malicious database commands into input fields) is becoming less critical, since AI tools and automated scanning will increasingly handle these line-by-line vulnerabilities. Security training therefore needs to shift toward teaching developers to validate AI-generated code in its full deployment context and to understand threat modeling (analyzing how systems could be attacked at an architectural level), rather than memorizing specific coding rules.
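SQL injection, named above as the archetypal line-level flaw, is worth one concrete illustration of the mechanical fix that scanners and AI tools enforce: parameterized queries. A sketch using Python's built-in sqlite3 (the table is invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn: sqlite3.Connection, name: str) -> list:
    # Vulnerable version: f"... WHERE name = '{name}'" lets input like
    # ' OR '1'='1 rewrite the query. With a ? placeholder the driver
    # binds the value, so quotes in `name` remain plain data.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()
```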

CSO Online
TechCrunch
CNBC Technology
Feb 18, 2026

Tech billionaires from major AI companies like Google, Anthropic, and OpenAI are attending an AI summit in Delhi hosted by India's Prime Minister Narendra Modi, where leaders from developing countries are trying to gain influence over AI technology development. The week-long event brings together thousands of tech executives, government officials, and AI safety experts (people focused on making sure AI systems are safe and beneficial) from wealthy tech companies and poorer nations to discuss AI's future.

The Guardian Technology
The Verge (AI)

Dell RecoverPoint for Virtual Machines (RP4VMs) has a vulnerability where passwords are hard-coded (built directly into the software rather than set by users), allowing unauthenticated attackers to remotely access the system and gain root-level persistence (permanent, privileged control of the machine). This vulnerability is currently being actively exploited by attackers.

Fix: Apply mitigations per vendor instructions (see Dell support documentation at https://www.dell.com/support/kbdoc/en-us/000426773/dsa-2026-079 and https://www.dell.com/support/kbdoc/en-us/000426742/recoverpoint-for-vms-apply-the-remediation-script-for-dsa), follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable. Due date: 2026-02-21.

CISA Known Exploited Vulnerabilities
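Hard-coded credentials are dangerous precisely because every installation shares the same secret, and the secret ships inside the product. The generic remediation pattern (not Dell's actual fix) is to require deployment-supplied secrets, for example from the environment; the variable names below are illustrative:

```python
import os

def load_service_credentials() -> tuple:
    """Fail closed if credentials were not supplied at deployment time,
    instead of falling back to a secret baked into the code."""
    user = os.environ.get("SERVICE_USER")
    password = os.environ.get("SERVICE_PASSWORD")
    if not user or not password:
        raise RuntimeError("service credentials not configured")
    return user, password
```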

GitLab has a server-side request forgery vulnerability (SSRF, a flaw that allows attackers to make requests to internal networks on behalf of the server) that can be triggered when webhook functionality is enabled. This vulnerability is actively being exploited by attackers in the wild.

Fix: Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable.

CISA Known Exploited Vulnerabilities
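A typical defense for webhook-style SSRF is to resolve the target host and refuse private, loopback, or link-local addresses before the server fetches anything. A minimal sketch (it does not cover DNS rebinding, where the DNS answer changes between the check and the fetch):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_webhook_url(url: str) -> bool:
    """Reject webhook targets that would let the server reach
    internal infrastructure on an attacker's behalf."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable host: refuse
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True
```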
Simon Willison's Weblog
Feb 17, 2026

Tesla is adding Grok, an AI chatbot from Elon Musk's company xAI, to its vehicle infotainment systems (the dashboard computers that control entertainment and information) in the U.K. and nine other European markets. However, Grok has faced multiple regulatory investigations across Europe and Asia because it lacks safety guardrails, allowing users to create deepfake explicit images (fake videos or photos that look real but are computer-generated) of real people without consent, generate hate speech, and interact inappropriately with minors. Safety researchers also worry that adding chatbots to cars creates a "distraction layer" that pulls drivers' attention away from the road.

CNBC Technology

Fix: Upgrade to OpenClaw version 2026.2.14 or newer. The fix removes direct local file reads and routes media loading through hardened helpers that enforce local-root restrictions.

GitHub Advisory Database

Fix: Upgrade to `openclaw@2026.2.13` or later. Alternatively, if you cannot upgrade immediately, the source mentions two workarounds: treat logs as untrusted input when using AI-assisted debugging by sanitizing and escaping them, and do not auto-execute instructions derived from logs; or restrict gateway network access and apply reverse-proxy limits on header size.

GitHub Advisory Database
Feb 17, 2026

Cyberattacks are accelerating due to AI, with threat actors moving from initial system access to stealing data in as little as 72 minutes, but most successful attacks exploit basic security failures like weak authentication (verification of user identity), poor visibility into systems, and misconfigured security tools rather than sophisticated exploits. Identity management is a critical weakness, with excessive permissions affecting 99% of analyzed cloud accounts and identity-based attacks playing a role in 90% of incidents investigated.

Fix: Palo Alto Networks launched Unit 42 XSIAM 2.0, an expanded managed SOC (Security Operations Center, a team that monitors and responds to threats) service, which the company claims includes complete onboarding, threat hunting and response, and faster modeling of attack patterns than traditional SOCs.

CSO Online
The Verge (AI)
BBC Technology