aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Maintained by

Truong (Jack) Luu

Information Systems Researcher

Industry News

New tools, products, platforms, funding rounds, and company developments in AI security.

1245 items

AI Found Twelve New Vulnerabilities in OpenSSL

info · news
research · security
Feb 18, 2026

An AI system called AISLE discovered twelve previously unknown vulnerabilities (zero-day vulnerabilities, or security flaws unknown to software maintainers before disclosure) in OpenSSL, a widely used cryptography library, with the findings announced in January 2026. The vulnerabilities were serious, including one with a CVSS score (a 0-10 severity rating) of 9.8 out of 10, and some had existed undetected for over 25 years despite extensive testing and audits. In five cases, the AI system also directly proposed patches that were accepted into the official OpenSSL release.

Schneier on Security

Microsoft says bug causes Copilot to summarize confidential emails

high · news
security · privacy

Perplexity joins anti-ad camp as AI companies battle over trust and revenue 

info · news
industry
Feb 18, 2026

Perplexity, an AI search startup, is removing ads from its service because company leaders worry that users won't trust AI assistants that try to sell them things. This decision highlights a bigger challenge for the AI industry: major companies like OpenAI and Anthropic are trying different approaches to make money, with some adding ads while others avoid them completely.

A new approach for GenAI risk protection

info · news
security · policy

The new paradigm for raising up secure software engineers

info · news
security · policy

U.S. court bars OpenAI from using ‘Cameo’

info · news
policy
Feb 18, 2026

A federal court ruled that OpenAI must stop using the name 'Cameo' for its AI video generation feature in Sora 2 (a tool that creates videos with digital likenesses of users), finding the name too similar to Cameo's existing celebrity video platform and likely to confuse users. OpenAI had already renamed the feature to 'Characters' after a temporary restraining order in November, and the company disputes the ruling, arguing no one can claim exclusive ownership of the word 'cameo.'

More than 50% of enterprise software could switch to AI, Mistral CEO says

info · news
industry
Feb 18, 2026

Mistral AI's CEO argues that over 50% of enterprise software could be replaced by AI systems, particularly SaaS (software as a service, cloud-based programs that companies pay to use) products, as AI enables faster custom application development. However, he notes that 'systems of record' software (programs that store and manage an organization's critical data) will likely remain important, since it works alongside AI rather than competing with it.

Tech billionaires fly in for Delhi AI expo as Modi jostles to lead in south

info · news
policy · industry

Meta’s new deal with Nvidia buys up millions of AI chips

info · news
industry
Feb 17, 2026

Meta has signed a multiyear agreement with Nvidia to buy millions of processors (CPUs and GPUs, which are specialized chips for computing tasks) for its data centers that run AI systems. This deal includes Nvidia's Grace and Vera CPUs and Blackwell and Rubin GPUs, with plans to add next-generation Vera CPUs in 2027. Nvidia claims these chips will improve performance-per-watt (how much computing work gets done per unit of electricity used) in Meta's data centers.

Introducing Claude Sonnet 4.6

info · news
industry
Feb 17, 2026

Anthropic released Claude Sonnet 4.6, a new AI model that performs similarly to the more expensive Opus 4.5 while keeping Sonnet's cheaper pricing ($3 per million input tokens, $15 per million output tokens). The model has a knowledge cutoff (the date of information it was trained on) of August 2025 and supports up to 200,000 input tokens by default, with the option to use 1 million tokens in beta at higher cost.
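
The stated prices make per-request cost simple to estimate. A minimal sketch, using the per-token prices quoted in the article (the token counts in the example are hypothetical, and real API bills may include other factors such as caching discounts):

```python
# Estimate the cost of one Claude Sonnet 4.6 API request from its token counts.
# Prices per the article: $3 per million input tokens, $15 per million output tokens.

INPUT_PRICE_PER_MTOK = 3.00    # dollars per 1M input tokens
OUTPUT_PRICE_PER_MTOK = 15.00  # dollars per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK

# Hypothetical request: 150k tokens of context in, 2k tokens generated.
print(f"${request_cost(150_000, 2_000):.2f}")  # → $0.48
```

At these rates, even a request that nearly fills the 200,000-token default context window costs well under a dollar.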

Tesla adding Grok AI chatbot to its cars in the UK, Europe amid regulatory probes

info · news
safety · policy

Cyber attacks enabled by basic failings, Palo Alto analysis finds

info · news
security · industry

Google announces dates for I/O 2026

info · news
industry
Feb 17, 2026

Google has announced that Google I/O 2026, its annual developer conference, will be held May 19-20 in Mountain View, California, with both in-person and online attendance options. The company plans to showcase AI advances and product updates across its services, including Gemini (Google's AI assistant) and Android, through keynotes, demos, and interactive sessions.

Tech Life

info · news
industry
Feb 17, 2026

This BBC Radio program discusses engaging chatbots and AI chat technology, including conversations with NVIDIA about making AI sound more human and exploring emotional connections with AI. The episode also covers how new technology is assisting stroke survivors.

Anthropic releases Claude Sonnet 4.6, continuing breakneck pace of AI model releases

info · news
industry
Feb 17, 2026

Anthropic released Claude Sonnet 4.6, a new AI model that performs better at coding, computer use, and data processing tasks, making it the default option for free and paid users. This launch reflects the intense competition in the AI industry, with Anthropic releasing two major models in less than two weeks to keep pace with rivals like OpenAI and Google.

Figma partners with Anthropic to turn AI-generated code into editable designs

info · news
industry
Feb 17, 2026

Figma has partnered with Anthropic to launch a feature called 'Code to Canvas' that converts AI-generated code (from tools like Claude Code) into editable designs within Figma's platform. This allows teams to take working interfaces created by AI agents, refine them, compare options, and make design decisions together in Figma, bridging the gap between AI coding tools and design workflows.

WordPress’s new AI assistant will let users edit their sites with prompts

info · news
industry
Feb 17, 2026

WordPress has introduced a new AI assistant that lets users edit their websites by typing natural language requests (instructions written in plain English rather than code) instead of manually making changes. The AI can edit and translate text, generate and modify images, and adjust site elements like creating pages or changing fonts, accessible through the site editor sidebar and block notes feature (a commenting tool added in WordPress 6.9).

Researchers Show Copilot and Grok Can Be Abused as Malware C2 Proxies

high · news
security · safety

Anthropic releases Sonnet 4.6

info · news
industry
Feb 17, 2026

Anthropic released Sonnet 4.6, an updated version of its mid-size AI model with improvements in coding, instruction-following, and computer use (the ability to interact with computer interfaces). The new model features a context window (the amount of text an AI can read and remember at once) of 1 million tokens, double the previous size, allowing it to process entire codebases or dozens of research papers in one request.

Microsoft says bug causes Copilot to summarize confidential emails

Feb 18, 2026

Microsoft discovered a bug in Microsoft 365 Copilot (an AI assistant integrated into Office apps) that had caused it to summarize confidential emails since late January, even though those emails carried sensitivity labels (tags marking them as restricted) and data loss prevention policies (DLP, security rules that prevent sensitive data from leaving an organization) were configured to block this. A code error allowed emails in the Sent Items and Drafts folders to be processed by Copilot despite the confidentiality protections.

Fix: Microsoft began rolling out a fix in early February and, as of the article date, was monitoring the deployment and reaching out to affected users to verify the fix was working.

BleepingComputer
The Verge (AI)
A new approach for GenAI risk protection

Feb 18, 2026

Organizations face new security risks from generative AI (GenAI, AI systems that create text, images, and other content) tools like ChatGPT, Gemini, and Claude, where employees might accidentally upload sensitive data such as personally identifiable information (PII, private details about individuals), protected health information (PHI, medical records), or company secrets. Traditional data loss prevention (DLP, tools that monitor and block sensitive data from leaving a company) solutions are expensive and difficult to manage, so most organizations have GenAI policies but lack the technology to enforce them.

Fix: The source describes two approaches. Solution 1 is to license approved enterprise GenAI products (such as ChatGPT Enterprise or Microsoft 365 Copilot), which include built-in security and DLP controls, while blocking non-approved GenAI tools with internet content filtering products like Cisco Umbrella, iBoss, DNSFilter, or WebTitan. Solution 2 is to build GenAI DLP controls into an XDR/MDR (extended detection and response / managed detection and response, security platforms that combine endpoint, network, and threat intelligence monitoring) solution to detect, analyze, and respond to sensitive data loss risks.

CSO Online
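
The detection side of such GenAI DLP controls typically starts with pattern matching on outbound prompts. A minimal sketch of that idea, where the regexes and category names are illustrative assumptions rather than any vendor's actual rules:

```python
import re

# Illustrative patterns for two PII categories mentioned in the article.
# Real DLP engines use far more robust detectors (checksums, context, ML).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the PII categories detected in a prompt bound for a GenAI tool."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

hits = scan_prompt("Summarize this: jane.doe@example.com, SSN 123-45-6789")
print(hits)  # → ['email', 'ssn']
```

A filtering proxy or XDR agent could block or redact a prompt whenever `scan_prompt` returns a non-empty list, which is essentially what the enforcement gap described above calls for.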
The new paradigm for raising up secure software engineers

Feb 18, 2026

As AI coding assistants rapidly increase developer productivity (with usage expected to jump from 14% to 90% of developers by 2028), security teams face a growing challenge: more code is produced faster, with less time for review. Traditional developer security training focused on catching common code-level flaws like SQL injection (inserting malicious database commands into input fields) is becoming less critical, since AI tools and automated scanning will increasingly handle these line-by-line vulnerabilities. Security training instead needs to shift toward teaching developers to validate AI-generated code in its full deployment context and to understand threat modeling (analyzing how systems could be attacked at an architectural level), rather than memorizing specific coding rules.

CSO Online
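
The code-level flaw class the article treats as increasingly automatable, SQL injection, fits in a few lines. A self-contained sketch using Python's sqlite3 (the table and inputs are made up for illustration):

```python
import sqlite3

# Toy database with two users.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

malicious = "nobody' OR '1'='1"

# Vulnerable: attacker input is spliced directly into the SQL string,
# so the injected OR '1'='1' clause matches every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'"
).fetchall()
print(vulnerable)  # returns both rows

# Safe: a parameterized query treats the input as a literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(safe)  # returns no rows
```

This is the kind of line-level flaw that scanners catch mechanically; the architectural question the article raises, whether that query should be reachable with attacker-controlled input at all, is what threat modeling addresses.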
TechCrunch
CNBC Technology
Tech billionaires fly in for Delhi AI expo as Modi jostles to lead in south

Feb 18, 2026

Tech billionaires from major AI companies like Google, Anthropic, and OpenAI are attending an AI summit in Delhi hosted by India's Prime Minister Narendra Modi, where leaders from developing countries are trying to gain influence over AI technology development. The week-long event brings together thousands of tech executives, government officials, and AI safety experts (people focused on making sure AI systems are safe and beneficial) from wealthy tech companies and poorer nations to discuss AI's future.

The Guardian Technology
The Verge (AI)
Simon Willison's Weblog
Tesla adding Grok AI chatbot to its cars in the UK, Europe amid regulatory probes

Feb 17, 2026

Tesla is adding Grok, an AI chatbot from Elon Musk's company xAI, to its vehicle infotainment systems (the dashboard computers that handle entertainment and information) in the U.K. and nine other European markets. However, Grok has faced multiple regulatory investigations across Europe and Asia because it lacks safety guardrails, allowing users to create explicit deepfake images (fake photos or videos that look real but are computer-generated) of real people without consent, generate hate speech, and interact inappropriately with minors. Safety researchers also worry that adding chatbots to cars creates a "distraction layer" that could divert drivers' attention from the road.

CNBC Technology
Cyber attacks enabled by basic failings, Palo Alto analysis finds

Feb 17, 2026

Cyberattacks are accelerating due to AI, with threat actors moving from initial system access to stealing data in as little as 72 minutes, yet most successful attacks exploit basic security failures, such as weak authentication (verification of user identity), poor visibility into systems, and misconfigured security tools, rather than sophisticated exploits. Identity management is a critical weakness: excessive permissions affect 99% of analyzed cloud accounts, and identity-based attacks play a role in 90% of investigated incidents.

Fix: Palo Alto Networks launched Unit 42 XSIAM 2.0, an expanded managed SOC (Security Operations Center, a team that monitors and responds to threats) service, which the company claims includes complete onboarding, threat hunting and response, and faster modeling of attack patterns than traditional SOCs.

CSO Online
The Verge (AI)
BBC Technology
CNBC Technology
CNBC Technology
The Verge (AI)
Researchers Show Copilot and Grok Can Be Abused as Malware C2 Proxies

Feb 17, 2026

Researchers discovered that AI assistants like Microsoft Copilot and Grok, which can browse the web and fetch URLs, can be abused as command-and-control (C2) proxies: a stealthy communication channel that lets attackers send commands to malware and receive data back while blending in with normal business traffic. The technique, which requires the attacker to have already compromised a machine, works without API keys or accounts, making traditional countermeasures like key revocation ineffective. It shows that AI tools can be weaponized not just to generate malware but to act as intelligent intermediaries, helping attackers adapt their strategies in real time based on information from the compromised system.

The Hacker News
TechCrunch