aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Maintained by

Truong (Jack) Luu

Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710
Last 24 hours: 1
Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

Page 200 of 371
01

Canva gets to $4B in revenue as LLM referral traffic rises

industry
Feb 18, 2026

Canva, a design platform company, reached $4 billion in annual revenue by the end of 2025, with growth driven partly by adoption of its AI tools. The company is repositioning itself as an AI platform with design tools, and is focusing on attracting traffic from LLMs (large language models, AI systems like ChatGPT that generate text) through chatbot integrations and efforts to appear in LLM search results.

TechCrunch
02

SDkA: Synthetic Data Integrated k-Anonymity Model for Data Sharing With Improved Utility

security, privacy
Feb 18, 2026

SDkA is a new privacy protection method that combines synthetic data (artificially generated data that mimics real data patterns) with k-anonymity (a technique that makes individuals unidentifiable by ensuring each person's data looks like at least k other people's data). The method uses a conditional generative adversarial network (a type of AI that learns to create realistic synthetic data) to improve data quality and quantity while keeping data useful, and adds selective generalization to k-anonymity to avoid over-hiding information.
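The k-anonymity requirement described above can be made concrete with a minimal sketch. This is illustrative only, not the SDkA method itself (its conditional GAN and selective generalization are not reproduced here); the record fields and helper names are hypothetical:

```python
# Illustrative sketch: a minimal k-anonymity check over generalized
# quasi-identifiers. Field names and helpers are hypothetical examples.
from collections import Counter

def generalize_age(age: int) -> str:
    """Coarsen an exact age into a 10-year band (one common generalization)."""
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

def is_k_anonymous(records: list, quasi_ids: list, k: int) -> bool:
    """True if every combination of quasi-identifier values occurs >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

records = [
    {"age": generalize_age(a), "zip": z[:3] + "**"}  # truncate ZIP codes too
    for a, z in [(23, "94110"), (27, "94112"), (41, "10001"), (45, "10003")]
]
print(is_k_anonymous(records, ["age", "zip"], k=2))  # True: each group has 2 members
```

Selective generalization, as the paper frames it, would apply coarsening like `generalize_age` only where a group falls below k, rather than uniformly over-hiding every field.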

IEEE Xplore (Security & AI Journals)
03

Practical Insights Into AI System Product Quality Evaluation

research, safety
Feb 18, 2026

This research examines how ISO/IEC 25059 (an international standard for evaluating AI system quality) can be applied in practice, using an AI system that analyzes images of oil platform decks as a test case. The study highlights that when checking whether AI systems work correctly, teams need to carefully define what counts as acceptable performance, especially for safety-critical applications (systems where failures could cause serious harm). Teams should also choose test cases (examples used to verify the system works) that realistically represent how the system will be used in the real world.

IEEE Xplore (Security & AI Journals)
04

India’s Sarvam wants to bring its AI models to feature phones, cars and smart glasses

industry
Feb 18, 2026

Sarvam, an Indian AI company, is deploying lightweight AI models on feature phones, cars, and smart glasses by using edge AI (running AI directly on devices rather than sending data to remote servers). The company's models require only megabytes of storage, work on existing phone processors, and can function offline, with partnerships including Nokia phones through HMD and car integration with Bosch.

TechCrunch
05

AI Found Twelve New Vulnerabilities in OpenSSL

research, security
Feb 18, 2026

An AI system called AISLE discovered twelve previously unknown vulnerabilities (zero-day vulnerabilities, or security flaws unknown to software maintainers before disclosure) in OpenSSL, a widely used cryptography library, with the findings announced in January 2026. The vulnerabilities were serious, including one with a CVSS score (a 0-10 severity rating) of 9.8, and some had existed undetected for over 25 years despite extensive testing and audits. In five cases, the AI system also directly proposed patches that were accepted into the official OpenSSL release.

Schneier on Security
06

Microsoft says bug causes Copilot to summarize confidential emails

security, privacy
Feb 18, 2026

Microsoft discovered a bug in Microsoft 365 Copilot (an AI assistant integrated into Office apps) that caused it to summarize confidential emails since late January, even though those emails had sensitivity labels (tags marking them as restricted) and data loss prevention policies (DLP, security rules that prevent sensitive data from leaving an organization) were set up to block this. A code error was allowing emails in Sent Items and Drafts folders to be processed by Copilot despite the confidentiality protections.

Fix: Microsoft began rolling out a fix in early February and continued monitoring the deployment as of the article date, reaching out to affected users to verify the fix was working.

BleepingComputer
07

Perplexity joins anti-ad camp as AI companies battle over trust and revenue 

industry
Feb 18, 2026

Perplexity, an AI search startup, is removing ads from its service because company leaders worry that users won't trust AI assistants that try to sell them things. This decision highlights a bigger challenge for the AI industry: major companies like OpenAI and Anthropic are trying different approaches to make money, with some adding ads while others avoid them completely.

The Verge (AI)
08

A new approach for GenAI risk protection

security, policy
Feb 18, 2026

Organizations face new security risks from generative AI (GenAI, AI systems that create text, images, and other content) tools like ChatGPT, Gemini, and Claude, where employees might accidentally upload sensitive data like personally identifiable information (PII, private details about individuals), protected health information (PHI, medical records), or company secrets. Traditional data loss prevention (DLP, tools that monitor and block sensitive data from leaving a company) solutions are expensive and difficult to manage, so most organizations have GenAI policies but lack the technology to enforce them.

Fix: The source describes two explicit approaches. Solution 1 involves implementing enterprise licenses for approved GenAI solutions (such as ChatGPT Enterprise or Microsoft 365 Copilot), which include built-in security and DLP controls, while also blocking non-approved GenAI tools using internet content filtering tools like Cisco Umbrella, iboss, DNSFilter, or WebTitan. Solution 2 involves implementing GenAI DLP controls in an XDR/MDR (extended detection and response / managed detection and response, security platforms that combine endpoint, network, and threat intelligence monitoring) solution to detect, analyze, and respond to sensitive data loss risks.
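The allow-approved / block-unapproved logic behind Solution 1 can be sketched as a toy policy check. This is illustrative only, not any vendor's API; the domain lists and policy function below are hypothetical:

```python
# Illustrative sketch of a DNS-layer GenAI filtering policy: allow the
# enterprise-licensed tools (which carry DLP controls), block the rest of
# the GenAI category. Domain lists here are hypothetical examples.
APPROVED_GENAI = {"chatgpt.com", "copilot.microsoft.com"}   # enterprise-licensed
GENAI_CATEGORY = {"chatgpt.com", "copilot.microsoft.com",
                  "gemini.google.com", "claude.ai"}         # category to filter

def resolve_policy(domain: str) -> str:
    """Allow enterprise-approved GenAI domains; block the rest of the category."""
    if domain in APPROVED_GENAI:
        return "ALLOW"
    if domain in GENAI_CATEGORY:
        return "BLOCK"   # consumer-grade tool without enterprise DLP controls
    return "ALLOW"       # traffic outside the GenAI category passes through

print(resolve_policy("claude.ai"))    # BLOCK
print(resolve_policy("chatgpt.com"))  # ALLOW
```

Commercial filtering services implement this category-based logic at scale; the point of the sketch is only the policy shape: approved-with-controls first, category block second.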

CSO Online
09

The new paradigm for raising up secure software engineers

security, policy
Feb 18, 2026

As AI coding assistants rapidly increase developer productivity (with usage expected to jump from 14% to 90% by 2028), security teams face a growing challenge: more code is being produced faster, with less time for review. Traditional developer security training focused on catching common code-level flaws like SQL injection (inserting malicious database commands into input fields) is becoming less critical, since AI tools and automated scanning will increasingly handle these line-by-line vulnerabilities. Security training instead needs to shift toward teaching developers to validate AI-generated code in its full deployment context and to understand threat modeling (analyzing how systems could be attacked at an architectural level), rather than memorizing specific coding rules.

CSO Online
10

U.S. court bars OpenAI from using ‘Cameo’

policy
Feb 18, 2026

A federal court ruled that OpenAI must stop using the name 'Cameo' for its AI video generation feature in Sora 2 (a tool that creates videos with digital likenesses of users), finding the name too similar to Cameo's existing celebrity video platform and likely to confuse users. OpenAI had already renamed the feature to 'Characters' after a temporary restraining order in November, and the company disputes the ruling, arguing no one can claim exclusive ownership of the word 'cameo.'

TechCrunch