aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.

Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch

The security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Total tracked: 2,727 · Last 24 hours: 38 · Last 7 days: 173
Daily Briefing: Wednesday, April 1, 2026

Attack Surface Management Tools Now Using AI Agents: A new buying guide highlights that Cyber Asset Attack Surface Management (CAASM) and External Attack Surface Management (EASM) tools are increasingly using agentic AI (AI systems that can take independent actions) to automatically find and reduce security risks across a company's digital resources.

Latest Intel

01

CVE-2026-2654: A weakness has been identified in huggingface smolagents 1.24.0. Impacted is the function requests.get/requests.post of

security
Feb 18, 2026

A vulnerability called server-side request forgery (SSRF, where an attacker tricks a server into making unwanted web requests) was found in Hugging Face's smolagents version 1.24.0, specifically in the LocalPythonExecutor component's requests.get and requests.post functions. An attacker can exploit this remotely, and the vulnerability has been publicly disclosed, though the vendor did not respond when contacted.
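The vulnerability class here is worth illustrating. The sketch below is not the smolagents code; it shows why handing agent-generated code unrestricted `requests.get`/`requests.post` is an SSRF risk, and one common (though incomplete) mitigation: resolving the target host and refusing private, loopback, or link-local addresses such as cloud metadata endpoints.

```python
# Illustrative SSRF guard (not the smolagents implementation): resolve the
# URL's host and reject anything that lands on an internal address range.
import ipaddress
import socket
from urllib.parse import urlparse


def is_safe_url(url: str) -> bool:
    """Reject URLs whose host resolves to a private, loopback,
    or link-local address (e.g. cloud metadata endpoints)."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        resolved = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in resolved:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True


# An unrestricted executor would happily fetch internal-only targets:
assert not is_safe_url("http://169.254.169.254/latest/meta-data/")  # cloud metadata
assert not is_safe_url("http://127.0.0.1:8080/admin")               # loopback service
assert is_safe_url("http://8.8.8.8/")                               # public address passes
```

Resolution-time checks like this can still be bypassed by DNS rebinding, so real deployments typically also pin the resolved address for the actual request or route agent traffic through an egress proxy.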

Critical This Week (5 issues)

critical · CVE-2026-34162: FastGPT is an AI Agent building platform. Prior to version 4.14.9.5, the FastGPT HTTP tools testing endpoint (/api/core/
NVD/CVE Database · Mar 31, 2026
02

Canva gets to $4B in revenue as LLM referral traffic rises

industry
Feb 18, 2026

Canva, a design platform company, reached $4 billion in annual revenue by the end of 2025, with growth driven partly by adoption of its AI tools. The company is repositioning itself as an AI platform with design tools and is focusing on capturing traffic from LLMs (large language models, AI systems like ChatGPT that generate text) through chatbot integrations and efforts to appear in LLM search results.

TechCrunch
03

SDkA: Synthetic Data Integrated k-Anonymity Model for Data Sharing With Improved Utility

security, privacy
Feb 18, 2026

SDkA is a new privacy protection method that combines synthetic data (artificially generated data that mimics real data patterns) with k-anonymity (a technique that makes individuals unidentifiable by ensuring each person's data looks like at least k other people's data). The method uses a conditional generative adversarial network (a type of AI that learns to create realistic synthetic data) to improve data quality and quantity while keeping data useful, and adds selective generalization to k-anonymity to avoid over-hiding information.

IEEE Xplore (Security & AI Journals)
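For readers unfamiliar with the base technique, here is a toy check of the k-anonymity property that SDkA builds on. This is not the paper's CGAN pipeline, and the records below are invented for illustration.

```python
# A table is k-anonymous over a set of quasi-identifiers if every
# combination of those values appears in at least k records.
from collections import Counter


def is_k_anonymous(records, quasi_identifiers, k):
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return all(count >= k for count in groups.values())


people = [
    {"age": "30-39", "zip": "100**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "100**", "diagnosis": "cold"},
    {"age": "40-49", "zip": "200**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "200**", "diagnosis": "asthma"},
]

# Each (age, zip) combination covers 2 records, so the table is 2-anonymous:
assert is_k_anonymous(people, ["age", "zip"], k=2)
assert not is_k_anonymous(people, ["age", "zip"], k=3)
```

SDkA's contribution, per the summary above, is using synthetic records and selective generalization to reach this property with less information loss than plain generalization.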
04

Practical Insights Into AI System Product Quality Evaluation

research, safety
Feb 18, 2026

This research examines how ISO/IEC 25059 (an international standard for evaluating AI system quality) can be applied in practice, using an AI system that analyzes images of oil platform decks as a test case. The study highlights that when verifying AI systems, teams need to carefully define what counts as acceptable performance, especially for safety-critical applications (systems where failures could cause serious harm). They should also choose test cases (examples used to verify the system works) that realistically represent how the system will be used in the real world.

IEEE Xplore (Security & AI Journals)
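That recommendation translates naturally into an automated quality gate. The sketch below is hypothetical (scenario names and thresholds are invented), showing "acceptable performance" as explicit per-scenario thresholds rather than one global accuracy number.

```python
# Hypothetical per-scenario quality gate: safety-critical operating
# conditions get their own required threshold instead of averaging
# everything into a single score.
def meets_quality_gate(observed, required):
    """observed/required: {scenario: recall}. Missing scenarios fail."""
    return {s: observed.get(s, 0.0) >= r for s, r in required.items()}


observed = {"clear_daylight": 0.97, "rain": 0.88, "night": 0.72}
required = {"clear_daylight": 0.95, "rain": 0.90, "night": 0.90}

verdict = meets_quality_gate(observed, required)
# A 0.86 average would look fine; the per-scenario gate does not:
assert verdict == {"clear_daylight": True, "rain": False, "night": False}
```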
05

India’s Sarvam wants to bring its AI models to feature phones, cars and smart glasses

industry
Feb 18, 2026

Sarvam, an Indian AI company, is deploying lightweight AI models on feature phones, cars, and smart glasses by using edge AI (running AI directly on devices rather than sending data to remote servers). The company's models require only megabytes of storage, work on existing phone processors, and can function offline, with partnerships including Nokia phones through HMD and car integration with Bosch.

TechCrunch
06

AI Found Twelve New Vulnerabilities in OpenSSL

research, security
Feb 18, 2026

An AI system called AISLE discovered twelve previously unknown vulnerabilities (zero-day vulnerabilities, or security flaws unknown to software maintainers before disclosure) in OpenSSL, a widely-used cryptography library, with the findings announced in January 2026. The vulnerabilities were serious, including one with a CVSS score (a 0-10 severity rating) of 9.8 out of 10, and some had existed undetected for over 25 years despite extensive testing and audits. In five cases, the AI system also directly proposed patches that were accepted into the official OpenSSL release.

Schneier on Security
07

Microsoft says bug causes Copilot to summarize confidential emails

security, privacy
Feb 18, 2026

Microsoft discovered a bug in Microsoft 365 Copilot (an AI assistant integrated into Office apps) that caused it to summarize confidential emails since late January, even though those emails carried sensitivity labels (tags marking them as restricted) and data loss prevention policies (DLP, security rules that prevent sensitive data from leaving an organization) were set up to block this. A code error allowed emails in the Sent Items and Drafts folders to be processed by Copilot despite the confidentiality protections.

Fix: Microsoft began rolling out a fix in early February and continued monitoring the deployment as of the article date, reaching out to affected users to verify the fix was working.

BleepingComputer
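The control that failed can be illustrated generically. This is not Microsoft's implementation; the `Email` shape and label names below are invented. The point is that eligibility for AI processing should depend on the sensitivity label, not on which folder an item happens to sit in.

```python
# Illustrative label-based filter in front of an AI summarizer.
from dataclasses import dataclass
from typing import Optional

BLOCKED_LABELS = {"Confidential", "Highly Confidential"}


@dataclass
class Email:
    folder: str
    sensitivity_label: Optional[str]
    body: str


def summarizable(items):
    """Items an AI assistant may process, regardless of folder."""
    return [i for i in items if i.sensitivity_label not in BLOCKED_LABELS]


mailbox = [
    Email("Inbox", None, "lunch?"),
    Email("Sent Items", "Confidential", "merger terms"),
    Email("Drafts", "Highly Confidential", "reorg plan"),
]

# Only the unlabeled message is eligible, even though the labeled ones
# live in Sent Items and Drafts (the folders the bug mishandled):
assert [e.body for e in summarizable(mailbox)] == ["lunch?"]
```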
08

Perplexity joins anti-ad camp as AI companies battle over trust and revenue 

industry
Feb 18, 2026

Perplexity, an AI search startup, is removing ads from its service because company leaders worry that users won't trust AI assistants that try to sell them things. This decision highlights a bigger challenge for the AI industry: major companies like OpenAI and Anthropic are trying different approaches to make money, with some adding ads while others avoid them completely.

The Verge (AI)
09

A new approach for GenAI risk protection

security, policy
Feb 18, 2026

Organizations face new security risks from generative AI (GenAI, AI systems that create text, images, and other content) tools like ChatGPT, Gemini, and Claude, where employees might accidentally upload sensitive data like personally identifiable information (PII, private details about individuals), protected health information (PHI, medical records), or company secrets. Traditional data loss prevention (DLP, tools that monitor and block sensitive data from leaving a company) solutions are expensive and difficult to manage, so most organizations have GenAI policies but lack the technology to enforce them.

Fix: The source describes two approaches. Solution 1: implement enterprise licenses for approved GenAI solutions (such as ChatGPT Enterprise or Microsoft 365 Copilot), which include built-in security and DLP controls, while blocking non-approved GenAI tools with internet content filtering products like Cisco Umbrella, iboss, DNSFilter, or WebTitan. Solution 2: build GenAI DLP controls into an XDR/MDR (extended detection and response/managed detection and response, security platforms that combine endpoint, network, and threat intelligence monitoring) solution to detect, analyze, and respond to sensitive-data-loss risks.

CSO Online
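A toy version of the enforcement gap described above: a lightweight pre-upload scan for common PII patterns before text reaches an external GenAI tool. Real DLP products use far richer detection (classifiers, checksums, context); the regexes here are deliberately simplified for illustration.

```python
# Minimal PII gate in front of an outbound GenAI prompt.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}


def pii_findings(text: str) -> list:
    """Names of the PII patterns that match the text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]


def allow_upload(text: str) -> bool:
    """Block the prompt if any PII pattern matches."""
    return not pii_findings(text)


assert allow_upload("Summarize our Q3 roadmap")
assert pii_findings("Patient SSN 123-45-6789, contact jane@example.com") == ["ssn", "email"]
```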
10

The new paradigm for raising up secure software engineers

security, policy
Feb 18, 2026

As AI coding assistants rapidly increase developer productivity (with usage expected to jump from 14% to 90% by 2028), security teams face a growing challenge: more code is being produced faster with less time for review. Traditional developer security training focused on catching common code-level flaws like SQL injection (inserting malicious database commands into input fields) is becoming less critical, since AI tools and automated scanning will increasingly handle these line-by-line vulnerabilities. Security training therefore needs to shift toward teaching developers to validate AI-generated code in its full deployment context and to understand threat modeling (analyzing how systems could be attacked at an architectural level), rather than memorizing specific coding rules.

CSO Online
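For the canonical code-level flaw the article mentions, here is the classic contrast using Python's built-in sqlite3: concatenated input becomes part of the SQL text, while a bound parameter stays data.

```python
# SQL injection, shown both ways against an in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

malicious = "alice' OR '1'='1"

# Vulnerable: the input is spliced into the query string, so the
# injected OR clause matches every row.
leaked = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'"
).fetchall()
assert len(leaked) == 2  # both users returned

# Safe: the driver binds the value as a parameter, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
assert safe == []  # no user is literally named "alice' OR '1'='1"
```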
critical · CVE-2025-15379: A command injection vulnerability exists in MLflow's model serving container initialization code, specifically in the `_
NVD/CVE Database · Mar 30, 2026

critical · CVE-2026-33873: Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assis
NVD/CVE Database · Mar 27, 2026

critical · Attackers exploit critical Langflow RCE within hours as CISA sounds alarm
CSO Online · Mar 27, 2026

critical · CVE-2025-53521: F5 BIG-IP Unspecified Vulnerability
CISA Known Exploited Vulnerabilities · Mar 26, 2026