aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by Truong (Jack) Luu, Information Systems Researcher

AI Sec Watch: the security intelligence platform for AI teams

AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.

Independent research. No sponsors, no paywalls, no conflicts of interest.

Total tracked: 3,710 · Last 24 hours: 1 · Last 7 days: 1
Daily Briefing: Sunday, May 17, 2026

No new AI/LLM security issues were identified today.

Latest Intel

page 229/371
01

The Impact of Artificial Intelligence in Protecting the Online Social Community From Cyberbullying

research · safety
Dec 22, 2025

Cyberbullying on social media is a growing problem that harms people's mental health, and traditional methods to stop it are no longer effective. This study examines how artificial intelligence can help protect online communities from cyberbullying by exploring different AI technologies, their uses, and the challenges involved. The goal is to understand how AI might create safer online environments.

IEEE Xplore (Security & AI Journals)
02

Generative Artificial Intelligence: Ethical Challenges and Trust Mechanisms

research · safety
Dec 22, 2025

Generative AI (systems that create new text, images, or other content) is transforming many industries but raises ethical concerns like data privacy (protecting personal information), bias (unfair treatment of certain groups), transparency (being open about how the AI works), and accountability (responsibility for the AI's actions). Researchers propose a trust framework based on transparency, fairness, accountability, and privacy to help ensure generative AI is developed and used responsibly.

IEEE Xplore (Security & AI Journals)
03

Large Language Models in Human Subject Research, and the Presence of Idiosyncratic Human Behaviors

research · safety
Dec 22, 2025

Large language models (LLMs, AI systems trained on huge amounts of text to generate human-like responses) can now mimic not just general human language but also unusual, individual-specific human behaviors. This ability could lead to LLMs being used more widely in research studies and potentially reduce the role of actual humans, which raises concerns about AI alignment (ensuring AI systems behave in ways humans intend and approve of) and how this technology affects society.

IEEE Xplore (Security & AI Journals)
04

Slack Federated Adversarial Training

research · security
Dec 22, 2025

This research addresses a problem in federated learning (a method where multiple computers train an AI model together without sharing raw data) combined with adversarial training (a technique that makes AI models resistant to intentionally tricky inputs). The authors found that simply combining these two approaches causes the model's accuracy to drop because adversarial training increases differences in the data across different computers, making the federated learning less effective. They propose SFAT (Slack Federated Adversarial Training), which uses a relaxation mechanism to adjust how the computers combine their learning results, reducing the harmful effects of data differences and improving overall performance.

IEEE Xplore (Security & AI Journals)
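A minimal sketch of the aggregation idea described above, assuming plain weighted averaging of flattened client updates; the slack-based down-weighting of divergent clients is an illustrative stand-in, not the paper's actual SFAT relaxation mechanism or hyperparameters.

```python
# Illustrative sketch (not the paper's implementation): federated averaging
# where clients whose adversarially trained updates drift far from the mean
# are down-weighted by a "slack" factor, loosely mirroring the idea of
# relaxing how client results are combined.
import numpy as np

def slack_weighted_average(client_updates, slack=0.5):
    """Combine per-client parameter updates, relaxing the influence of outliers.

    client_updates: list of 1-D numpy arrays (flattened model updates).
    slack: 0 gives plain averaging; larger values shrink outlier weights more.
    """
    updates = np.stack(client_updates)      # shape: (num_clients, num_params)
    mean_update = updates.mean(axis=0)
    # Distance of each client's update from the current average.
    dists = np.linalg.norm(updates - mean_update, axis=1)
    # Clients far from the mean (e.g. heavy drift from adversarial training on
    # heterogeneous data) get smaller weights.
    weights = 1.0 / (1.0 + slack * dists)
    weights = weights / weights.sum()
    return (weights[:, None] * updates).sum(axis=0)

# Example: three clients, one with a strongly divergent update.
clients = [np.array([0.1, 0.2]), np.array([0.12, 0.18]), np.array([2.0, -1.5])]
print(slack_weighted_average(clients, slack=0.5))
```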
05

Proactive Bot Detection Based on Structural Information Principles

research · security
Dec 22, 2025

This research proposes SIAMD, a framework for detecting social media bots (automated accounts that spread misinformation) before they cause harm. The system analyzes patterns in how user accounts interact with messages, uses structural entropy (a measure of uncertainty in data patterns) to identify bot-like behavior, and generates synthetic bot messages with large language models (AI systems trained on text data) to test and improve detection systems.

IEEE Xplore (Security & AI Journals)
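The summary's notion of entropy over interaction patterns can be made concrete with a toy score. The Shannon entropy over interaction types below is an illustrative simplification, not SIAMD's structural-entropy computation over interaction graphs.

```python
# Illustrative only: score an account by the Shannon entropy of how its
# activity is distributed across interaction types. Very low entropy
# (e.g. an account that only reposts) is one crude bot-like signal;
# SIAMD's real framework works over message-interaction structure.
import math
from collections import Counter

def interaction_entropy(events):
    """events: list of interaction labels, e.g. ['post', 'repost', 'reply']."""
    counts = Counter(events)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log2(p) for p in probs)

human_like = ['post', 'reply', 'repost', 'like', 'post', 'reply']
bot_like = ['repost'] * 200 + ['post'] * 2

print(f"human-like entropy: {interaction_entropy(human_like):.2f} bits")
print(f"bot-like entropy:   {interaction_entropy(bot_like):.2f} bits")
```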
06

Exploring the Vulnerabilities of Federated Learning: A Deep Dive Into Gradient Inversion Attacks

security · research
Dec 22, 2025

Federated Learning (FL, a method where multiple computers train an AI model together without sharing raw data) can leak private information through gradient inversion attacks (GIA, techniques that reconstruct sensitive data from the mathematical updates used in training). This paper reviews three types of GIA methods and finds that while optimization-based GIA is most practical, generation-based and analytics-based GIA have significant limitations, and proposes a three-stage defense pipeline for FL frameworks.

Fix: The source mentions a three-stage defense pipeline for users designing FL frameworks and protocols with better privacy protection, but does not describe what the pipeline contains or how to implement it.

IEEE Xplore (Security & AI Journals)
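A minimal sketch of the optimization-based gradient inversion idea the paper reviews, assuming PyTorch and a tiny linear model; real attacks target deep networks and batched data and add priors and stronger matching losses.

```python
# Minimal optimization-based gradient inversion sketch: recover an input by
# optimizing dummy data until its gradients match the gradients the victim
# shared (as happens in federated learning).
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 2)
loss_fn = torch.nn.CrossEntropyLoss()

# "Victim" computes a gradient on private data and shares it.
x_true = torch.randn(1, 8)
y_true = torch.tensor([1])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())

# Attacker optimizes dummy data so its gradients match the shared ones.
x_dummy = torch.randn(1, 8, requires_grad=True)
y_dummy = torch.tensor([1])          # label assumed known for simplicity
opt = torch.optim.Adam([x_dummy], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        loss_fn(model(x_dummy), y_dummy), model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    opt.step()

print("reconstruction error:", (x_dummy.detach() - x_true).norm().item())
```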
07

CVE-2025-68478: Arbitrary file write via unvalidated file paths in Langflow prior to version 1.7.0

security
Dec 19, 2025

Langflow, a tool for building AI-powered agents and workflows, has a vulnerability in versions before 1.7.0 where an attacker can specify any file path in a request to create or overwrite files anywhere on the server. The vulnerability exists because the server doesn't restrict or validate the file paths, allowing attackers to write files to sensitive locations like system directories.

Fix: Update Langflow to version 1.7.0, which fixes the issue.

NVD/CVE Database
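Independently of Langflow's actual patch, this is the classic unvalidated-path write. A generic defensive sketch, assuming a designated upload directory (the directory and helper name below are assumptions), looks like this:

```python
# Generic defensive sketch (not Langflow's fix): resolve the requested path
# and refuse anything that escapes the designated upload directory.
from pathlib import Path

UPLOAD_ROOT = Path("/var/app/uploads").resolve()   # assumed upload directory

def safe_write(requested_path: str, data: bytes) -> Path:
    # Resolve symlinks and ".." components before checking containment.
    target = (UPLOAD_ROOT / requested_path).resolve()
    if not target.is_relative_to(UPLOAD_ROOT):      # Python 3.9+
        raise PermissionError(f"path escapes upload root: {requested_path}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)
    return target

# safe_write("notes/report.txt", b"ok")             # allowed
# safe_write("../../etc/cron.d/backdoor", b"...")   # rejected
```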
08

CVE-2025-68477: Server-side request forgery (SSRF) in Langflow's API Request component prior to version 1.7.0

security
Dec 19, 2025

Langflow, a tool for building AI-powered agents and workflows, has a vulnerability in versions before 1.7.0 where its API Request component can make arbitrary HTTP requests to internal network addresses. An attacker with an API key could exploit this SSRF (server-side request forgery, where a server is tricked into making requests to unintended targets) to access sensitive internal resources like databases and metadata services, potentially stealing information or preparing further attacks.

Fix: Update to version 1.7.0 or later, which contains a patch for this issue.

NVD/CVE Database
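Again independent of Langflow's actual fix, a generic SSRF guard illustrates the mitigation class: resolve the hostname and refuse private, loopback, or link-local targets before fetching. The helper name and policy below are assumptions, and a production guard would also pin the resolved IP for the actual connection to avoid DNS rebinding.

```python
# Generic SSRF guard sketch: reject URLs that resolve to non-public addresses
# (cloud metadata services, internal databases, localhost) before requesting.
import ipaddress
import socket
from urllib.parse import urlparse

def assert_public_target(url: str) -> None:
    host = urlparse(url).hostname
    if host is None:
        raise ValueError("URL has no hostname")
    for info in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            raise PermissionError(f"{url} resolves to a non-public address: {addr}")

# assert_public_target("http://169.254.169.254/latest/meta-data/")  # rejected
# assert_public_target("https://example.com/")                      # allowed
```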
09

Can chatbots craft correct code?

safety · research
Dec 19, 2025

The article argues that while AI language models (LLMs, systems trained on large amounts of text to generate responses) and traditional programming languages both increase abstraction, they differ fundamentally in a critical way: compilers are deterministic (they reliably produce the same output every time), while LLMs are nondeterministic (they produce different outputs for the same input). This matters for software security and correctness because compilers preserve the programmer's intended meaning through the translation process, but LLMs cannot guarantee they will generate code that does what you actually need.

Trail of Bits Blog
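A toy contrast makes the article's distinction concrete; the two functions below are illustrative stand-ins, not real compiler or LLM code.

```python
# Deterministic translation vs. sampled generation: the same input always
# yields the same output from the "compiler", but not from the "sampler".
import random

def compile_expr(expr: str) -> str:
    # Deterministic "translation": same input, same output, every time.
    return expr.replace("plus", "+").replace("times", "*")

def sample_code(prompt: str) -> str:
    # Nondeterministic "generation": the same prompt can yield different code.
    return random.choice([f"a + b  # {prompt}", f"sum([a, b])  # {prompt}", f"b + a  # {prompt}"])

expr = "a plus b"
assert compile_expr(expr) == compile_expr(expr)          # always holds
print({sample_code("add a and b") for _ in range(10)})   # usually several variants
```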
10

Evolving AI Transparency: The Journey of the AIBOM Generator and Its New Home at OWASP

security · policy
Dec 18, 2025

The AIBOM Generator, an open-source tool that creates an AI Software Bill of Materials (AIBOM, a structured document listing key information about an AI model like its data sources and configurations), has been moved to OWASP (a nonprofit focused on software security) to enable broader community collaboration and development. The tool helps organizations understand what's inside AI models, where they came from, and how trustworthy their documentation is, addressing a gap between rapid AI adoption and lagging transparency practices. The project is now part of the OWASP GenAI Security Project and will continue improving AI supply chain visibility through community-driven enhancements.

OWASP GenAI Security
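For illustration only, a hand-rolled JSON document showing the kind of information an AI bill of materials can record (model identity, provenance, training-data sources, documentation quality); the field names here are assumptions, not the AIBOM Generator's actual output schema.

```python
# Illustrative AIBOM-style record; all names and values are hypothetical.
import json

aibom = {
    "bomFormat": "example-aibom",
    "component": {
        "type": "machine-learning-model",
        "name": "example-org/summarizer-7b",
        "version": "1.2.0",
        "license": "apache-2.0",
    },
    "provenance": {
        "baseModel": "example-org/base-7b",
        "trainingDataSources": ["internal-support-tickets-2024", "common-crawl-subset"],
        "finetuningMethod": "LoRA",
    },
    "documentationQuality": {"modelCardPresent": True, "datasetCardsPresent": False},
}

print(json.dumps(aibom, indent=2))
```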