All tracked items across vulnerabilities, news, research, incidents, and regulatory updates.
This editorial introduces a special issue examining how evolving information technology and society will shape the future of work, jobs, and professional roles. It calls for research that projects multiple possible futures, evaluates which outcomes are most valuable, and identifies steps organizations can take now to work toward their preferred future states.
Between July 2024 and February 2025, malicious DNG files (a raw image format) were discovered that exploited a Samsung vulnerability through the Quram image parsing library. The files were sent via WhatsApp; when users tapped to download the images, they triggered a spyware infection that allowed the malware to run within Samsung's com.samsung.ipservice process, a system service that automatically scans images for AI-powered features.
LibreChat (a ChatGPT alternative with extra features) versions 0.8.0 and below have a security flaw where JSON parsing errors aren't properly handled, causing user input to appear in error messages. This can expose HTML or JavaScript code in responses, creating an XSS risk (cross-site scripting, where attackers inject malicious code that runs in users' browsers).
LibreChat versions 0.8.0 and below have a vulnerability where JSON requests sent to modify prompts aren't properly checked for valid input, allowing users to change prompts in unintended ways through a PATCH endpoint (a request type that modifies existing data). The vulnerability occurs because the patchPromptGroup function passes user input directly without filtering out sensitive fields that shouldn't be modifiable.
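The underlying flaw is a classic mass-assignment pattern: the update handler applies whatever fields the client sends. A minimal sketch of the usual fix, allowlisting the editable fields before applying a PATCH, is shown below in Python (the field names are illustrative assumptions, not LibreChat's actual schema):

```python
# Hypothetical allowlist-based PATCH sanitizer. ALLOWED_FIELDS and the
# example field names are illustrative, not LibreChat's real schema.
ALLOWED_FIELDS = {"name", "category", "oneliner"}

def sanitize_patch(payload: dict) -> dict:
    """Drop any keys that are not explicitly allowed to be modified."""
    return {k: v for k, v in payload.items() if k in ALLOWED_FIELDS}

patch = {"name": "My prompt", "authorId": "someone-else", "isPublic": True}
print(sanitize_patch(patch))  # only the 'name' field survives
```

The key design point is that the allowlist enumerates what clients may change, so newly added sensitive fields are protected by default, unlike a blocklist.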
LibreChat, a ChatGPT clone with extra features, has a vulnerability in versions 0.8.0 and below where an attacker can modify the iconURL parameter (a web address for an icon image) in chat posts. A malicious payload placed in this field gets saved and can be shared with other users, potentially exposing their private information through malicious trackers when they view the shared chat link. The vulnerability is caused by improper handling of HTML content (XSS, or cross-site scripting, where attackers inject malicious code into web pages).
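Defenses against this class of stored-URL XSS typically combine scheme validation with output encoding. A hedged sketch in Python (not LibreChat's actual code, which is JavaScript) of what such a check can look like:

```python
import html
from urllib.parse import urlparse

def safe_icon_url(url: str):
    """Accept only http(s) URLs and escape them for an HTML attribute context.

    Returns None for rejected URLs (e.g. javascript: or data: schemes).
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return None
    # quote=True also escapes " and ', preventing attribute breakout.
    return html.escape(url, quote=True)

print(safe_icon_url("javascript:alert(1)"))                 # None
print(safe_icon_url('https://example.com/i.png"onerror="x'))
```

Scheme allowlisting blocks `javascript:` payloads outright, while attribute escaping prevents a stored URL from breaking out of the `src="..."` context it is rendered into.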
CVE-2025-67511 is a command injection vulnerability (a flaw where attackers can insert malicious commands into input) in Cybersecurity AI (CAI), an open-source framework for building AI agents that handle security tasks. Versions 0.5.9 and earlier are vulnerable because the run_ssh_command_with_credentials() function only escapes (protects) the password and command inputs, leaving the username, host, and port values open to attack.
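The general remedy is to quote every user-controlled field interpolated into a shell string, not just some of them. A minimal Python sketch (the function name and sshpass invocation are illustrative assumptions, not CAI's actual implementation):

```python
import shlex

def build_ssh_command(username, host, port, password, command):
    """Build an SSH command line, quoting EVERY user-controlled field.

    Quoting only the password and command (as in the vulnerable pattern)
    leaves username, host, and port open to shell injection.
    """
    return (
        f"sshpass -p {shlex.quote(password)} "
        f"ssh -p {shlex.quote(str(port))} "
        f"{shlex.quote(username)}@{shlex.quote(host)} "
        f"{shlex.quote(command)}"
    )

# A hostile username is neutralized into a single quoted token:
print(build_ssh_command("user; rm -rf /", "host", 22, "p@ss word", "ls -la"))
```

Passing an argument list to `subprocess.run([...])` without a shell at all is stronger still, since no quoting is needed in the first place.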
Neuron is a PHP framework for creating AI agents that can perform tasks, and versions 2.8.11 and earlier have a vulnerability in the MySQLWriteTool component. The tool runs database commands without checking if they're safe, allowing attackers to use prompt injection (tricking the AI by hiding instructions in its input) to execute harmful SQL commands like deleting entire tables or changing permissions if the database user has broad access rights.
Neuron is a PHP framework for building AI agents that can query databases. Versions 2.8.11 and below have a flaw in MySQLSelectTool, a component meant to safely let AI agents read from databases. The tool only checks if a command starts with SELECT and blocks certain words, but misses SQL clauses like INTO OUTFILE that write files to disk. An attacker could use prompt injection (tricking an AI by hiding instructions in its input) through a public agent endpoint to write files to the database server if the database user has the necessary file-write permissions.
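A prefix-plus-blocklist filter of this kind is easy to slip past, because a syntactically valid SELECT can still have file-writing side effects. A Python sketch (roughly modeling the described filter, not Neuron's actual PHP code) shows the bypass and one stricter check:

```python
import re

def naive_check(sql: str) -> bool:
    """Roughly models a prefix-plus-blocklist filter like the one described."""
    blocked = {"insert", "update", "delete", "drop", "grant"}
    s = sql.strip().lower()
    return s.startswith("select") and not any(w in s.split() for w in blocked)

def stricter_check(sql: str) -> bool:
    """Additionally reject file-writing clauses such as INTO OUTFILE/DUMPFILE."""
    if not naive_check(sql):
        return False
    return not re.search(r"\binto\s+(outfile|dumpfile)\b", sql, re.IGNORECASE)

payload = "SELECT '<?php ?>' INTO OUTFILE '/var/www/shell.php'"
print(naive_check(payload))     # True  -- slips past the naive filter
print(stricter_check(payload))  # False
```

Even the stricter regex is only a patch on a flawed approach; a more robust design restricts the database account's privileges (no FILE grant) and validates queries with a real SQL parser rather than string matching.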
NVIDIA Merlin Transformers4Rec for Linux has a vulnerability in its Trainer component involving deserialization of untrusted data (treating unverified data as legitimate code or objects). A user exploiting this flaw could potentially run arbitrary code, crash the system (denial of service), steal information, or modify data.
This research paper addresses a problem in differentially private federated learning (DP-FL, a technique that trains AI models across multiple devices while adding mathematical noise to protect privacy). The paper proposes a new control framework that dynamically adjusts both the amount of noise added and how many communication rounds occur during training, rather than using fixed or randomly adjusted noise levels. Experiments show this approach achieves faster convergence (reaching a good solution quicker) and better accuracy while maintaining the same privacy guarantees.
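As a rough illustration of the mechanism being tuned (not the paper's actual control framework or its privacy accounting), here is a minimal sketch of clipped, noised federated averaging with a hypothetical decaying noise schedule:

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_with_noise(client_updates, clip_norm, noise_multiplier):
    """Clip each client update, average, then add calibrated Gaussian noise."""
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / norm))  # bound sensitivity
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Illustrative fixed decay: more noise early, less later. The paper's
# controller adapts the noise (and round count) dynamically instead.
for t in range(3):
    noise_mult = 1.2 * (0.9 ** t)
    updates = [rng.normal(size=4) for _ in range(10)]
    noisy_avg = aggregate_with_noise(updates, clip_norm=1.0,
                                     noise_multiplier=noise_mult)
```

Note that in real DP-FL any noise schedule must be fed through a privacy accountant so the cumulative privacy loss over all rounds still meets the stated guarantee.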
Fix: The exploited Samsung vulnerability was fixed in April 2025.
Google Project Zero
Fix: Update to version 0.8.1, where this issue is fixed.
NVD/CVE Database
Fix: This issue is fixed in version 0.8.1; users should upgrade to LibreChat 0.8.1 or later.
NVD/CVE Database
This research addresses the problem that deepfake detection systems (AI trained to identify manipulated images created by generative models like GANs and diffusion models) often fail when encountering new or unfamiliar types of forgeries. The authors propose RSG-DA, a framework that improves detection by generating diverse fake samples and using a dual augmentation strategy (data transformation techniques applied in two different ways) to help the AI learn to recognize a wider range of forgery patterns, along with a lightweight module to make these learned patterns work better across different datasets.
Researchers demonstrated a new attack method called ASBA (APK-Specific Backdoor Attack) that can compromise Android malware detection systems by injecting poisoned training data. Unlike previous attacks that use the same trigger across many malware samples, ASBA uses a generative adversarial network (GAN, an AI technique that learns to create realistic fake data) to generate unique triggers for each malware sample, making it harder for security tools to detect and block multiple instances of malware at once.
GitHub's CodeQL multi-repository variant analysis (MRVA) lets you run security bug-finding queries across thousands of projects quickly, but it's built mainly for VS Code. A developer created mrva, a terminal-based alternative that runs on your machine and works with command-line tools, letting you download pre-built CodeQL databases (collections of code information), analyze them with queries, and display results in the terminal.
Fix: Update to version 2.8.12, which fixes this issue.
NVD/CVE Database
Fix: Fixed in version 2.8.12.
NVD/CVE Database
XSS attacks (malicious code injected into websites to steal user data) are hard to detect because attackers can create adversarial samples that trick detection models into missing threats. This paper proposes a new detection model using two-stage AST (abstract syntax tree, a structural representation of code) analysis combined with LSTM (long short-term memory, a type of neural network good at processing sequences) to better identify malicious code while resisting adversarial tricks, achieving over 98.2% detection accuracy even against adversarial attacks.
This research proposes a new system that combines blockchain (a decentralized ledger that records transactions) with zero-knowledge proofs (cryptographic methods that prove something is true without revealing the underlying data) to make AI model inference more trustworthy and private. The system verifies both where the input data comes from and where the AI model weights (the learned parameters that control how an AI makes decisions) come from, while keeping user information confidential. The authors demonstrate their approach with a privacy-preserving transaction system that can detect suspicious activity without exposing private data.
WiFi-based sensing systems that use deep learning (AI models trained on large amounts of data) are vulnerable to adversarial perturbation attacks, where attackers subtly manipulate wireless signals to fool the system into making wrong predictions. Researchers developed WiIntruder, a new attack method that can work across different applications and evade detection, reducing the accuracy of WiFi sensing services by an average of 72.9%, highlighting a significant security gap in these systems.
This research paper studies the challenge of balancing two competing goals in decentralized learning (where multiple computers train an AI model together without a central server): keeping each computer's data private while protecting against Byzantine attacks (when some computers deliberately send false information to sabotage the learning process). The authors found that using Gaussian noise (random mathematical noise added to messages) to protect privacy actually makes it harder to defend against Byzantine attacks, creating a fundamental tradeoff between these two security goals.
This research proposes a Fairly Proportional Noise Mechanism (FPNM) to address a problem in differential privacy (DP, a technique that adds random noise to data to protect individual privacy while allowing statistical analysis). Traditional DP methods add noise uniformly without considering fairness, which can affect different groups of people unevenly, especially in decision-making and learning tasks. The new FPNM approach adjusts noise based on both its direction and size relative to the actual data values, reducing unfairness by about 17-19% in experiments while maintaining privacy protections.
OWASP has released a Top 10 list of security risks specifically for agentic AI applications, which are autonomous AI systems that can use tools and take actions on their own. This framework was built from real incidents and industry experience to help organizations secure these advanced AI systems as they become more common.
The OWASP GenAI Security Project (an open-source community focused on AI safety) has released a list of the top 10 security risks for agentic AI (AI systems that can take actions independently). This guidance was created with input from over 100 industry experts and is meant to help organizations understand and address threats to AI systems.