Academic papers, new techniques, benchmarks, and theoretical findings in AI/LLM security.
RanDS is a new large-scale dataset containing raw binary files (the compiled machine code of programs) and extracted features designed to help researchers study and detect ransomware (malicious software that encrypts victims' files and demands payment). This resource aims to support the development and testing of machine learning models that can identify ransomware threats more effectively.
This research paper addresses security and transparency challenges in cloud storage for UAV (unmanned aerial vehicle) data by proposing PATD, a system that combines privacy-preserving auditing with transparent deduplication. The paper identifies two main problems: verifying that outsourced data hasn't been corrupted or tampered with (without revealing the data itself), and ensuring that file deduplication (removing duplicate copies to save storage) is performed honestly and transparently by the cloud provider.
This is a research paper proposing EIP, an efficient image protection scheme designed to safeguard images from unauthorized access or tampering. The paper was published in June 2026 in the Journal of Information Security and Applications by Haider, Sattar, Komninos, and Hayat. However, the provided content does not include details about how the scheme works or what specific security problem it addresses.
PadNet is a defense method designed to protect neural networks (AI models that learn patterns from data) against adversarial examples (specially crafted inputs that trick AI systems into making wrong predictions). The paper, published in an academic journal, presents techniques to make these AI systems more robust when facing such attacks.
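The paper's PadNet technique is not detailed here, but the adversarial examples it defends against can be illustrated with a toy sketch. The snippet below applies an FGSM-style perturbation to a linear scorer (a stand-in for a real neural network; all names and values are hypothetical, not from the paper): under a small per-coordinate budget, the attack nudges each input against the sign of the corresponding weight, which is the fastest way to lower the model's score.

```python
def score(x, w):
    """Linear classifier score: positive means class +1."""
    return sum(xi * wi for xi, wi in zip(x, w))

def fgsm_perturb(x, w, eps=0.1):
    """One FGSM-style step against a linear scorer (toy sketch).
    For score = w.x with true label +1, move each coordinate by eps
    against the sign of its weight: the steepest score-decreasing
    direction under an L-infinity budget of eps."""
    return [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

x = [2.0, 1.0]   # clean input, correctly scored positive
w = [0.5, -0.5]  # toy model weights
x_adv = fgsm_perturb(x, w, eps=0.5)
print(score(x, w), score(x_adv, w))  # 0.5 0.0
```

Defenses like the one summarized above aim to keep the score gap between clean and perturbed inputs from collapsing in this way.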
This research paper describes a method for protecting privacy in distributed gradient descent (a technique where multiple computers work together to train AI models by each processing part of the data). The authors propose using hierarchical secret sharing (a cryptographic approach where information is split into pieces distributed across multiple parties, so no single party can see the complete data) to keep individual data private while still allowing the AI training process to work efficiently.
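The core idea, that shares reveal nothing individually but combine to a useful aggregate, can be sketched with plain additive secret sharing (a simpler cousin of the paper's hierarchical scheme; this is an illustrative toy, not the authors' construction). Each worker splits its quantized gradient into random shares, and the aggregator only ever reconstructs the sum of all gradients, never any single one.

```python
import random

PRIME = 2**61 - 1  # field modulus (illustrative choice)

def share(value, n):
    """Split an integer into n additive shares mod PRIME.
    Any n-1 shares are uniformly random and leak nothing."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Toy quantized gradients from 3 workers.
gradients = [5, 12, 7]
all_shares = [share(g, 3) for g in gradients]
# Aggregate share-wise; only the *sum* of gradients is recovered.
agg = [sum(col) % PRIME for col in zip(*all_shares)]
print(reconstruct(agg))  # 24
```

A hierarchical scheme, as the paper proposes, additionally organizes the share holders into levels so that reconstruction requires participation across the hierarchy.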
This research paper, published in June 2026, presents a method for creating indexes in queryable-encrypted databases (databases where data stays encrypted even when being searched) that don't leak information about access patterns or query history. The approach aims to improve security by preventing attackers from inferring sensitive information about which data is being accessed based on observable patterns of database queries.
This research paper presents a method for searching location-based services (apps that use your geographic position, like finding nearby restaurants) while protecting user privacy and ensuring the results are trustworthy. The approach combines spatio-temporal (location and time-based) keyword searching with verifiability (a way to prove the results are correct), allowing users to query location services without exposing their exact location or search patterns to the service provider.
The OWASP GenAI Security Project, an open-source community focused on AI security, announced the expansion of its resources and frameworks, with over 25,000 members contributing practical guidance and tools. The project is being highlighted at the RSA 2026 conference, indicating growing industry adoption of AI security best practices.
This survey examines methods for automatically finding bugs in software code by using machine learning and AI models, tracing the evolution from traditional machine learning techniques to modern large language models (LLMs, which are AI systems trained on vast amounts of text data). The research covers how these AI-based approaches learn patterns to pinpoint where faults occur in code, making debugging faster and more efficient than manual inspection.
Decentralized Federated Learning (DFL, a way for multiple computers to train AI models together without a central server) is vulnerable to Byzantine attacks (when malicious participants send bad data to sabotage the learning process). The paper proposes FORCE, a new method that uses game theory concepts (mathematical models of strategy and fairness) to identify and exclude malicious clients by evaluating their model loss (how well their models perform) rather than inspecting their gradients (the update directions used to improve the model), making DFL more resistant to these attacks.
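Loss-based screening of the kind described above can be sketched in a few lines: score each client's model on a held-out validation set and drop outliers whose loss sits far above the median. This toy filter is illustrative only; the paper's FORCE mechanism adds game-theoretic incentives that this sketch omits, and the threshold rule here is an assumption.

```python
from statistics import median

def filter_clients(losses, tol=1.0):
    """Keep clients whose validation loss is within tol of the
    median loss (a robust center that Byzantine clients cannot
    easily shift when they are a minority)."""
    m = median(losses.values())
    return {cid for cid, loss in losses.items() if loss <= m + tol}

# Toy per-client validation losses; "byz" sends a sabotaged model.
losses = {"a": 0.40, "b": 0.50, "c": 0.45, "byz": 9.70}
print(sorted(filter_clients(losses)))  # ['a', 'b', 'c']
```

The appeal of loss checks over gradient checks is that a loss is a single scalar that directly measures usefulness, whereas gradient statistics are high-dimensional and easier for an attacker to mimic.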
This research addresses a weakness in active defense systems against deepfakes (AI-generated fake videos or images): these defenses often fail when attackers retrain their models on protected samples. The authors propose a Two-Stage Defense Framework (TSDF) that uses dual-function adversarial perturbations (carefully designed noise patterns that disrupt both the deepfake output and the attacker's retraining process) to make defenses more persistent by poisoning the data (corrupting the training information) that attackers would use to adapt their models.
IEEE Xplore (Security & AI Journals)
Android malware is a major security threat because the Android operating system's open app ecosystem allows unverified applications to be installed, making it easier for malicious software to spread and steal data, perform unauthorized financial transactions, or remotely control devices. Researchers are using machine learning (algorithms that learn patterns from data) to detect malware by analyzing features of Android application packages (APK files, the file format for Android apps), with recent research focusing on three main approaches: selecting the most important features to analyze, combining multiple detection models together, and handling datasets where malicious apps are much rarer than legitimate ones.
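The third approach mentioned, handling datasets where malware is rare, is commonly addressed with inverse-frequency class weights, so each rare malicious sample counts more during training. A minimal sketch (the labels and counts are invented for illustration, not taken from any of the surveyed papers):

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: weight(c) = n / (k * count(c)),
    where n is the dataset size and k the number of classes, so
    rare classes receive proportionally larger weights."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Toy imbalanced dataset: 90 benign APKs, 10 malware APKs.
labels = ["benign"] * 90 + ["malware"] * 10
w = class_weights(labels)
print(w["malware"] / w["benign"])  # 9.0
```

With a 9:1 imbalance, each malware sample is weighted nine times a benign one, which keeps a classifier from trivially predicting "benign" for everything.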
This academic paper is a systematic literature review (a comprehensive analysis of existing research) about physical unclonable functions, or PUFs, which are hardware-based security features that create unique, unchangeable identifiers for devices based on their physical properties. Published in July 2026, the review examines how PUFs are modeled and studied across different research papers. The paper does not describe a security problem or vulnerability, but rather surveys current knowledge about how these security devices work.
This is an academic survey paper published in ACM Computing Surveys that examines alignment of diffusion models (AI systems trained to generate images or other content by gradually removing noise from random data). The paper covers fundamental concepts, current challenges in making these models behave as intended, and directions for future research in this area.
This is a literature review article published in an academic journal that surveys how machine learning (algorithms that learn patterns from data to make predictions) is being applied to cybersecurity problems. The article covers research across the field but does not describe a specific security vulnerability or incident requiring a fix.
This is a survey article that reviews research on selective forgetting in machine learning, which is the ability to remove or reduce specific information from a trained AI model without completely retraining it from scratch. The article covers methods and applications of this technique across various AI systems and domains. The survey appears to be an academic overview of current knowledge in this area rather than describing a specific problem or vulnerability.
This academic review examines how bias (systematic unfairness in AI decision-making) occurs in AI systems and explores the human roles, solutions, and research methods used to identify and reduce it. The paper surveys existing approaches to addressing bias rather than proposing a single new solution.
This is an academic survey article published in ACM Computing Surveys that discusses a question bank designed to help assess risks in AI systems responsibly. The article appears to be a comprehensive review of how organizations can evaluate potential harms and safety concerns when developing or deploying AI, rather than describing a specific vulnerability or problem.
This academic paper is a systematic review published in ACM Computing Surveys that examines how trust works in artificial intelligence systems using established trust theory frameworks. The article analyzes trust in AI through theoretical lenses rather than addressing a specific technical vulnerability or problem.
This survey article reviews methods for detecting training data used to build large language models (LLMs, which are AI systems trained on massive amounts of text to generate human-like responses). The paper examines various techniques that researchers have developed to identify and extract information about what data was used to train these models, which is important for understanding model behavior and potential privacy concerns.
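One family of techniques the survey covers, membership-inference-style tests, rests on a simple signal: models tend to assign unusually high probability to sequences they were trained on. The sketch below thresholds the average negative log-likelihood of a sequence; the per-token probabilities and the threshold are hypothetical stand-ins for a real model's outputs, not any specific method from the survey.

```python
import math

def avg_nll(token_probs):
    """Average negative log-likelihood of a sequence under a model,
    given the model's probability for each token."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def likely_member(token_probs, threshold=1.0):
    """Loss-thresholding membership heuristic: sequences the model
    assigns unusually high probability to (low average NLL) may have
    appeared in its training data."""
    return avg_nll(token_probs) < threshold

memorized = [0.90, 0.80, 0.95, 0.90]  # model is very confident
unseen    = [0.20, 0.10, 0.30, 0.15]  # model is uncertain
print(likely_member(memorized), likely_member(unseen))  # True False
```

Stronger variants in the literature calibrate this score against reference models or focus on the least-likely tokens, but the underlying confidence gap is the same.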