aisecwatch.com

Real-time AI security monitoring. Tracking AI-related vulnerabilities, safety and security incidents, privacy risks, research developments, and policy changes.


Maintained by

Truong (Jack) Luu

Information Systems Researcher

Research

Academic papers, new techniques, benchmarks, and theoretical findings in AI/LLM security.

227 items

Leveraging Cybersecurity for Capital Creation
info · research · Peer-Reviewed · policy
Sep 30, 2025

This academic paper argues that companies should view cybersecurity not just as a defensive cost (like insurance to prevent losses), but as a strategic investment that creates business value and competitive advantages. The paper offers guidance to information systems leaders on how organizations can benefit financially and operationally by practicing strong cybersecurity.

AIS eLibrary (Journal of AIS, CAIS, etc.)

A Three-Layer Model for Successful Organizational Digital Transformation
info · research · Peer-Reviewed · research

Successfully Mitigating AI Management Risks to Scale AI Globally
info · research · Peer-Reviewed · research

Building Confidential Accelerator Computing Environment for Arm CCA
info · research · Peer-Reviewed · research

Communicating Cybersecurity Decisions and Their Rationales Explicitly During and After CPS Design
info · research · Peer-Reviewed · policy

Blockchain-Assisted Weighted Threshold EdDSA With Rational Identifiable Aborts
info · research · Peer-Reviewed · research

Ultimate Encrypted Traffic Feature Engineering: HTTPS Encrypted Traffic Classification Using Restored Application Data Unit Length
info · research · Peer-Reviewed · research

Toward Efficient Multi-User Access Control Encrypted Search for Web Data Management
info · research · Peer-Reviewed · research

AI-Shielder: Exploiting Backdoors to Defend Against Adversarial Attacks
info · research · Peer-Reviewed · security

A New k-Anonymity Method Based on Generalization First k-Member Clustering for Healthcare Data
info · research · Peer-Reviewed · research

Secure Moving Object Detection in Compressed Video Using Attentions
info · research · Peer-Reviewed · research

SMS: Self-Supervised Model Seeding for Verification of Machine Unlearning
info · research · Peer-Reviewed · research

ASGA: Attention-Based Sparse Global Attack to Video Action Recognition
info · research · Peer-Reviewed · security

An Empirical Study of Federated Learning on IoT–Edge Devices: Resource Allocation and Heterogeneity
info · research · Peer-Reviewed · research

Hecate: Threshold Anonymous Credentials With Private Verifiers and Issuer-Hiding
info · research · Peer-Reviewed · security

Efficient Instruction Vulnerability Prediction With Heterogeneous SDC Propagation Knowledge Graph
info · research · Peer-Reviewed · research

RDSAD: Robust Threat Detection in Evolving Data Streams via Adaptive Latent Dynamics
info · research · Peer-Reviewed · research

Forseti: A Decentralized Permission Transfer Framework for IoT Leasing
info · research · Peer-Reviewed · security

Anti-Spoofing and Mask-Supported Face Authentication Using mmWave Without On-Site Registration
info · research · Peer-Reviewed · security

OCEAN: Optional Capability-Based En Route Acknowledgement in Network Layer
info · research · Peer-Reviewed · security
A Three-Layer Model for Successful Organizational Digital Transformation
Sep 30, 2025

This source describes a three-layer model for digital transformation in organizations, based on a case study of automotive supplier Continental AG. The model emphasizes that successful digital transformation requires simultaneous changes across IT systems, work practices (how employees actually do their jobs), and mindset evolution (how people think about their work), with these layers reinforcing each other.

AIS eLibrary (Journal of AIS, CAIS, etc.)
Successfully Mitigating AI Management Risks to Scale AI Globally
Sep 30, 2025

Many companies find it difficult to scale AI systems (machine learning models that learn patterns from data) globally because these systems make existing technology management problems worse and introduce new challenges. Based on a study of how industrial company Siemens AG handles this, the source identifies five critical risks in managing AI technology and offers recommendations for successfully deploying AI systems across an entire organization.

AIS eLibrary (Journal of AIS, CAIS, etc.)
Building Confidential Accelerator Computing Environment for Arm CCA
security
Sep 30, 2025

This research presents CAGE, a system that adds support for confidential accelerators (specialized processing hardware like GPUs and FPGAs) to Arm CCA (Confidential Computing Architecture, which creates isolated execution regions called realms for protecting sensitive data). The system uses a novel shadow task mechanism and memory isolation to protect data confidentiality and integrity without requiring hardware changes, achieving this with only moderate performance overhead.

IEEE Xplore (Security & AI Journals)
Communicating Cybersecurity Decisions and Their Rationales Explicitly During and After CPS Design
Sep 30, 2025

This research addresses how organizations should communicate security decisions for cyber-physical systems (CPS, which are machines that combine computing and physical operations like power plants or medical devices). Instead of just listing security requirements, the authors propose "Cyber Decision Diagrams," a visual tool that explains the reasoning behind security choices so that users, auditors, and manufacturers can better understand and collaborate on system security.

IEEE Xplore (Security & AI Journals)
Blockchain-Assisted Weighted Threshold EdDSA With Rational Identifiable Aborts
Sep 29, 2025

This paper presents EdFROST, a new threshold protocol for EdDSA (a cryptographic signature scheme used in distributed systems) that detects malicious behavior more efficiently than previous methods while reducing the computational overhead of zero-knowledge proofs (mathematical techniques that prove something is true without revealing how). The authors also propose a weighted threshold signature system that prevents powerful participants from dominating decisions, and they combine game theory (the study of strategic decision-making) with blockchain incentives to encourage honest behavior and resist DDoS attacks (attempts to overwhelm a system with traffic).

Fix: The source proposes EdFROST as the solution, which is described as being "unforgeable and supports identifiable aborts under a chosen-message attack." The paper also states that they "design a game-theoretic incentive model, implemented via tamper-proof chaincode, achieving rational identifiable aborts with a unique sequential equilibrium" to incentivize honest behavior, ensure efficient abort handling, and resist DDoS attacks. The authors note that "experimental results demonstrate that the EdFROST and chaincode are efficient and lightweight, making them well-suited for large-scale distributed systems."

IEEE Xplore (Security & AI Journals)
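The "threshold" in schemes like EdFROST can be illustrated with plain Shamir secret sharing: any t of n shares reconstruct a secret, while fewer than t reveal nothing. The sketch below shows only that underlying idea and is not the EdFROST protocol (which never reconstructs the signing key in one place); the prime and values are arbitrary choices for the example.

```python
import secrets

# Toy (t, n) Shamir secret sharing over a prime field.
P = 2**127 - 1  # Mersenne prime used as the field modulus

def share(secret, t, n):
    # Random polynomial of degree t-1 with f(0) = secret; share i is (i, f(i)).
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0) = secret.
    acc = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        acc = (acc + yi * num * pow(den, -1, P)) % P
    return acc

secret = 123456789
shares = share(secret, t=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover the secret
print(reconstruct(shares[1:4]))
```

With fewer than t shares the interpolation is underdetermined, so every candidate secret remains equally likely — which is what makes a threshold of signers necessary in the first place.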
Ultimate Encrypted Traffic Feature Engineering: HTTPS Encrypted Traffic Classification Using Restored Application Data Unit Length
Sep 29, 2025

This research presents a method to classify encrypted internet traffic (HTTPS, a protocol that scrambles data sent over the internet) by reconstructing the original application data sizes hidden beneath encryption layers. The researchers developed an algorithm called LC-MRNN (Length-Correction Multiple Regression Neural Network, a type of machine learning model) to accurately restore these hidden data lengths, which helps network administrators and security teams identify what applications users are running, even when the actual data is encrypted.

IEEE Xplore (Security & AI Journals)
Toward Efficient Multi-User Access Control Encrypted Search for Web Data Management
Sep 29, 2025

This research presents SEOMA, a new system for searchable encryption (SE, a method that lets users store encrypted data on servers while still being able to search it by keywords without revealing the data's contents). The system improves on existing approaches by supporting multiple users accessing the same data while also verifying that the data owner is legitimate and preventing malicious owners from uploading fake encrypted files. SEOMA uses attribute encryption (a technique that controls who can decrypt data based on their characteristics) and access control policies to manage which users can access what data, while using less computing power and bandwidth than previous solutions.

IEEE Xplore (Security & AI Journals)
AI-Shielder: Exploiting Backdoors to Defend Against Adversarial Attacks
research
Sep 29, 2025

Deep neural networks (DNNs, machine learning models with many layers that learn patterns from data) are vulnerable to adversarial attacks, where small, carefully crafted changes to input data trick the AI into making wrong predictions, especially in critical areas like self-driving cars. This paper presents AI-Shielder, a method that intentionally embeds backdoors (hidden pathways that alter how the model behaves) into neural networks to detect and block adversarial attacks while keeping the AI's normal performance intact. Testing shows AI-Shielder reduces successful attacks from 91.8% to 3.8% with only minor slowdowns.

Fix: AI-Shielder is the proposed solution presented in the paper. According to the results, it 'reduces the attack success rate from 91.8% to 3.8%, which outperforms the state-of-the-art works by 37.2%, with only a 0.6% decline in the clean data accuracy' and 'introduces only 1.43% overhead to the model prediction time, almost negligible in most cases.' The approach works by leveraging intentionally embedded backdoors to fail adversarial perturbations while maintaining original task performance.

IEEE Xplore (Security & AI Journals)
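The kind of perturbation AI-Shielder defends against can be sketched with a minimal gradient-sign attack on a toy linear classifier. This is a generic illustration of adversarial examples, not the paper's method; the model, seed, and step size below are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed "trained" linear model: class = sign(w . x)
w = rng.normal(size=20)

def predict(x):
    return 1 if w @ x > 0 else -1

x = rng.normal(size=20)
if predict(x) == -1:
    x = -x  # ensure the clean input is classified as +1

# Gradient-sign step: move every feature slightly against the class score.
# eps is chosen as the smallest uniform step that flips the prediction,
# so the change per feature stays tiny relative to the input.
eps = (w @ x) / np.abs(w).sum() + 1e-6
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # clean input: +1, perturbed input: -1
```

Every coordinate moves by only `eps`, yet the prediction flips — the imperceptible-change, large-effect property that defenses like AI-Shielder aim to detect.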
A New k-Anonymity Method Based on Generalization First k-Member Clustering for Healthcare Data
privacy
Sep 29, 2025

Healthcare organizations are collecting more patient data than ever, which creates privacy risks. This research proposes GFKMC (Generalization First k-Member Clustering), a privacy method that protects patient identities by grouping similar records together while keeping the data useful for analysis. It outperforms older methods by losing less information as the level of privacy protection increases.

IEEE Xplore (Security & AI Journals)
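The generalization step at the heart of k-anonymity can be sketched on a toy table: coarsen quasi-identifiers (age ranges, ZIP prefixes) until every combination covers at least k records. This shows only the basic idea — GFKMC's generalization-first clustering is more sophisticated — and the records below are invented.

```python
from collections import Counter

# Toy patient records: age and ZIP are quasi-identifiers, dx is sensitive.
records = [
    {"age": 23, "zip": "10001", "dx": "flu"},
    {"age": 27, "zip": "10002", "dx": "asthma"},
    {"age": 31, "zip": "10003", "dx": "flu"},
    {"age": 36, "zip": "10004", "dx": "diabetes"},
    {"age": 52, "zip": "20001", "dx": "flu"},
    {"age": 57, "zip": "20002", "dx": "asthma"},
]

def generalize(rec, age_width, zip_digits):
    # Replace exact values with an age bucket and a masked ZIP prefix.
    lo = rec["age"] // age_width * age_width
    return (f"{lo}-{lo + age_width - 1}",
            rec["zip"][:zip_digits] + "*" * (5 - zip_digits))

def is_k_anonymous(quasi_ids, k):
    # Every quasi-identifier combination must appear at least k times.
    return min(Counter(quasi_ids).values()) >= k

# Coarsen stepwise until 2-anonymity holds.
k = 2
for age_width, zip_digits in [(10, 3), (20, 2), (40, 1)]:
    quasi = [generalize(r, age_width, zip_digits) for r in records]
    if is_k_anonymous(quasi, k):
        break

for q, r in zip(quasi, records):
    print(q, r["dx"])   # e.g. ('20-29', '100**') flu
```

The trade-off GFKMC optimizes is visible even here: wider buckets give stronger anonymity but destroy more analytical detail, so a good method stops at the least generalization that still satisfies k.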
Secure Moving Object Detection in Compressed Video Using Attentions
privacy
Sep 29, 2025

This research presents a method for detecting moving objects in encrypted video without decrypting it, protecting privacy when video processing is done in the cloud. The approach uses selective encryption (encrypting only certain parts of compressed video) and extracts motion information from encrypted video data, then applies deep learning with attention mechanisms (a technique that helps the AI focus on important regions) to identify moving objects even with incomplete information.

IEEE Xplore (Security & AI Journals)
SMS: Self-Supervised Model Seeding for Verification of Machine Unlearning
security
Sep 29, 2025

Machine unlearning (the process of removing a user's data from a trained AI model) needs verification to confirm that genuine user data was actually deleted, but current methods using backdoors (hidden triggers added to test if data is gone) can't properly verify removal of real user samples. This paper proposes SMS, or Self-Supervised Model Seeding, which embeds user-specific identifiers into the model's internal representation to directly link users' actual data with the model, enabling better verification that genuine samples were truly unlearned.

IEEE Xplore (Security & AI Journals)
ASGA: Attention-Based Sparse Global Attack to Video Action Recognition
research
Sep 26, 2025

This paper presents ASGA, a method for creating adversarial attacks (small, crafted changes meant to trick AI models) on video action recognition systems (AI models that identify what actions people are performing in videos). The key innovation is that attackers can compute perturbations (the malicious changes) just once on important keyframes (selected frames that represent the video's content), then replicate these changes across the entire video, making the attack work even when the model samples frames differently and reducing computational cost.

IEEE Xplore (Security & AI Journals)
An Empirical Study of Federated Learning on IoT–Edge Devices: Resource Allocation and Heterogeneity
Sep 26, 2025

This research studies federated learning (FL, a method where multiple devices collaboratively train an AI model without sending their data to a central server) on real IoT and edge devices (small computing devices like phones and sensors) rather than in simulated environments. The study examines how FL performs in realistic conditions, focusing on heterogeneous scenarios (situations where devices have different computing power, network speeds, and data types), and provides insights to help researchers and practitioners build more practical FL systems.

IEEE Xplore (Security & AI Journals)
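The baseline most FL systems build on, federated averaging, has each client fit a model on its private data and the server average the resulting weights by sample count. A minimal sketch on invented data — this is generic FedAvg, not the study's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])   # ground truth the clients jointly learn

def make_client(n):
    # Each client holds its own private (X, y) samples of a shared pattern.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

# Heterogeneous clients: different amounts of local data.
clients = [make_client(n) for n in (50, 200, 80)]

def local_fit(X, y):
    # Local training step: least-squares fit on the client's own data only.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def fed_avg(clients):
    # Server aggregates client weights, weighted by local sample count.
    total = sum(len(y) for _, y in clients)
    return sum(len(y) / total * local_fit(X, y) for X, y in clients)

w_global = fed_avg(clients)
print(w_global)   # close to [2, -1], with no raw data ever leaving a client
```

The heterogeneity the paper studies shows up exactly here: clients with more data (or faster hardware) dominate the average and finish sooner, which is why real deployments need careful resource allocation.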
Hecate: Threshold Anonymous Credentials With Private Verifiers and Issuer-Hiding
Sep 25, 2025

Hecate is a framework for anonymous credentials (a system allowing users to prove they have certain attributes without revealing their identity) that adds protection for verifiers, the entities checking credentials, while maintaining threshold issuance (requiring multiple parties to approve a credential) and issuer-hiding (hiding which organization issued the credential). The system uses a dual-credential design to let both verifiers and users set policies about who can access information, and testing shows it can verify credentials quickly, in about 37-60 milliseconds.

IEEE Xplore (Security & AI Journals)
Efficient Instruction Vulnerability Prediction With Heterogeneous SDC Propagation Knowledge Graph
Sep 25, 2025

Silent Data Corruption (SDC, where a computer system produces wrong outputs without alerting anyone) is a growing problem in modern chip designs, but current detection methods are inefficient or inaccurate. Researchers proposed VP-HPKG, a new approach that uses a knowledge graph (a map of how instructions relate to each other) combined with neural network techniques to predict which instructions are vulnerable to SDC and detect error propagation paths more efficiently than existing methods.

IEEE Xplore (Security & AI Journals)
RDSAD: Robust Threat Detection in Evolving Data Streams via Adaptive Latent Dynamics
security
Sep 24, 2025

RDSAD is an AI-based security system designed to detect cyberattacks on Cyber-Physical Systems (CPSs, which are machines that combine physical equipment with software to automate industrial processes). The system works without manual labeling and uses two techniques: one to understand how the system normally behaves, and another to adapt when patterns change, helping it catch attacks while avoiding false alarms.

IEEE Xplore (Security & AI Journals)
Forseti: A Decentralized Permission Transfer Framework for IoT Leasing
Sep 24, 2025

IoT devices used in rental situations like Airbnbs need secure ways to transfer permission (access rights) from owners to renters, but current systems don't properly prevent problems like a malicious owner keeping camera access after handing it over. Forseti is a new authorization framework that uses zero-knowledge proof (a cryptographic method proving something is true without revealing the details) and a decentralized ledger (a shared, distributed record not controlled by any single party) to protect both owners' and renters' control over devices during permission transfers.

Fix: The source presents Forseti as a proposed solution framework that 'leverages zero-knowledge proof and a decentralized ledger to ensure that the rights of both hosts and tenants are not violated.' However, the source does not describe a specific implementation step, patch, update, or deployment procedure that users can apply.

IEEE Xplore (Security & AI Journals)
Anti-Spoofing and Mask-Supported Face Authentication Using mmWave Without On-Site Registration
Sep 24, 2025

This research presents mmFace, a face authentication system that uses millimeter wave radar (mmWave, radio signals that can penetrate materials and detect fine details) instead of cameras to verify a person's identity while resisting spoofing attacks (fake faces or replayed recordings). The system works even when users wear masks because mmWave signals can pass through them, and it uses techniques like liveness detection (checking that a face is real and alive) and amplitude modulation-based methods to prevent attackers from fooling it with fake faces or recorded videos.

IEEE Xplore (Security & AI Journals)
OCEAN: Optional Capability-Based En Route Acknowledgement in Network Layer
Sep 24, 2025

OCEAN is a security system designed for Industrial IoT (the use of connected devices in factories and industrial settings) that aims to prevent packet loss (data getting dropped during transmission) while keeping data transmission fast and secure. It uses specialized hardware (an ASIC and FPGA, which are types of programmable computer chips) combined with a network protocol (set of rules for how data moves between devices) that verifies packets at each hop and caches (temporarily stores) them until receiving confirmation they arrived safely.

IEEE Xplore (Security & AI Journals)
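The verify-cache-forward-until-acknowledged behavior described for OCEAN can be sketched as a toy in-memory relay chain. The class and method names below are invented for illustration; the actual system implements per-hop verification and caching in ASIC/FPGA hardware, not Python.

```python
class Hop:
    def __init__(self, name):
        self.name = name
        self.cache = {}                 # packet_id -> payload, held until acked

    def receive(self, pkt_id, payload, checksum):
        if checksum != hash(payload):   # per-hop verification
            return False                # drop a corrupted packet immediately
        self.cache[pkt_id] = payload    # cache until downstream confirms
        return True

    def acknowledge(self, pkt_id):
        self.cache.pop(pkt_id, None)    # safe to evict once delivery confirmed

def send(path, pkt_id, payload):
    # Forward hop by hop; each hop verifies and caches the packet.
    for hop in path:
        if not hop.receive(pkt_id, payload, hash(payload)):
            return False
    # Destination reached: acknowledgements propagate back, clearing caches.
    for hop in reversed(path):
        hop.acknowledge(pkt_id)
    return True

path = [Hop("edge"), Hop("aggregator"), Hop("core")]
ok = send(path, 1, b"sensor-reading")
print(ok, all(not h.cache for h in path))   # delivered, all caches drained
```

Because every hop keeps a copy until the acknowledgement arrives, a loss at any link can be repaired from the previous hop's cache instead of re-sending from the source — the property the summary attributes to OCEAN's en-route acknowledgement design.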