AISM: Adversarial image steganography model for defending unauthorized recognition
Summary
Researchers have developed AISM (Adversarial Image Steganography Model), a method for protecting images from recognition by unauthorized AI systems. The approach combines adversarial perturbations (subtle, visually imperceptible changes that deliberately mislead AI models) with steganography (the practice of hiding information within other data), so that images resist automated AI analysis while remaining visually normal to humans. This work addresses privacy concerns where people want to prevent their images from being processed by AI systems without permission.
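The paper's full AISM pipeline is not reproduced in this summary, but the core adversarial-perturbation idea it builds on can be sketched with a classic FGSM-style step (Goodfellow et al.). The toy linear "classifier", the `fgsm_perturb` helper, and the epsilon budget below are all illustrative assumptions, not the authors' actual model:

```python
import numpy as np

def fgsm_perturb(image, weights, true_label, epsilon=0.03):
    """FGSM-style perturbation against a toy linear classifier.

    image: flattened pixel array with values in [0, 1]
    weights: per-class weight matrix, shape (n_classes, n_pixels)
    true_label: class index whose score we want to suppress
    epsilon: per-pixel perturbation budget (kept small so the
             change stays visually imperceptible)
    """
    # For a linear score s_c = weights[c] @ image, the gradient of the
    # true-class score with respect to the pixels is simply that row.
    grad = weights[true_label]
    # Step *against* the gradient sign: lowers the true-class score
    # while bounding every pixel change by epsilon.
    adv = np.clip(image - epsilon * np.sign(grad), 0.0, 1.0)
    return adv

# Toy demonstration on random data (hypothetical 3-class model).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 64))           # toy linear classifier
x = rng.uniform(0.2, 0.8, size=64)     # toy "image"
label = int(np.argmax(W @ x))          # class the model currently predicts
x_adv = fgsm_perturb(x, W, true_label=label)
```

The perturbed image differs from the original by at most epsilon per pixel, yet the targeted class score strictly decreases; AISM-style methods add a steganographic payload on top of such a perturbation, which this sketch does not attempt.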
Original source: https://www.sciencedirect.com/science/article/pii/S2214212626000839?dgcid=rss_sd_all
First tracked: April 3, 2026 at 02:01 PM
Classified by LLM (prompt v3) · confidence: 75%