The security intelligence platform for AI teams
AI security threats move fast and get buried under hype and noise. Built by an Information Systems Security researcher to help security teams and developers stay ahead of vulnerabilities, privacy incidents, safety research, and policy developments.
Independent research. No sponsors, no paywalls, no conflicts of interest.
No new AI/LLM security issues were identified today.
This research introduces SEGA, a method for attacking No-Reference Image Quality Assessment (NR-IQA) models (AI systems that evaluate image quality without comparing against a reference image) in black-box scenarios where attackers cannot access the target model's parameters or gradients. SEGA estimates gradients using Gaussian smoothing (a mathematical technique that approximates the direction of change in a model's output) across multiple surrogate models, then applies a filter to keep the perturbations hard to detect. The method demonstrates improved transferability of attacks across different NR-IQA models.
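The paper's exact formulation isn't reproduced here, but the core idea (estimating the gradient of a black-box quality score via Gaussian smoothing, then averaging the estimates over several surrogate models) can be sketched as follows. All function names and parameters below are illustrative, not taken from the SEGA implementation:

```python
import numpy as np

def gaussian_smoothed_grad(score_fn, image, sigma=0.05, n_samples=50, rng=None):
    """Zeroth-order gradient estimate of score_fn at `image`.

    Samples Gaussian perturbations u and uses the symmetric finite
    difference (f(x + s*u) - f(x - s*u)) / (2*s) * u, whose expectation
    approximates the gradient of the Gaussian-smoothed score.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    grad = np.zeros_like(image)
    for _ in range(n_samples):
        u = rng.standard_normal(image.shape)
        diff = score_fn(image + sigma * u) - score_fn(image - sigma * u)
        grad += (diff / (2.0 * sigma)) * u
    return grad / n_samples

def ensemble_grad(score_fns, image, **kwargs):
    """Average the smoothed-gradient estimates across surrogate models,
    a common way to improve transferability to an unseen target model."""
    estimates = [gaussian_smoothed_grad(f, image, **kwargs) for f in score_fns]
    return np.mean(estimates, axis=0)
```

An attacker would then take small steps along (or against) this averaged gradient to raise or lower the predicted quality score while a smoothing filter keeps the perturbation visually subtle.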