Bias-Free? An Empirical Study on Ethnicity, Gender, and Age Fairness in Deepfake Detection
info · research · Peer-Reviewed
research · safety
Source: ACM Digital Library (TOPS, DTRAP, CSUR) · March 16, 2026
Summary
This research paper examines whether deepfake detection systems (AI tools that identify synthetically manipulated videos) perform fairly across groups defined by ethnicity, gender, and age. The study found that these detectors often perform unevenly, working better for some demographic groups than for others. The paper argues that such bias in deepfake detection is a significant fairness problem that deserves attention.
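To make the notion of "performing differently across groups" concrete, here is a minimal, hypothetical sketch of how per-group performance gaps might be measured. The group names, data, and metric choice (per-group error rate and the max-min gap) are illustrative assumptions, not the paper's actual methodology or results.

```python
# Hypothetical sketch: measuring per-group performance gaps in a
# deepfake detector's predictions. Groups and numbers are illustrative.
from collections import defaultdict

def per_group_error_rate(records):
    """records: iterable of (group, true_label, predicted_label).
    Returns {group: fraction of misclassified samples}."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        if y_true != y_pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def fairness_gap(rates):
    """Largest difference in error rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Toy data: (demographic group, ground truth, detector output),
# where 1 = deepfake and 0 = real.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 0),
]
rates = per_group_error_rate(records)
print(rates)                # group_a errs on 0/4, group_b on 2/4
print(fairness_gap(rates))  # 0.5
```

A nonzero gap indicates the detector's reliability depends on group membership, which is the kind of disparity the paper investigates.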
Classification
Attack Sophistication: Moderate
Impact (CIA+S): Safety
AI Component Targeted: Model
Original source: https://dl.acm.org/doi/abs/10.1145/3796544?af=R
First tracked: March 16, 2026 at 05:11 PM
Classified by LLM (prompt v3) · confidence: 85%