A Systematic Review on Human Roles, Solutions, and Methodological Approaches to Address Bias in AI
Tags: info · research · safety · Peer-Reviewed
Source: ACM Digital Library (TOPS, DTRAP, CSUR) · March 16, 2026
Summary
This academic review examines how bias (systematic unfairness in AI decision-making) arises in AI systems, and surveys the human roles, solutions, and research methods used to identify and mitigate it. Rather than proposing a single new technique, the paper systematically maps the existing approaches to addressing bias.
Classification
Attack Sophistication: Moderate
Impact (CIA+S): Safety
Original source: https://dl.acm.org/doi/abs/10.1145/3793667?af=R
First tracked: March 16, 2026, 05:11 PM
Classified by LLM (prompt v3) · confidence: 92%