Revisiting Out-of-Distribution Detection in Real-Time Object Detection: From Benchmark Pitfalls to a New Mitigation Paradigm
Summary
Out-of-distribution (OoD) inputs, i.e. objects unlike anything an AI model was trained on, can cause object detectors to make overconfident, incorrect predictions on objects they should not recognize. This paper reveals that popular benchmark datasets used to evaluate OoD detection have quality problems: up to 13% of test objects are mislabeled, making current methods appear better than they really are. The authors propose a new training-time approach in which object detectors are fine-tuned on carefully synthesized OoD data that looks similar to normal (in-distribution) objects, reducing hallucinated detections by 91% in YOLO models.
Solution / Mitigation
The paper introduces a training-time mitigation paradigm where 'we fine-tune the detector using a carefully synthesized OoD dataset that semantically resembles in-distribution objects.' This approach 'shapes a defensive decision boundary by suppressing objectness on OoD objects' and achieves 'a 91% reduction in hallucination error of a YOLO model on BDD-100K.' The methodology is shown to work across multiple detection architectures including YOLO, Faster R-CNN, and RT-DETR.
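The paper's exact training objective is not reproduced here. As an illustration only, the core idea (supervising the detector's objectness score toward zero on synthesized OoD boxes during fine-tuning, while in-distribution boxes keep their usual targets) can be sketched as follows; all function and variable names are hypothetical, not taken from the paper:

```python
import numpy as np

def objectness_bce(pred, target, eps=1e-7):
    """Binary cross-entropy over per-anchor objectness scores."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean())

def ood_finetune_loss(obj_scores, is_ood, id_targets):
    """Hypothetical fine-tuning objective: anchors matched to synthesized
    OoD objects are pushed toward zero objectness (suppressing detections
    on OoD inputs); in-distribution anchors keep their normal targets."""
    targets = np.where(is_ood, 0.0, id_targets)
    return objectness_bce(obj_scores, targets)

# A detector that is confident (0.95) on an OoD object incurs a large loss,
# while one that already suppresses it (0.05) incurs a small loss.
confident = ood_finetune_loss(np.array([0.95]), np.array([True]), np.array([1.0]))
suppressed = ood_finetune_loss(np.array([0.05]), np.array([True]), np.array([1.0]))
```

Minimizing this kind of loss over synthesized near-distribution OoD samples is one way to shape the "defensive decision boundary" the paper describes.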
Classification
Affected Vendors
Original source: http://ieeexplore.ieee.org/document/11328890
First tracked: April 6, 2026 at 08:03 PM
Classified by LLM (prompt v3) · confidence: 92%