Uncertainty-Aware Likelihood Ratio Estimation for Pixel-Wise Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2508.00587v1
- Date: Fri, 01 Aug 2025 12:39:16 GMT
- Title: Uncertainty-Aware Likelihood Ratio Estimation for Pixel-Wise Out-of-Distribution Detection
- Authors: Marc Hölle, Walter Kellermann, Vasileios Belagiannis
- Abstract summary: We introduce an uncertainty-aware likelihood ratio estimation method to distinguish between known and unknown pixel features. We show that by incorporating uncertainty in this way, outlier exposure can be leveraged more effectively. Our method achieves the lowest average false positive rate (2.5%) among state-of-the-art methods while maintaining high average precision (90.91%) and incurring only negligible computational overhead.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic segmentation models trained on known object classes often fail in real-world autonomous driving scenarios by confidently misclassifying unknown objects. While pixel-wise out-of-distribution detection can identify unknown objects, existing methods struggle in complex scenes where rare object classes are often confused with truly unknown objects. We introduce an uncertainty-aware likelihood ratio estimation method that addresses these limitations. Our approach uses an evidential classifier within a likelihood ratio test to distinguish between known and unknown pixel features from a semantic segmentation model, while explicitly accounting for uncertainty. Instead of producing point estimates, our method outputs probability distributions that capture uncertainty from both rare training examples and imperfect synthetic outliers. We show that by incorporating uncertainty in this way, outlier exposure can be leveraged more effectively. Evaluated on five standard benchmark datasets, our method achieves the lowest average false positive rate (2.5%) among state-of-the-art methods while maintaining high average precision (90.91%) and incurring only negligible computational overhead. Code is available at https://github.com/glasbruch/ULRE.
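To make the abstract's core idea concrete, here is a minimal sketch of an uncertainty-aware likelihood ratio score built from a binary evidential head over pixel features. The class names, the Beta-distribution parameterization, and the classifier-based ratio are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialLikelihoodRatioHead(nn.Module):
    """Sketch: binary evidential head over pixel features.

    Instead of a point estimate of P(unknown | feature), the head outputs
    Beta(alpha_unknown, alpha_known) parameters per pixel, so the
    likelihood-ratio score comes with an uncertainty estimate.
    """

    def __init__(self, feat_dim: int):
        super().__init__()
        # Two evidence maps: channel 0 = known, channel 1 = unknown.
        self.evidence = nn.Conv2d(feat_dim, 2, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        # Non-negative evidence -> Beta parameters (alpha = evidence + 1).
        alpha = F.softplus(self.evidence(feats)) + 1.0
        a_known, a_unknown = alpha[:, 0], alpha[:, 1]
        total = a_known + a_unknown
        p_unknown = a_unknown / total  # expected P(unknown | feature)
        # Classifier-based likelihood ratio: p(f|unknown)/p(f|known) is
        # monotone in s/(1-s) for a Bayes-optimal classifier score s.
        log_lr = torch.log(p_unknown) - torch.log1p(-p_unknown)
        # Beta variance: large for pixels backed by little evidence,
        # e.g. rare classes or imperfect synthetic outliers.
        var = (a_known * a_unknown) / (total.pow(2) * (total + 1.0))
        return log_lr, var
```

At test time one could threshold `log_lr` to flag unknown pixels and use `var` to down-weight decisions the head has little evidence for.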
Related papers
- Road Obstacle Detection based on Unknown Objectness Scores
Anomaly-detection techniques make it possible to identify pixels of unknown objects as out-of-distribution (OoD) samples.
This study aims to achieve stable detection of unknown objects by incorporating object-detection cues into pixel-wise anomaly detection methods; a fusion of this kind is sketched after this entry.
arXiv Detail & Related papers (2024-03-27T02:35:36Z)
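The entry above combines object-level cues with pixel-wise anomaly scores. A minimal sketch of such a fusion follows; the inputs, their normalization, and the multiplicative combination are illustrative assumptions, not the paper's exact scoring rule.

```python
import numpy as np

def unknown_objectness_score(anomaly: np.ndarray, objectness: np.ndarray) -> np.ndarray:
    """Fuse a pixel-wise anomaly map with an objectness map (both H x W,
    values in [0, 1]). Requiring both cues to agree suppresses anomaly
    responses on background texture while keeping responses on
    object-like regions that the segmenter cannot explain."""
    assert anomaly.shape == objectness.shape
    return anomaly * objectness
```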
- Credible Teacher for Semi-Supervised Object Detection in Open Scene
In Open Scene Semi-Supervised Object Detection (O-SSOD), unlabeled data may contain unknown objects not observed in the labeled data.
This is detrimental to current methods, which mainly rely on self-training, since greater uncertainty lowers the localization and classification precision of pseudo labels.
We propose Credible Teacher, an end-to-end framework to prevent uncertain pseudo labels from misleading the model.
arXiv Detail & Related papers (2024-01-01T08:19:21Z)
- One step closer to unbiased aleatoric uncertainty estimation
We propose a new estimation method that actively de-noises the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Distributional Instance Segmentation: Modeling Uncertainty and High Confidence Predictions with Latent-MaskRCNN
In this paper, we explore a class of distributional instance segmentation models using latent codes.
For robotic picking applications, we propose a confidence mask method to achieve the high precision necessary.
We show that our method can significantly reduce critical errors in robotic systems, including our newly released dataset of ambiguous scenes.
arXiv Detail & Related papers (2023-05-03T05:57:29Z)
- Confidence-Aware and Self-Supervised Image Anomaly Localisation
We discuss an improved self-supervised single-class training strategy that supports the approximation of probabilistic inference under loosened feature-locality constraints.
Our method is integrated into several out-of-distribution (OOD) detection models, and we show evidence that it outperforms the state-of-the-art on various benchmark datasets.
arXiv Detail & Related papers (2023-03-23T12:48:47Z)
- The Treasure Beneath Multiple Annotations: An Uncertainty-aware Edge Detector
Existing methods fuse multiple annotations using a simple voting process, ignoring the inherent ambiguity of edges and labeling bias of annotators.
We propose a novel uncertainty-aware edge detector (UAED), which employs uncertainty to investigate the subjectivity and ambiguity of diverse annotations.
UAED achieves superior performance consistently across multiple edge detection benchmarks.
arXiv Detail & Related papers (2023-03-21T13:14:36Z)
- Pixel-wise Gradient Uncertainty for Convolutional Neural Networks applied to Out-of-Distribution Segmentation
We present a method for obtaining uncertainty scores from pixel-wise loss gradients that can be computed efficiently during inference.
Our experiments show that our method identifies wrong pixel classifications and estimates prediction quality at negligible computational overhead; a minimal version of such a score is sketched after this entry.
arXiv Detail & Related papers (2023-03-13T08:37:59Z)
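A minimal version of such a gradient score is sketched below (an assumption-based reading, not the paper's exact formulation): with the predicted class as pseudo-label, the cross-entropy gradient with respect to the logits has the closed form softmax(z) - onehot(argmax z), so its per-pixel norm needs no backward pass.

```python
import torch

def pixelwise_gradient_uncertainty(logits: torch.Tensor) -> torch.Tensor:
    """logits: (C, H, W) segmentation logits -> (H, W) uncertainty map.

    Larger gradient norms indicate pixels where the predicted label is
    poorly supported, which correlates with misclassified or OoD pixels.
    """
    probs = logits.softmax(dim=0)
    one_hot = torch.zeros_like(probs)
    one_hot.scatter_(0, probs.argmax(dim=0, keepdim=True), 1.0)
    # Closed-form cross-entropy gradient w.r.t. logits at the pseudo-label.
    return (probs - one_hot).norm(p=2, dim=0)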
- Benchmarking common uncertainty estimation methods with histopathological images under domain shift and label noise
In high-risk environments, deep learning models need to be able to judge their uncertainty and reject inputs when there is a significant chance of misclassification.
We conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole Slide Images.
We observe that ensembles of methods generally lead to better uncertainty estimates as well as an increased robustness towards domain shifts and label noise.
arXiv Detail & Related papers (2023-01-03T11:34:36Z)
- Calibrating Ensembles for Scalable Uncertainty Quantification in Deep Learning-based Medical Segmentation
Uncertainty quantification in automated image analysis is highly desired in many applications.
Current uncertainty quantification approaches do not scale well in high-dimensional real-world problems.
We propose a scalable and intuitive framework to calibrate ensembles of deep learning models to produce uncertainty quantification measurements.
arXiv Detail & Related papers (2022-09-20T09:09:48Z)
- What is Flagged in Uncertainty Quantification? Latent Density Models for Uncertainty Categorization
Uncertainty Quantification (UQ) is essential for creating trustworthy machine learning models.
Recent years have seen a steep rise in UQ methods that can flag suspicious examples.
We propose a framework for categorizing uncertain examples flagged by UQ methods in classification tasks.
arXiv Detail & Related papers (2022-07-11T19:47:00Z)
- An Uncertainty Estimation Framework for Probabilistic Object Detection
We introduce a new technique that combines two popular methods to estimate uncertainty in object detection.
Our framework employs deep ensembles and Monte Carlo dropout to approximate predictive uncertainty; a combined sketch follows this entry.
arXiv Detail & Related papers (2021-06-28T22:29:59Z)
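As a rough illustration of the framework above, the sketch below averages predictions over ensemble members and Monte Carlo dropout passes, then scores uncertainty by predictive entropy. The sampling counts and the entropy criterion are assumptions, not the paper's exact recipe.

```python
import torch

@torch.no_grad()
def ensemble_mc_dropout_entropy(models, x: torch.Tensor, mc_samples: int = 8) -> torch.Tensor:
    """Average class probabilities over ensemble members and stochastic
    dropout passes, then return the entropy of the mean prediction
    (higher entropy = higher predictive uncertainty)."""
    probs = []
    for model in models:
        model.train()  # keeps dropout active; in practice, enable only dropout layers
        for _ in range(mc_samples):
            probs.append(model(x).softmax(dim=-1))
    mean = torch.stack(probs).mean(dim=0)
    return -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
```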