Calibrated Perception Uncertainty Across Objects and Regions in
Bird's-Eye-View
- URL: http://arxiv.org/abs/2211.04340v1
- Date: Tue, 8 Nov 2022 16:01:17 GMT
- Title: Calibrated Perception Uncertainty Across Objects and Regions in
Bird's-Eye-View
- Authors: Markus Kängsepp, Meelis Kull
- Abstract summary: We highlight limitations in the state-of-the-art and propose a more complete set of uncertainties to be reported.
We demonstrate that the obtained probabilities are not calibrated out-of-the-box and propose methods to achieve well-calibrated uncertainties.
- Score: 2.3943776585660976
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In driving scenarios with poor visibility or occlusions, it is important that
the autonomous vehicle take all uncertainties into account when making driving
decisions, including the choice of a safe speed. Grid-based perception outputs,
such as occupancy grids, and object-based outputs, such as lists of detected
objects, must therefore be accompanied by well-calibrated uncertainty estimates.
We highlight limitations in the state-of-the-art and propose a more complete set
of uncertainties to be reported, in particular including undetected-object-ahead
probabilities. We suggest a novel way to derive these probabilistic outputs from
bird's-eye-view probabilistic semantic segmentation, using the FIERY model as an
example. We demonstrate that the obtained probabilities are not calibrated
out-of-the-box and propose methods to achieve well-calibrated uncertainties.
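The abstract describes two steps: deriving an undetected-object-ahead probability from bird's-eye-view segmentation probabilities, and recalibrating the resulting probabilities. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's exact procedure: it aggregates per-cell object probabilities from a hypothetical BEV grid (`bev_probs`) over the planned ego path, assuming independence between cells for simplicity, and then applies post-hoc temperature scaling fitted on held-out labels. All function and variable names here are hypothetical.

```python
import numpy as np

def object_ahead_probability(bev_probs, path_cells):
    """Aggregate per-cell object probabilities from a BEV segmentation grid
    into a single 'undetected object ahead' probability along the ego path.

    bev_probs:  (H, W) array of per-cell P(cell contains an object)
    path_cells: iterable of (row, col) indices covered by the planned path
    Assumes independence between cells -- a simplification, not the paper's model.
    """
    p = np.array([bev_probs[r, c] for r, c in path_cells])
    # P(at least one occupied cell on the path) = 1 - prod_i (1 - p_i)
    return 1.0 - np.prod(1.0 - p)

def fit_temperature(logits, labels, grid=np.linspace(0.25, 4.0, 64)):
    """Post-hoc temperature scaling for binary probabilities.

    logits: (N,) uncalibrated log-odds; labels: (N,) binary ground truth.
    Returns the temperature minimising negative log-likelihood over a grid.
    """
    best_t, best_nll = 1.0, np.inf
    for t in grid:
        p = np.clip(1.0 / (1.0 + np.exp(-logits / t)), 1e-7, 1 - 1e-7)
        nll = -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t

def calibrate(prob, temperature):
    """Apply a fitted temperature to a single probability."""
    prob = np.clip(prob, 1e-7, 1 - 1e-7)
    logit = np.log(prob) - np.log1p(-prob)
    return 1.0 / (1.0 + np.exp(-logit / temperature))
```

In this sketch the temperature would be fitted once on a validation split and applied at inference time. The abstract does not state which calibration method the paper uses, so temperature scaling is only one common choice; isotonic regression or Platt scaling could be substituted in the same place.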
Related papers
- Exploring Aleatoric Uncertainty in Object Detection via Vision Foundation Models [46.71709927361625]
This paper suggests modeling and exploiting the uncertainty inherent in object detection data with vision foundation models.
We assume a mixture-of-Gaussian structure of the object features and devise Mahalanobis distance-based measures to quantify the data uncertainty.
The estimated aleatoric uncertainty serves as an extra layer of annotation for the dataset, so it can be used in a plug-and-play manner with any model (a minimal sketch of a Mahalanobis-based measure appears after this list).
arXiv Detail & Related papers (2024-11-26T07:14:30Z)
- Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding [51.8579160500354]
We propose an unsupervised way to predict self-driving attention by uncertainty modeling and driving knowledge integration.
Results show performance comparable to, or better than, fully-supervised state-of-the-art approaches.
arXiv Detail & Related papers (2023-03-17T00:28:33Z)
- Bayesian autoencoders with uncertainty quantification: Towards trustworthy anomaly detection [78.24964622317634]
In this work, the formulation of Bayesian autoencoders (BAEs) is adopted to quantify the total anomaly uncertainty.
To evaluate the quality of uncertainty, we consider the task of classifying anomalies with the additional option of rejecting predictions of high uncertainty.
Our experiments demonstrate the effectiveness of the BAE and total anomaly uncertainty on a set of benchmark datasets and two real datasets for manufacturing.
arXiv Detail & Related papers (2022-02-25T12:20:04Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- An Uncertainty Estimation Framework for Probabilistic Object Detection [5.83620245905973]
We introduce a new technique that combines two popular methods to estimate uncertainty in object detection.
Our framework employs deep ensembles and Monte Carlo dropout for approximating predictive uncertainty (a minimal sketch of this combination appears after this list).
arXiv Detail & Related papers (2021-06-28T22:29:59Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Labels Are Not Perfect: Improving Probabilistic Object Detection via Label Uncertainty [12.531126969367774]
We leverage our previously proposed method for estimating uncertainty inherent in ground truth bounding box parameters.
Experimental results on the KITTI dataset show that our method surpasses both the baseline model and the models based on simple uncertainty estimates by up to 3.6% in terms of Average Precision.
arXiv Detail & Related papers (2020-08-10T14:49:49Z)
- Inferring Spatial Uncertainty in Object Detection [35.28872968233385]
We propose a generative model to estimate bounding box label uncertainties from LiDAR point clouds.
Comprehensive experiments show that the proposed model represents uncertainties commonly seen in driving scenarios.
We propose an extension of IoU, called the Jaccard IoU (JIoU), as a new evaluation metric that incorporates label uncertainty.
arXiv Detail & Related papers (2020-03-07T19:29:43Z)
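The entry "Exploring Aleatoric Uncertainty in Object Detection via Vision Foundation Models" assumes a mixture-of-Gaussian structure over object features and scores data uncertainty with Mahalanobis distances. The sketch below is one plausible reading under that assumption, not the authors' exact formulation: fit per-class Gaussians to feature vectors and report the minimum Mahalanobis distance as an uncertainty score. Names such as `features` and `fit_class_gaussians` are hypothetical.

```python
import numpy as np

def fit_class_gaussians(features, labels, eps=1e-6):
    """Fit one Gaussian (mean, inverse covariance) per class to object features."""
    stats = {}
    for c in np.unique(labels):
        x = features[labels == c]                      # (n_c, d) feature vectors
        mu = x.mean(axis=0)
        cov = np.cov(x, rowvar=False) + eps * np.eye(x.shape[1])
        stats[c] = (mu, np.linalg.inv(cov))
    return stats

def mahalanobis_uncertainty(feature, stats):
    """Minimum Mahalanobis distance of a feature vector to the class Gaussians;
    a larger distance is read here as higher data (aleatoric) uncertainty."""
    dists = []
    for mu, cov_inv in stats.values():
        diff = feature - mu
        dists.append(float(np.sqrt(diff @ cov_inv @ diff)))
    return min(dists)
```

Scores computed this way could be attached to each annotation as an extra field and consumed by any detector, in the plug-and-play spirit the summary describes.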
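The entry "An Uncertainty Estimation Framework for Probabilistic Object Detection" combines deep ensembles with Monte Carlo dropout. A minimal, framework-agnostic sketch of that combination follows; it assumes each ensemble member exposes a stochastic prediction function that keeps dropout active at inference time, averages class probabilities over members and dropout passes, and reports predictive entropy as the uncertainty. This illustrates the general technique, not the paper's specific framework.

```python
import numpy as np

def ensemble_mc_dropout_predict(models, x, n_dropout_passes=10):
    """Combine deep ensembles and MC dropout to approximate predictive uncertainty.

    models: list of callables; models[i](x) returns class probabilities (K,)
            and is assumed stochastic because dropout stays active at inference.
    Returns (mean class probabilities, predictive entropy).
    """
    samples = []
    for model in models:                      # ensemble dimension
        for _ in range(n_dropout_passes):     # MC dropout dimension
            samples.append(model(x))
    probs = np.stack(samples)                 # (M * T, K)
    mean_probs = probs.mean(axis=0)
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
    return mean_probs, entropy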