On the Calibration of Uncertainty Estimation in LiDAR-based Semantic
Segmentation
- URL: http://arxiv.org/abs/2308.02248v1
- Date: Fri, 4 Aug 2023 10:59:24 GMT
- Title: On the Calibration of Uncertainty Estimation in LiDAR-based Semantic
Segmentation
- Authors: Mariella Dreissig, Florian Piewak, Joschka Boedecker
- Abstract summary: We propose a metric to measure the confidence calibration quality of a semantic segmentation model with respect to individual classes.
We additionally suggest a double use for the method to automatically find label problems to improve the quality of hand- or auto-annotated datasets.
- Score: 7.100396757261104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The confidence calibration of deep learning-based perception models plays a
crucial role in their reliability. Especially in the context of autonomous
driving, downstream tasks like prediction and planning depend on accurate
confidence estimates. In point-wise multiclass classification tasks like
semantic segmentation, the model has to deal with heavy class imbalances. Due to
their underrepresentation, the confidence calibration of classes with fewer
instances is challenging but essential, not only for safety reasons. We propose
a metric to measure the confidence calibration quality of a semantic
segmentation model with respect to individual classes. It is calculated by
computing sparsification curves for each class based on the uncertainty
estimates. We use the classification calibration metric to evaluate uncertainty
estimation methods with respect to their confidence calibration of
underrepresented classes. We furthermore suggest a double use for the method to
automatically find label problems to improve the quality of hand- or
auto-annotated datasets.
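The abstract states that the proposed class-wise calibration metric is obtained from per-class sparsification curves over the uncertainty estimates. As a rough illustration only, and not the authors' exact formulation, the sketch below computes a class-wise sparsification curve and an AUSE-style area between it and the oracle curve; the function names, the point-wise misclassification error, and the fraction grid are assumptions.

```python
import numpy as np

def sparsification_curve(errors, ranking, n_steps=20):
    """Mean error of the points that remain after progressively removing the
    fraction of points ranked highest by `ranking` (e.g. predicted uncertainty)."""
    order = np.argsort(-np.asarray(ranking, dtype=float))   # highest-ranked points first
    sorted_errors = np.asarray(errors, dtype=float)[order]
    n = len(sorted_errors)
    fractions = np.linspace(0.0, 0.95, n_steps)
    curve = []
    for f in fractions:
        kept = sorted_errors[int(f * n):]                    # drop the top f*n points
        curve.append(kept.mean() if kept.size else 0.0)
    return fractions, np.asarray(curve)

def classwise_sparsification_error(labels, preds, uncertainty, cls, n_steps=20):
    """AUSE-style score for one class: area between the curve obtained by removing
    points in order of predicted uncertainty and the oracle curve obtained by
    removing points in order of their actual misclassification error."""
    mask = np.asarray(labels) == cls                         # points whose ground truth is `cls`
    errors = (np.asarray(preds)[mask] != np.asarray(labels)[mask]).astype(float)
    _, curve = sparsification_curve(errors, np.asarray(uncertainty)[mask], n_steps)
    _, oracle = sparsification_curve(errors, errors, n_steps)
    return float(np.trapz(curve - oracle, dx=1.0 / n_steps))

# Toy usage with random point-wise predictions (illustrative only):
rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=10_000)
preds = np.where(rng.random(10_000) < 0.8, labels, rng.integers(0, 5, size=10_000))
uncertainty = rng.random(10_000)
print(classwise_sparsification_error(labels, preds, uncertainty, cls=3))
```

The closer the uncertainty-ordered curve stays to its oracle, the better the uncertainty estimate ranks that class's errors, which is the kind of class-wise behaviour the proposed metric is meant to capture.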
Related papers
- Calibrated and Efficient Sampling-Free Confidence Estimation for LiDAR Scene Semantic Segmentation [1.8861801513235323]
We introduce a sampling-free approach for estimating well-calibrated confidence values for classification tasks.
Our approach maintains well-calibrated confidence values while achieving increased processing speed.
Our method produces underconfident rather than overconfident predictions, an advantage for safety-critical applications.
arXiv Detail & Related papers (2024-11-18T15:13:20Z)
- Selective Learning: Towards Robust Calibration with Dynamic Regularization [79.92633587914659]
Miscalibration in deep learning refers to a discrepancy between the predicted confidence and the actual performance.
We introduce Dynamic Regularization (DReg), which aims to learn what should be learned during training, thereby circumventing the confidence-adjustment trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z)
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no-regret decisions.
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
- Two Sides of Miscalibration: Identifying Over and Under-Confidence Prediction for Network Calibration [1.192436948211501]
Proper confidence calibration of deep neural networks is essential for reliable predictions in safety-critical tasks.
Miscalibration can lead to model over-confidence and/or under-confidence.
We introduce a novel metric, a miscalibration score, to identify the overall and class-wise calibration status.
We use the class-wise miscalibration score as a proxy to design a calibration technique that can tackle both over- and under-confidence.
arXiv Detail & Related papers (2023-08-06T17:59:14Z)
- Calibration-Aware Bayesian Learning [37.82259435084825]
This paper proposes an integrated framework, referred to as calibration-aware Bayesian neural networks (CA-BNNs).
It applies both data-dependent and data-independent regularizers while optimizing over a variational distribution, as in Bayesian learning.
Numerical results validate the advantages of the proposed approach in terms of expected calibration error (ECE) and reliability diagrams; a generic sketch of the ECE computation is given after this list.
arXiv Detail & Related papers (2023-05-12T14:19:15Z)
- Calibration of Neural Networks [77.34726150561087]
This paper presents a survey of confidence calibration problems in the context of neural networks.
We analyze the problem statement, calibration definitions, and different approaches to evaluation.
Empirical experiments cover various datasets and models, comparing calibration methods according to different criteria.
arXiv Detail & Related papers (2023-03-19T20:27:51Z)
- On Calibrating Semantic Segmentation Models: Analyses and An Algorithm [51.85289816613351]
We study the problem of semantic segmentation calibration.
Model capacity, crop size, multi-scale testing, and prediction correctness all have an impact on calibration.
We propose a simple, unifying, and effective approach, namely selective scaling.
arXiv Detail & Related papers (2022-12-22T22:05:16Z)
- On the calibration of underrepresented classes in LiDAR-based semantic segmentation [7.100396757261104]
This work focuses on a class-wise evaluation of several models' confidence performance for LiDAR-based semantic segmentation.
We compare the calibration abilities of three semantic segmentation models with different architectural concepts.
By identifying and describing the dependency between a class's predictive performance and its calibration quality, we aim to facilitate model selection and refinement for safety-critical applications.
arXiv Detail & Related papers (2022-10-13T07:49:24Z)
- Estimating Model Performance under Domain Shifts with Class-Specific Confidence Scores [25.162667593654206]
We introduce class-wise calibration within the framework of performance estimation for imbalanced datasets.
We conduct experiments on four tasks and find the proposed modifications consistently improve the estimation accuracy for imbalanced datasets.
arXiv Detail & Related papers (2022-07-20T15:04:32Z)
- Approaching Neural Network Uncertainty Realism [53.308409014122816]
Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
arXiv Detail & Related papers (2021-01-08T11:56:12Z)
- Binary Classification from Positive Data with Skewed Confidence [85.18941440826309]
Positive-confidence (Pconf) classification is a promising weakly-supervised learning method.
In practice, the confidence may be skewed by bias arising in the annotation process.
We introduce a parameterized model of the skewed confidence and propose a method for selecting the hyperparameter.
arXiv Detail & Related papers (2020-01-29T00:04:36Z)
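Several entries above evaluate calibration via the expected calibration error (ECE) and reliability diagrams. Purely as a generic reference, and not as any listed paper's implementation, here is a minimal ECE sketch using equal-width confidence bins; the bin count and the use of top-1 confidence are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, preds, labels, n_bins=15):
    """Standard ECE: weighted average, over equal-width confidence bins, of the
    absolute gap between mean confidence and accuracy inside each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(preds) == np.asarray(labels)).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap        # weight by the fraction of samples in the bin
    return ece
```

Replacing the absolute gap with a signed one separates overconfident bins (confidence above accuracy) from underconfident ones, which is the distinction the miscalibration-score and class-wise calibration entries above focus on.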