Calibrated and Efficient Sampling-Free Confidence Estimation for LiDAR Scene Semantic Segmentation
- URL: http://arxiv.org/abs/2411.11935v1
- Date: Mon, 18 Nov 2024 15:13:20 GMT
- Title: Calibrated and Efficient Sampling-Free Confidence Estimation for LiDAR Scene Semantic Segmentation
- Authors: Hanieh Shojaei Miandashti, Qianqian Zou, Claus Brenner
- Abstract summary: We introduce a sampling-free approach for estimating well-calibrated confidence values for classification tasks.
Our approach maintains well-calibrated confidence values while achieving increased processing speed.
Our method produces underconfident rather than overconfident predictions, an advantage for safety-critical applications.
- Score: 1.8861801513235323
- Abstract: Reliable deep learning models require not only accurate predictions but also well-calibrated confidence estimates to ensure dependable uncertainty estimation. This is crucial in safety-critical applications like autonomous driving, which depend on rapid and precise semantic segmentation of LiDAR point clouds for real-time 3D scene understanding. In this work, we introduce a sampling-free approach for estimating well-calibrated confidence values for classification tasks, achieving alignment with true classification accuracy and significantly reducing inference time compared to sampling-based methods. Our evaluation using the Adaptive Calibration Error (ACE) metric for LiDAR semantic segmentation shows that our approach maintains well-calibrated confidence values while achieving increased processing speed compared to a sampling baseline. Additionally, reliability diagrams reveal that our method produces underconfident rather than overconfident predictions, an advantage for safety-critical applications. Our sampling-free approach offers well-calibrated and time-efficient predictions for LiDAR scene semantic segmentation.
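To make the calibration criterion concrete, below is a minimal sketch of a top-label Adaptive Calibration Error in the spirit of the ACE metric used in the evaluation: unlike ECE's fixed-width confidence bins, predictions are split into equal-mass bins, so sparsely populated confidence ranges still contribute reliable accuracy estimates. The exact binning and the simulated data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def adaptive_calibration_error(confidences, correct, n_bins=15):
    """Top-label ACE sketch: equal-mass bins instead of equal-width bins.

    confidences: (N,) predicted max-class probabilities
    correct:     (N,) booleans, True where the prediction was right
    """
    order = np.argsort(confidences)
    conf = confidences[order]
    corr = correct[order].astype(float)
    # Split sorted predictions into bins holding (almost) equal counts.
    bins = np.array_split(np.arange(len(conf)), n_bins)
    gaps = [abs(corr[idx].mean() - conf[idx].mean())  # |accuracy - confidence|
            for idx in bins if len(idx)]
    return float(np.mean(gaps))

# Example: a simulated underconfident predictor (accuracy sits above confidence),
# the regime the reliability diagrams in the paper describe.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 0.9, size=10_000)
correct = rng.uniform(size=10_000) < np.clip(conf + 0.05, 0.0, 1.0)
print(f"ACE: {adaptive_calibration_error(conf, correct):.4f}")
```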
Related papers
- Confidence Intervals and Simultaneous Confidence Bands Based on Deep Learning [0.36832029288386137]
We provide a valid non-parametric bootstrap method that correctly disentangles data uncertainty from the noise inherent in the adopted optimization algorithm.
The proposed ad-hoc method can be easily integrated into any deep neural network without interfering with the training process.
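For orientation, here is a hedged sketch of the plain percentile bootstrap such methods build on; the paper's actual contribution, disentangling data uncertainty from optimizer noise, is not reproduced here.

```python
import numpy as np

def percentile_bootstrap_ci(values, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Textbook percentile bootstrap CI for a statistic of i.i.d. values.

    This is the generic method only; it does not separate data uncertainty
    from optimization noise as the paper's procedure does.
    """
    rng = np.random.default_rng(seed)
    n = len(values)
    stats = np.array([
        stat(values[rng.integers(0, n, size=n)])  # resample with replacement
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# e.g. a CI on test accuracy from per-example 0/1 correctness
correct = (np.random.default_rng(1).uniform(size=500) < 0.8).astype(float)
print(percentile_bootstrap_ci(correct))
```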
arXiv Detail & Related papers (2024-06-20T05:51:37Z) - Selective Learning: Towards Robust Calibration with Dynamic Regularization [79.92633587914659]
Miscalibration in deep learning refers to a discrepancy between predicted confidence and actual performance.
We introduce Dynamic Regularization (DReg), which aims to learn what should be learned during training, thereby circumventing the confidence-adjustment trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z) - TeLeS: Temporal Lexeme Similarity Score to Estimate Confidence in End-to-End ASR [1.8477401359673709]
Class-probability-based confidence scores do not accurately represent the quality of overconfident ASR predictions.
We propose a novel Temporal-Lexeme Similarity (TeLeS) confidence score to train a Confidence Estimation Model (CEM).
We conduct experiments with ASR models trained in three languages, namely Hindi, Tamil, and Kannada, with varying training data sizes.
arXiv Detail & Related papers (2024-01-06T16:29:13Z) - On the Calibration of Uncertainty Estimation in LiDAR-based Semantic Segmentation [7.100396757261104]
We propose a metric to measure the confidence calibration quality of a semantic segmentation model with respect to individual classes.
We additionally suggest a double use for the method to automatically find label problems to improve the quality of hand- or auto-annotated datasets.
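One plausible instantiation of such a class-wise calibration check is an ECE restricted to points of a single ground-truth class; this is a hypothetical variant for illustration, not the paper's exact metric.

```python
import numpy as np

def per_class_ece(probs, labels, cls, n_bins=10):
    """ECE computed only over points whose ground-truth label is `cls`.

    probs:  (N, C) softmax outputs; labels: (N,) ground-truth class ids.
    Assumes at least one point of class `cls` is present.
    """
    mask = labels == cls
    conf = probs[mask, cls]                      # confidence assigned to the true class
    correct = probs[mask].argmax(axis=1) == cls  # was this class also the top prediction?
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, mask.sum()
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
            ece += in_bin.sum() / n * gap        # bin weight times |acc - conf|
    return ece
```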
arXiv Detail & Related papers (2023-08-04T10:59:24Z) - Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
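The standard way to realize this decomposition is a mean-variance head for data (aleatoric) uncertainty combined with MC dropout for model (epistemic) uncertainty; the sketch below shows that common pattern, not UAL's specific architecture.

```python
import torch
import torch.nn as nn

class MeanVarianceHead(nn.Module):
    """Predicts an embedding plus a log-variance (data uncertainty)."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.mu = nn.Linear(dim_in, dim_out)
        self.log_var = nn.Linear(dim_in, dim_out)
        self.drop = nn.Dropout(p=0.2)  # kept stochastic at test time (MC dropout)

    def forward(self, x):
        h = self.drop(x)
        return self.mu(h), self.log_var(h)

@torch.no_grad()
def predict_with_uncertainty(head, x, n_samples=20):
    head.train()  # keep dropout active so repeated passes differ
    mus, data_vars = [], []
    for _ in range(n_samples):
        mu, log_var = head(x)
        mus.append(mu)
        data_vars.append(log_var.exp())
    mus = torch.stack(mus)
    data_u = torch.stack(data_vars).mean(0)  # aleatoric: predicted noise level
    model_u = mus.var(0)                     # epistemic: spread across dropout samples
    return mus.mean(0), data_u, model_u
```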
arXiv Detail & Related papers (2022-10-24T17:53:20Z) - Uncertainty-aware LiDAR Panoptic Segmentation [21.89063036529791]
We introduce a novel approach for solving the task of uncertainty-aware panoptic segmentation using LiDAR point clouds.
Our proposed EvLPSNet network is the first to solve this task efficiently in a sampling-free manner.
We provide several strong baselines combining state-of-the-art panoptic segmentation networks with sampling-free uncertainty estimation techniques.
arXiv Detail & Related papers (2022-10-10T07:54:57Z) - Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
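An ATC-style rule is simple to sketch: pick the threshold on labeled source validation data so that the fraction of points above it matches the validation accuracy, then report that fraction on the unlabeled target set. The toy data below is an illustrative assumption.

```python
import numpy as np

def fit_atc_threshold(val_conf, val_correct):
    """Choose t so that the share of validation points with confidence > t
    equals the validation accuracy (the ATC-style matching rule)."""
    acc = val_correct.mean()
    # the fraction above the q-th quantile is (1 - q); match it to acc
    return np.quantile(val_conf, 1.0 - acc)

def predict_target_accuracy(t, target_conf):
    """Estimated accuracy on unlabeled target data: share above threshold."""
    return (target_conf > t).mean()

# Toy usage with simulated confidences
rng = np.random.default_rng(0)
val_conf = rng.uniform(size=1000)
val_correct = rng.uniform(size=1000) < val_conf    # roughly calibrated source
t = fit_atc_threshold(val_conf, val_correct)
target_conf = rng.uniform(0.0, 0.8, size=1000)     # shifted, less confident target
print(f"t={t:.3f}, predicted target acc={predict_target_accuracy(t, target_conf):.3f}")
```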
arXiv Detail & Related papers (2022-01-11T23:01:12Z) - Locally Valid and Discriminative Confidence Intervals for Deep Learning Models [37.57296694423751]
Uncertainty information should be valid (guaranteeing coverage) and discriminative (more uncertain when the expected risk is high).
Most existing Bayesian methods lack frequentist coverage guarantees and usually affect model performance.
We propose Locally Valid and Discriminative confidence intervals (LVD), a simple, efficient and lightweight method to construct discriminative confidence intervals (CIs) for almost any deep learning model.
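As a rough illustration of how locally scaled conformal intervals become "discriminative" (wider where expected error is high), here is a generic split-conformal sketch; it is not LVD's kernel-based construction, and `f` and `sigma` are assumed user-supplied callables.

```python
import numpy as np

def split_conformal_scaled(f, sigma, X_cal, y_cal, X_test, alpha=0.1):
    """Locally scaled split-conformal intervals (generic sketch).

    f(x): point predictor; sigma(x): any positive difficulty estimate.
    Scaling residuals by sigma widens intervals on hard inputs, while the
    conformal quantile preserves the (1 - alpha) coverage guarantee.
    """
    scores = np.abs(y_cal - f(X_cal)) / sigma(X_cal)   # normalized residuals
    n = len(scores)
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    pred = f(X_test)
    half = q * sigma(X_test)
    return pred - half, pred + half
```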
arXiv Detail & Related papers (2021-06-01T04:39:56Z) - Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety-critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
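One plausible instantiation of such a disagreement measure: score each pixel by the total-variation distance between two predicted class distributions (e.g., from two heads or branches). This is a hypothetical choice of dissimilarity function for illustration; the paper's exact function may differ.

```python
import numpy as np

def disagreement_uncertainty(p1, p2):
    """Per-pixel uncertainty as total-variation distance between two
    predicted class distributions.

    p1, p2: (H, W, C) softmax maps from two heads/branches.
    Returns values in [0, 1]; 0 means the predictions fully agree.
    """
    return 0.5 * np.abs(p1 - p2).sum(axis=-1)
```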
arXiv Detail & Related papers (2021-05-28T09:23:05Z) - CoinDICE: Off-Policy Confidence Interval Estimation [107.86876722777535]
We study high-confidence behavior-agnostic off-policy evaluation in reinforcement learning.
We show in a variety of benchmarks that the confidence interval estimates are tighter and more accurate than existing methods.
arXiv Detail & Related papers (2020-10-22T12:39:11Z) - Binary Classification from Positive Data with Skewed Confidence [85.18941440826309]
Positive-confidence (Pconf) classification is a promising weakly-supervised learning method.
In practice, the confidence may be skewed by bias arising in an annotation process.
We introduce a parameterized model of the skewed confidence and propose a method for selecting its hyperparameter.
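For context, the base Pconf objective that such a skew correction would modify is an empirical risk built from positive examples and their annotated confidences r(x) = P(y = +1 | x); the sketch below shows that base risk with the logistic loss (the skew-parameter selection itself is not reproduced here).

```python
import torch
import torch.nn.functional as F

def pconf_risk(logits, r):
    """Empirical Pconf risk (up to the constant positive class prior).

    logits: (N,) real-valued scores g(x) on positive examples only
    r:      (N,) annotated confidences r(x) = P(y=+1|x), assumed > 0
    Uses the logistic loss l(z) = log(1 + exp(-z)) = softplus(-z).
    """
    loss_pos = F.softplus(-logits)   # l(g(x)):  penalize rejecting a positive
    loss_neg = F.softplus(logits)    # l(-g(x)): surrogate negative term
    return (loss_pos + (1 - r) / r * loss_neg).mean()
```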
arXiv Detail & Related papers (2020-01-29T00:04:36Z)