LMD: Light-weight Prediction Quality Estimation for Object Detection in
Lidar Point Clouds
- URL: http://arxiv.org/abs/2306.07835v2
- Date: Thu, 15 Jun 2023 08:14:19 GMT
- Title: LMD: Light-weight Prediction Quality Estimation for Object Detection in
Lidar Point Clouds
- Authors: Tobias Riedlinger, Marius Schubert, Sarina Penquitt, Jan-Marcel
Kezmann, Pascal Colling, Karsten Kahl, Lutz Roese-Koerner, Michael Arnold,
Urs Zimmermann, Matthias Rottmann
- Abstract summary: Object detection on Lidar point cloud data is a promising technology for autonomous driving and robotics.
Uncertainty estimation is a crucial component for downstream tasks, and deep neural networks remain error-prone even for high-confidence predictions.
We propose LidarMetaDetect, a light-weight post-processing scheme for prediction quality estimation.
Our experiments show a significant increase in the statistical reliability of separating true from false predictions.
- Score: 3.927702899922668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection on Lidar point cloud data is a promising technology for
autonomous driving and robotics which has seen a significant rise in
performance and accuracy during recent years. In particular, uncertainty
estimation is a crucial component for downstream tasks, and deep neural
networks remain error-prone even for predictions with high confidence.
Previously proposed methods for quantifying prediction uncertainty tend to
alter the training scheme of the detector or rely on prediction sampling,
which results in vastly increased inference time. In order to address these two
issues, we propose LidarMetaDetect (LMD), a light-weight post-processing scheme
for prediction quality estimation. Our method can easily be added to any
pre-trained Lidar object detector without altering anything about the base
model; since it is purely based on post-processing, it only leads to a
negligible computational overhead. Our experiments show a significant increase
in the statistical reliability of separating true from false predictions. We
propose and evaluate an additional application of our method leading to the
detection of annotation errors. Explicit samples and a conservative count of
annotation error proposals indicate the viability of our method for
large-scale datasets like KITTI and nuScenes. On the widely used nuScenes test
dataset, 43 out of the top 100 proposals of our method indicate, in fact,
erroneous annotations.
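The post-processing recipe described in the abstract can be pictured as a small meta-classifier trained on hand-crafted per-detection features. The sketch below is only an illustration of that recipe, not the authors' released code; the concrete feature set (confidence, point count, box geometry), the IoU-based true-positive labeling, and the choice of scikit-learn's GradientBoostingClassifier are assumptions made for the example.

```python
# Minimal sketch of post-hoc prediction quality estimation (meta-classification)
# for a pre-trained Lidar detector. Assumptions: every detection is a dict with a
# few cheap features, and a held-out split provides TP/FP labels via IoU matching.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def detection_features(det):
    """Hand-crafted features for one predicted box (illustrative choice)."""
    return [
        det["score"],              # detector confidence
        det["num_points_in_box"],  # Lidar points inside the predicted box
        det["box_volume"],         # length * width * height
        det["distance"],           # range of the box center from the sensor
    ]

def fit_meta_classifier(detections, is_true_positive):
    """Train a lightweight TP-vs-FP classifier on held-out detections.

    `is_true_positive` holds 1 for detections matched to a ground-truth box
    (e.g. IoU >= 0.5) and 0 otherwise; the base detector is left untouched.
    """
    X = np.array([detection_features(d) for d in detections])
    y = np.array(is_true_positive)
    return GradientBoostingClassifier().fit(X, y)

def meta_confidence(clf, detections):
    """Per-detection probability of being a true positive (quality estimate)."""
    X = np.array([detection_features(d) for d in detections])
    return clf.predict_proba(X)[:, 1]
```

The same scores can drive the annotation-error application mentioned in the abstract: predictions with high meta-confidence that match no ground-truth box, or annotated boxes supported by no confident prediction, become ranked proposals for manual review.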
Related papers
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification.
Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data.
We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z)
- Estimating Uncertainty with Implicit Quantile Network [0.0]
Uncertainty quantification is an important part of many performance-critical applications.
This paper provides a simple alternative to existing approaches such as ensemble learning and Bayesian neural networks.
arXiv Detail & Related papers (2024-08-26T13:33:14Z)
- GVFs in the Real World: Making Predictions Online for Water Treatment [23.651798878534635]
We investigate the use of reinforcement-learning based prediction approaches for a real drinking-water treatment plant.
We first describe this dataset, and highlight challenges with seasonality, nonstationarity, and partial observability.
We show the importance of learning in deployment, by comparing a TD agent trained purely offline with no online updating to a TD agent that learns online.
arXiv Detail & Related papers (2023-12-04T04:49:10Z)
- A Review of Uncertainty Calibration in Pretrained Object Detectors [5.440028715314566]
We investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting.
We propose a framework to ensure a fair, unbiased, and repeatable evaluation.
We deliver novel insights into why poor detector calibration emerges.
arXiv Detail & Related papers (2022-10-06T14:06:36Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold (see the sketch after this list).
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Taming Overconfident Prediction on Unlabeled Data from Hindsight [50.9088560433925]
Minimizing prediction uncertainty on unlabeled data is a key factor to achieve good performance in semi-supervised learning.
This paper proposes a dual mechanism, named ADaptive Sharpening (ADS), which first applies a soft-threshold to adaptively mask out determinate and negligible predictions and then sharpens the remaining informative ones.
ADS significantly improves state-of-the-art SSL methods when used as a plug-in.
arXiv Detail & Related papers (2021-12-15T15:17:02Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Estimating and Evaluating Regression Predictive Uncertainty in Deep Object Detectors [9.273998041238224]
We show that training variance networks with negative log likelihood (NLL) can lead to high entropy predictive distributions.
We propose to use the energy score as a non-local proper scoring rule and find that when used for training, the energy score leads to better calibrated and lower entropy predictive distributions.
arXiv Detail & Related papers (2021-01-13T12:53:54Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
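As referenced in the ATC entry above, that idea fits in a few lines. The following is an illustration under assumed inputs (pre-computed per-example confidences and source correctness indicators), not the paper's reference implementation.

```python
# Minimal sketch of Average Thresholded Confidence (ATC), as summarized above.
# Assumed inputs: `source_conf` / `source_correct` from labeled source data and
# `target_conf` from unlabeled target data, all as 1-D numpy arrays.
import numpy as np

def learn_atc_threshold(source_conf, source_correct):
    """Choose t so that the fraction of source confidences above t
    matches the source accuracy (fraction(conf > t) == accuracy)."""
    source_acc = source_correct.mean()
    return np.quantile(source_conf, 1.0 - source_acc)

def predict_target_accuracy(target_conf, threshold):
    """ATC estimate: fraction of unlabeled target examples above the threshold."""
    return float((target_conf > threshold).mean())
```

With confidences taken, for instance, from a softmax score, the threshold learned on source data is reused on the target data, and the accuracy estimate is simply the share of unlabeled examples clearing it.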
This list is automatically generated from the titles and abstracts of the papers on this site.