Estimating and Evaluating Regression Predictive Uncertainty in Deep
Object Detectors
- URL: http://arxiv.org/abs/2101.05036v3
- Date: Fri, 12 Mar 2021 18:16:36 GMT
- Title: Estimating and Evaluating Regression Predictive Uncertainty in Deep
Object Detectors
- Authors: Ali Harakeh and Steven L. Waslander
- Abstract summary: We show that training variance networks with negative log likelihood (NLL) can lead to high entropy predictive distributions.
We propose to use the energy score as a non-local proper scoring rule and find that when used for training, the energy score leads to better calibrated and lower entropy predictive distributions.
- Score: 9.273998041238224
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictive uncertainty estimation is an essential next step for the reliable
deployment of deep object detectors in safety-critical tasks. In this work, we
focus on estimating predictive distributions for bounding box regression output
with variance networks. We show that in the context of object detection,
training variance networks with negative log likelihood (NLL) can lead to high
entropy predictive distributions regardless of the correctness of the output
mean. We propose to use the energy score as a non-local proper scoring rule and
find that when used for training, the energy score leads to better calibrated
and lower entropy predictive distributions than NLL. We also address the
widespread use of non-proper scoring metrics for evaluating predictive
distributions from deep object detectors by proposing an alternate evaluation
approach founded on proper scoring rules. Using the proposed evaluation tools,
we show that although variance networks can be used to produce high quality
predictive distributions, ad-hoc approaches used by seminal object detectors
for choosing regression targets during training do not provide wide enough data
support for reliable variance learning. We hope that our work helps shift
evaluation in probabilistic object detection to better align with predictive
uncertainty evaluation in other machine learning domains. Code for all models,
evaluation, and datasets is available at:
https://github.com/asharakeh/probdet.git.
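To make the two training objectives in the abstract concrete, below is a minimal sketch (not the repository's actual code) of a per-box Gaussian NLL loss and a sample-based energy score loss for a variance network that predicts a mean and log-variance for each bounding-box coordinate. The tensor shapes, names, and the consecutive-pair estimator of the second energy-score term are illustrative assumptions.

```python
import torch

def gaussian_nll(mean, log_var, target):
    """Per-coordinate Gaussian negative log likelihood (up to a constant).

    mean, log_var, target: (N, 4) tensors for N predicted boxes.
    Predicting log-variance keeps the variance positive without clamping.
    """
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()

def energy_score(mean, log_var, target, n_samples=1000):
    """Sample-based estimate of the energy score, a non-local proper
    scoring rule: ES(P, y) = E||X - y|| - 0.5 * E||X - X'||, X, X' ~ P.
    """
    std = (0.5 * log_var).exp()
    # Draw n_samples boxes from the predictive Gaussian: (m, N, 4).
    eps = torch.randn(n_samples, *mean.shape, device=mean.device)
    samples = mean.unsqueeze(0) + std.unsqueeze(0) * eps
    # Expected distance between samples and the ground-truth box.
    term1 = (samples - target.unsqueeze(0)).norm(dim=-1).mean()
    # Expected distance between sample pairs, estimated from
    # consecutive samples for O(m) rather than O(m^2) cost.
    term2 = (samples[1:] - samples[:-1]).norm(dim=-1).mean()
    return term1 - 0.5 * term2
```

Minimizing `gaussian_nll` constrains the predictive density only at the ground-truth value (NLL is a local score), which is what permits the high-entropy solutions described above; the energy score penalizes the whole predictive distribution through its samples, hence the better calibrated, lower entropy results the abstract reports.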
Related papers
- Towards Robust and Interpretable EMG-based Hand Gesture Recognition using Deep Metric Meta Learning [37.21211404608413]
We propose a shift to deep metric-based meta-learning in EMG pattern recognition to supervise the creation of meaningful and interpretable representations.
We derive a robust class proximity-based confidence estimator that leads to a better rejection of incorrect decisions.
arXiv Detail & Related papers (2024-04-17T23:37:50Z)
- LMD: Light-weight Prediction Quality Estimation for Object Detection in Lidar Point Clouds [3.927702899922668]
Object detection on Lidar point cloud data is a promising technology for autonomous driving and robotics.
Uncertainty estimation is a crucial component for downstream tasks, and deep neural networks remain error-prone even for predictions with high confidence.
We propose LidarMetaDetect, a light-weight post-processing scheme for prediction quality estimation.
Our experiments show a significant increase in statistical reliability when separating true from false predictions.
arXiv Detail & Related papers (2023-06-13T15:13:29Z)
- Object Detection as Probabilistic Set Prediction [3.7599363231894176]
We present a proper scoring rule for evaluating and training probabilistic object detectors.
Our results indicate that the training of existing detectors is optimized toward non-probabilistic metrics.
arXiv Detail & Related papers (2022-03-15T15:13:52Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled target examples whose confidence exceeds that threshold (a minimal sketch follows this entry).
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
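Based only on the one-sentence description above, here is a minimal sketch of the ATC idea; the quantile-based threshold fit and the choice of confidence function are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def fit_atc_threshold(source_conf, source_correct):
    """Fit a confidence threshold t on labeled source data such that the
    fraction of source examples with confidence above t equals source accuracy.

    source_conf: (N,) confidences, e.g. max softmax probability.
    source_correct: (N,) booleans, True where the prediction was correct.
    """
    accuracy = source_correct.mean()
    # The (1 - accuracy)-quantile: exactly an `accuracy` fraction of
    # source confidences lies above this threshold.
    return np.quantile(source_conf, 1.0 - accuracy)

def predict_target_accuracy(target_conf, threshold):
    """ATC estimate: fraction of unlabeled target examples whose
    confidence exceeds the learned threshold."""
    return float((target_conf > threshold).mean())
```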
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution (a standard construction is sketched after this entry).
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
arXiv Detail & Related papers (2020-10-08T02:10:54Z)
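The density ratio mentioned in the entry above is commonly estimated with a binary domain classifier; the sketch below shows that standard construction (the classifier choice and ratio direction are assumptions, and the paper's exact formulation may differ).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_density_ratio(source_feats, target_feats):
    """Estimate r(x) = p_target(x) / p_source(x) via a domain classifier.

    With c(x) = P(domain = target | x), Bayes' rule gives
    r(x) = (c(x) / (1 - c(x))) * (n_source / n_target).
    """
    X = np.vstack([source_feats, target_feats])
    y = np.concatenate([np.zeros(len(source_feats)),
                        np.ones(len(target_feats))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    def ratio(x):
        c = clf.predict_proba(x)[:, 1]
        return (c / (1.0 - c + 1e-12)) * (len(source_feats) / len(target_feats))

    return ratio
```

Such ratios can, for example, importance-weight a calibration objective so that the model ends up calibrated under the target rather than the source distribution.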
- Uncertainty Estimation and Sample Selection for Crowd Counting [87.29137075538213]
We present a method for image-based crowd counting that can predict a crowd density map together with the uncertainty values pertaining to the predicted density map.
A key advantage of our method over existing crowd counting methods is its ability to quantify the uncertainty of its predictions.
We show that our sample selection strategy drastically reduces the amount of labeled data needed to adapt a counting network trained on a source domain to the target domain.
arXiv Detail & Related papers (2020-09-30T03:40:07Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training of these models with a novel loss function and centroid updating scheme, matching the accuracy of softmax models (a toy version of the scoring step is sketched below).
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.