MetaDetect: Uncertainty Quantification and Prediction Quality Estimates for Object Detection
- URL: http://arxiv.org/abs/2010.01695v2
- Date: Tue, 6 Oct 2020 15:38:53 GMT
- Title: MetaDetect: Uncertainty Quantification and Prediction Quality Estimates for Object Detection
- Authors: Marius Schubert, Karsten Kahl, Matthias Rottmann
- Abstract summary: In object detection with deep neural networks, the box-wise objectness score tends to be overconfident.
We present a post-processing method that, for any given neural network, provides predictive uncertainty estimates and quality estimates.
- Score: 6.230751621285322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In object detection with deep neural networks, the box-wise objectness score tends to be overconfident, sometimes even indicating high confidence in the presence of inaccurate predictions. Hence, the reliability of the predictions, and therefore reliable uncertainties, are of highest interest. In this work, we present a post-processing method that, for any given neural network, provides predictive uncertainty estimates and quality estimates. These estimates are learned by a post-processing model that receives as input a hand-crafted set of transparent metrics in the form of a structured dataset. From these metrics, we learn two tasks for predicted bounding boxes: we discriminate between true positives ($\mathit{IoU} \geq 0.5$) and false positives ($\mathit{IoU} < 0.5$), which we term meta classification, and we predict $\mathit{IoU}$ values directly, which we term meta regression. The probabilities of the meta classification model aim at learning the probabilities of success and failure and therefore provide a modelled predictive uncertainty estimate, while meta regression gives rise to a quality estimate. In numerical experiments, we use the publicly available YOLOv3 and Faster R-CNN networks and evaluate meta classification and meta regression performance on the KITTI, Pascal VOC and COCO datasets. We demonstrate that our metrics are indeed well correlated with the $\mathit{IoU}$. For meta classification we obtain classification accuracies of up to 98.92% and AUROC values of up to 99.93%; for meta regression we obtain $R^2$ values of up to 91.78%. These results yield significant improvements over the networks' objectness scores and other baseline approaches, and therefore provide more reliable uncertainty and quality estimates, which is particularly valuable in the absence of ground truth.
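A minimal sketch of the post-processing idea described in the abstract, under stated assumptions: the hand-crafted box-wise metrics are assumed to be already collected in a feature matrix (random placeholders here), the true IoU of each predicted box is assumed available for training, and the choice of logistic regression and gradient boosting as post-processing models is illustrative rather than the authors' exact configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, r2_score

rng = np.random.default_rng(0)

# One row per predicted box; in practice the columns would be transparent metrics
# such as the objectness score, box location and size, and the number of candidate boxes.
X = rng.random((1000, 8))      # placeholder features (random stand-ins)
iou = rng.random(1000)         # placeholder IoU with the best-matching ground-truth box

X_tr, X_te, iou_tr, iou_te = train_test_split(X, iou, random_state=0)

# Meta classification: true positive (IoU >= 0.5) vs. false positive (IoU < 0.5).
y_tr, y_te = (iou_tr >= 0.5).astype(int), (iou_te >= 0.5).astype(int)
meta_clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p_tp = meta_clf.predict_proba(X_te)[:, 1]      # modelled probability of success
print("meta classification AUROC:", roc_auc_score(y_te, p_tp))

# Meta regression: predict the IoU directly as a box-wise quality estimate.
meta_reg = GradientBoostingRegressor(random_state=0).fit(X_tr, iou_tr)
print("meta regression R^2:", r2_score(iou_te, meta_reg.predict(X_te)))
```

At inference time only the meta models and the hand-crafted metrics are needed, so the uncertainty and quality estimates are available without ground truth.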
Related papers
- Estimating Uncertainty with Implicit Quantile Network [0.0]
Uncertainty quantification is an important part of many performance-critical applications.
This paper provides a simple alternative to existing approaches such as ensemble learning and Bayesian neural networks (a sketch of the underlying quantile loss appears after this list).
arXiv Detail & Related papers (2024-08-26T13:33:14Z)
- Extracting Usable Predictions from Quantized Networks through Uncertainty Quantification for OOD Detection [0.0]
Out-of-distribution (OOD) detection has become more pertinent with advances in network design and increased task complexity.
We introduce an Uncertainty Quantification (UQ) technique to quantify the uncertainty in the predictions from a pre-trained vision model.
We observe that our technique saves up to 80% of ignored samples from being misclassified.
arXiv Detail & Related papers (2024-03-02T03:03:29Z)
- $p$-DkNN: Out-of-Distribution Detection Through Statistical Testing of Deep Representations [32.99800144249333]
We introduce $p$-DkNN, a novel inference procedure that takes a trained deep neural network and analyzes the similarity structures of its intermediate hidden representations (a toy sketch of such representation-level testing appears after this list).
We find that $p$-DkNN forces adaptive attackers crafting adversarial examples, a form of worst-case OOD inputs, to introduce semantically meaningful changes to the inputs.
arXiv Detail & Related papers (2022-07-25T21:42:08Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled target examples whose confidence exceeds that threshold (see the sketch after this list).
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Learning to Predict Trustworthiness with Steep Slope Loss [69.40817968905495]
We study the problem of predicting trustworthiness on real-world large-scale datasets.
We observe that trustworthiness predictors trained with prior-art loss functions are prone to view both correct and incorrect predictions as trustworthy.
We propose a novel steep slope loss to separate the features w.r.t. correct predictions from the ones w.r.t. incorrect predictions by two slide-like curves that oppose each other.
arXiv Detail & Related papers (2021-09-30T19:19:09Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Estimating and Evaluating Regression Predictive Uncertainty in Deep Object Detectors [9.273998041238224]
We show that training variance networks with negative log likelihood (NLL) can lead to high entropy predictive distributions.
We propose to use the energy score as a non-local proper scoring rule and find that, when used for training, the energy score leads to better calibrated and lower entropy predictive distributions (a sample-based sketch of the energy score appears after this list).
arXiv Detail & Related papers (2021-01-13T12:53:54Z)
- Second-Moment Loss: A Novel Regression Objective for Improved Uncertainties [7.766663822644739]
Quantification of uncertainty is one of the most promising approaches to establish safe machine learning.
One of the most commonly used approaches so far is Monte Carlo dropout, which is computationally cheap and easy to apply in practice (a sketch appears after this list).
We propose a new objective, referred to as the second-moment loss, to address this issue.
arXiv Detail & Related papers (2020-12-23T14:17:33Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Probabilistic Regression for Visual Tracking [193.05958682821444]
We propose a probabilistic regression formulation and apply it to tracking.
Our network predicts the conditional probability density of the target state given an input image.
Our tracker sets a new state-of-the-art on six datasets, achieving 59.8% AUC on LaSOT and 75.8% Success on TrackingNet.
arXiv Detail & Related papers (2020-03-27T17:58:37Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence of each query sample in order to assign optimal weights to unlabeled queries (a sketch of such a confidence-weighted prototype update appears after this list).
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
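For reference, a few hedged sketches of techniques mentioned in the entries above follow. For the implicit quantile network entry, the essential ingredient is quantile regression: the network receives a randomly sampled quantile level tau as an extra input and is trained with the pinball loss so that its output approximates the tau-quantile of the target, yielding a full predictive distribution from a single model. A minimal, architecture-agnostic sketch of that loss (the function name is ours):

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Quantile (pinball) loss: minimised when q_pred is the tau-quantile of y."""
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# During training one would sample tau ~ U(0, 1) per example, feed it to the
# network alongside the input, and minimise pinball_loss(y, net(x, tau), tau).
```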
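The $p$-DkNN entry tests whether an input's intermediate representations are typical of the training distribution. The sketch below is only one simple way to instantiate that stated idea, not necessarily the paper's exact procedure; the helper names and the use of Fisher's method to combine per-layer p-values are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def layer_pvalue(knn_distance, calib_distances):
    """Empirical p-value: how often a held-out calibration input is at least as far
    from its k nearest training neighbours (at this layer) as the test input is."""
    return (np.sum(calib_distances >= knn_distance) + 1) / (len(calib_distances) + 1)

def combined_pvalue(knn_distances, calib_distances_per_layer):
    """Combine per-layer p-values with Fisher's method; a small combined
    p-value flags the input as out-of-distribution."""
    pvals = [layer_pvalue(d, c) for d, c in zip(knn_distances, calib_distances_per_layer)]
    stat = -2.0 * np.sum(np.log(pvals))
    return chi2.sf(stat, df=2 * len(pvals))
```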
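The Average Thresholded Confidence (ATC) entry is explicit enough to sketch directly: choose the confidence threshold on labeled source data so that the fraction of source examples above it matches the source accuracy, then report the fraction of unlabeled target examples above that threshold as the estimated target accuracy. Variable and function names are ours.

```python
import numpy as np

def learn_atc_threshold(source_conf, source_correct):
    """Pick a threshold so that the fraction of labeled source examples with
    confidence above it equals the source accuracy."""
    acc = source_correct.mean()
    # the (1 - acc) quantile of source confidences leaves a fraction acc above it
    return np.quantile(source_conf, 1.0 - acc)

def predict_target_accuracy(target_conf, threshold):
    """Estimated target accuracy = fraction of unlabeled target examples
    whose confidence exceeds the learned threshold."""
    return (target_conf > threshold).mean()

# Usage with arrays of model confidences:
# t = learn_atc_threshold(source_conf, source_correct)
# est_acc = predict_target_accuracy(target_conf, t)
```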
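The second-moment loss entry names Monte Carlo dropout as the common, cheap baseline; for reference, a minimal sketch of how MC dropout produces an uncertainty estimate with a hypothetical PyTorch model that contains dropout layers:

```python
import torch

def mc_dropout_predict(model, x, n_samples=30):
    """Monte Carlo dropout: keep dropout active at test time and use the mean/std
    over repeated stochastic forward passes as the prediction and its uncertainty
    (in a real model, batch-norm layers should be kept in eval mode)."""
    model.train()                      # keeps dropout layers stochastic
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)
```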
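The energy score used in the "Estimating and Evaluating Regression Predictive Uncertainty in Deep Object Detectors" entry can be estimated from samples of the predictive distribution; a minimal sketch of that sample-based estimator (the example data are placeholders, not results from the paper):

```python
import numpy as np

def energy_score(samples, y):
    """Sample-based energy score ES(P, y) = E||X - y|| - 0.5 * E||X - X'|| for a
    predictive distribution P represented by `samples` (n_samples x dim) and an
    observation y (dim,). Lower is better; the score is proper for multivariate targets."""
    samples = np.atleast_2d(samples)
    term1 = np.mean(np.linalg.norm(samples - y, axis=1))
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = 0.5 * np.mean(np.linalg.norm(diffs, axis=-1))
    return term1 - term2

# Example: score a Gaussian predictive distribution for a 4D box-regression target.
rng = np.random.default_rng(0)
pred_samples = rng.normal(loc=0.0, scale=1.0, size=(128, 4))
print(energy_score(pred_samples, y=np.zeros(4)))
```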
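Finally, for the meta-learned confidence entry, the summary describes assigning weights to unlabeled queries when updating class prototypes. The sketch below shows a generic confidence-weighted (soft) prototype update and simply assumes the per-query, per-class weights come from a meta-learned confidence module; it is not the paper's exact formulation.

```python
import numpy as np

def soft_prototypes(support_feats, support_onehot, query_feats, query_conf):
    """Class prototypes as confidence-weighted means: labeled support examples count
    with weight 1, unlabeled queries with their (meta-learned) soft class-assignment
    weights in [0, 1].
    support_feats: n_support x dim, support_onehot: n_support x n_class,
    query_feats:   n_query x dim,   query_conf:     n_query x n_class."""
    num = support_onehot.T @ support_feats + query_conf.T @ query_feats
    den = support_onehot.sum(axis=0)[:, None] + query_conf.sum(axis=0)[:, None]
    return num / den   # n_class x dim
```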
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.