Parametric and Multivariate Uncertainty Calibration for Regression and
Object Detection
- URL: http://arxiv.org/abs/2207.01242v1
- Date: Mon, 4 Jul 2022 08:00:20 GMT
- Title: Parametric and Multivariate Uncertainty Calibration for Regression and
Object Detection
- Authors: Fabian Küppers, Jonas Schneider, Anselm Haselhoff
- Abstract summary: We show that common detection models overestimate the spatial uncertainty in comparison to the observed error.
Our experiments show that the simple Isotonic Regression recalibration method is sufficient to achieve well-calibrated uncertainties.
In contrast, if normal distributions are required for subsequent processes, our GP-Normal recalibration method yields the best results.
- Score: 4.630093015127541
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reliable spatial uncertainty evaluation of object detection models is of
special interest and has been the subject of recent work. In this work, we review
the existing definitions for uncertainty calibration of probabilistic
regression tasks. We inspect the calibration properties of common detection
networks and extend state-of-the-art recalibration methods. Our methods use a
Gaussian process (GP) recalibration scheme that yields parametric distributions
as output (e.g. Gaussian or Cauchy). The usage of GP recalibration allows for a
local (conditional) uncertainty calibration by capturing dependencies between
neighboring samples. The use of parametric distributions such as the Gaussian
allows for a simplified adaptation of calibration in subsequent processes, e.g.,
for Kalman filtering in the scope of object tracking.
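Because the recalibrated output is again a parametric (e.g. Gaussian) distribution, it can be consumed directly by downstream estimators. The snippet below is a minimal sketch, not the paper's implementation: hypothetical recalibrated standard deviations are plugged into the measurement covariance R of a standard Kalman update; all names and values are illustrative assumptions.

```python
import numpy as np

def kalman_update(x, P, z, R, H=None):
    """Standard Kalman measurement update; R is the (recalibrated)
    measurement covariance, e.g. built from GP-Normal output scales."""
    H = np.eye(len(x)) if H is None else H
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_post = x + K @ (z - H @ x)
    P_post = (np.eye(len(x)) - K @ H) @ P
    return x_post, P_post

# Hypothetical recalibrated detection: (cx, cy) measurement with per-sample
# standard deviations produced by some recalibration model (illustrative values).
z = np.array([12.3, 45.1])
sigma_recal = np.array([1.4, 2.0])
R = np.diag(sigma_recal ** 2)

x_prior = np.array([11.8, 44.0])          # tracker state estimate
P_prior = np.diag([4.0, 4.0])             # state covariance
x_post, P_post = kalman_update(x_prior, P_prior, z, R)
```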
In addition, we use the GP recalibration scheme to perform covariance
estimation which allows for post-hoc introduction of local correlations between
the output quantities, e.g., position, width, or height in object detection. To
measure the joint calibration of multivariate and possibly correlated data, we
introduce the quantile calibration error which is based on the Mahalanobis
distance between the predicted distribution and the ground truth to determine
whether the ground truth is within a predicted quantile.
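For intuition: if the prediction is a multivariate normal N(mu, Sigma) in k dimensions, the squared Mahalanobis distance of the ground truth is chi-squared distributed with k degrees of freedom, so the quantile check reduces to a chi-squared threshold. The sketch below illustrates that check under this Gaussian assumption; the function name and example numbers are illustrative, not the paper's code.

```python
import numpy as np
from scipy.stats import chi2

def in_quantile(y, mu, cov, tau):
    """True if ground truth y lies inside the tau-quantile ellipsoid of a
    predicted multivariate normal N(mu, cov). For a calibrated model this
    should hold for roughly a fraction tau of all samples."""
    d = y - mu
    m2 = d @ np.linalg.solve(cov, d)           # squared Mahalanobis distance
    return m2 <= chi2.ppf(tau, df=len(mu))     # chi-squared quantile, k dof

# Hypothetical joint (cx, cy, w, h) prediction with a diagonal covariance.
mu = np.array([12.0, 45.0, 30.0, 60.0])
cov = np.diag([1.5, 2.0, 4.0, 9.0]) ** 2
y_gt = np.array([13.1, 44.2, 27.5, 63.0])
print(in_quantile(y_gt, mu, cov, tau=0.95))
```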
Our experiments show that common detection models overestimate the spatial
uncertainty in comparison to the observed error. We show that the simple
Isotonic Regression recalibration method is sufficient to achieve good
uncertainty quantification in terms of calibrated quantiles (see the sketch
after the abstract). In contrast, if
normal distributions are required for subsequent processes, our GP-Normal
recalibration method yields the best results. Finally, we show that our
covariance estimation method achieves the best calibration results for
joint multivariate calibration.
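As a rough illustration of the Isotonic Regression baseline for quantile recalibration (in the spirit of calibrated regression, not the paper's exact pipeline), one can learn a monotone map from predicted CDF values to their observed frequencies on a held-out calibration set. The data below is synthetic and the setup is an assumption.

```python
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

# Synthetic calibration set: predicted Gaussians (mu, sigma) and ground truths y,
# where the model deliberately overestimates its uncertainty.
rng = np.random.default_rng(0)
mu = rng.normal(size=500)
sigma = np.full(500, 2.0)
y = mu + rng.normal(scale=1.0, size=500)

# Predicted CDF value of each ground truth under its own predictive distribution.
p = norm.cdf(y, loc=mu, scale=sigma)

# Observed frequency: fraction of calibration samples with CDF value <= p_i.
p_sorted = np.sort(p)
freq = np.searchsorted(p_sorted, p, side="right") / len(p)

# Monotone map from predicted quantile level to observed frequency.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip").fit(p, freq)

# At test time, a predicted quantile level q is corrected via iso.predict([q]).
```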
Related papers
- Orthogonal Causal Calibration [55.28164682911196]
We prove generic upper bounds on the calibration error of any causal parameter estimate $\theta$ with respect to any loss $\ell$.
We use our bound to analyze the convergence of two sample splitting algorithms for causal calibration.
arXiv Detail & Related papers (2024-06-04T03:35:25Z) - Consistent and Asymptotically Unbiased Estimation of Proper Calibration
Errors [23.819464242327257]
We propose a method that allows consistent estimation of all proper calibration errors and refinement terms.
We prove the relation between refinement and f-divergences, which implies information monotonicity in neural networks.
Our experiments validate the claimed properties of the proposed estimator and suggest that the selection of a post-hoc calibration method should be determined by the particular calibration error of interest.
arXiv Detail & Related papers (2023-12-14T01:20:08Z) - Calibration by Distribution Matching: Trainable Kernel Calibration
Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no regret decisions.
arXiv Detail & Related papers (2023-10-31T06:19:40Z) - Distribution-Free Model-Agnostic Regression Calibration via
Nonparametric Methods [9.662269016653296]
We consider an individual calibration objective for characterizing the quantiles of the prediction model.
Existing methods largely lack statistical guarantees in terms of individual calibration.
We propose simple nonparametric calibration methods that are agnostic of the underlying prediction model.
arXiv Detail & Related papers (2023-05-20T21:31:51Z) - Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z) - A Consistent and Differentiable Lp Canonical Calibration Error Estimator [21.67616079217758]
Deep neural networks are poorly calibrated and tend to output overconfident predictions.
We propose a low-bias, trainable calibration error estimator based on Dirichlet kernel density estimates.
Our method has a natural choice of kernel, and can be used to generate consistent estimates of other quantities.
arXiv Detail & Related papers (2022-10-13T15:11:11Z) - Localized Calibration: Metrics and Recalibration [133.07044916594361]
We propose a fine-grained calibration metric that spans the gap between fully global and fully individualized calibration.
We then introduce a localized recalibration method, LoRe, that improves the LCE more than existing recalibration methods.
arXiv Detail & Related papers (2021-02-22T07:22:12Z) - Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
arXiv Detail & Related papers (2020-06-29T21:50:07Z) - Calibration of Neural Networks using Splines [51.42640515410253]
Measuring calibration error amounts to comparing two empirical distributions.
We introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test.
Our method consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
arXiv Detail & Related papers (2020-06-23T07:18:05Z)