Inferring bias and uncertainty in camera calibration
- URL: http://arxiv.org/abs/2107.13484v1
- Date: Wed, 28 Jul 2021 16:49:39 GMT
- Title: Inferring bias and uncertainty in camera calibration
- Authors: Annika Hagemann, Moritz Knorr, Holger Janssen, Christoph Stiller
- Abstract summary: We introduce an evaluation scheme to capture the fundamental error sources in camera calibration.
The bias detection method uncovers even the smallest systematic errors and reveals imperfections of the calibration setup.
A novel re-sampling-based uncertainty estimator enables uncertainty estimation under non-ideal conditions.
We derive a simple uncertainty metric that is independent of the camera model.
- Score: 2.11622808613962
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate camera calibration is a precondition for many computer vision
applications. Calibration errors, such as wrong model assumptions or imprecise
parameter estimation, can deteriorate a system's overall performance, making
the reliable detection and quantification of these errors critical. In this
work, we introduce an evaluation scheme to capture the fundamental error
sources in camera calibration: systematic errors (biases) and uncertainty
(variance). The proposed bias detection method uncovers even the smallest
systematic errors, thereby revealing imperfections of the calibration setup and
providing the basis for camera model selection. A novel resampling-based uncertainty
estimator enables uncertainty estimation under non-ideal conditions and thereby
extends the classical covariance estimator. Furthermore, we derive a simple
uncertainty metric that is independent of the camera model. In combination, the
proposed methods can be used to assess the accuracy of individual calibrations,
but also to benchmark new calibration algorithms, camera models, or calibration
setups. We evaluate the proposed methods with simulations and real cameras.
Related papers
- Optimizing Estimators of Squared Calibration Errors in Classification [2.3020018305241337]
We propose a mean-squared error-based risk that enables the comparison and optimization of estimators of squared calibration errors.
Our approach advocates for a training-validation-testing pipeline when estimating a calibration error.
arXiv Detail & Related papers (2024-10-09T15:58:06Z)
- Towards Certification of Uncertainty Calibration under Adversarial Attacks [96.48317453951418]
We show that attacks can significantly harm calibration, and thus propose certified calibration as worst-case bounds on calibration under adversarial perturbations.
We propose novel calibration attacks and demonstrate how they can improve model calibration through adversarial calibration training.
arXiv Detail & Related papers (2024-05-22T18:52:09Z)
- Consistent and Asymptotically Unbiased Estimation of Proper Calibration Errors [23.819464242327257]
We propose a method that allows consistent estimation of all proper calibration errors and refinement terms.
We prove the relation between refinement and f-divergences, which implies information monotonicity in neural networks.
Our experiments validate the claimed properties of the proposed estimator and suggest that the selection of a post-hoc calibration method should be determined by the particular calibration error of interest.
arXiv Detail & Related papers (2023-12-14T01:20:08Z)
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no-regret decisions.
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
- Calibration of Neural Networks [77.34726150561087]
This paper presents a survey of confidence calibration problems in the context of neural networks.
We analyze problem statement, calibration definitions, and different approaches to evaluation.
Empirical experiments cover various datasets and models, comparing calibration methods according to different criteria.
arXiv Detail & Related papers (2023-03-19T20:27:51Z)
- On Calibrating Semantic Segmentation Models: Analyses and An Algorithm [51.85289816613351]
We study the problem of semantic segmentation calibration.
Model capacity, crop size, multi-scale testing, and prediction correctness have an impact on calibration.
We propose a simple, unifying, and effective approach, namely selective scaling.
arXiv Detail & Related papers (2022-12-22T22:05:16Z)
- Localized Calibration: Metrics and Recalibration [133.07044916594361]
We propose a fine-grained calibration metric that spans the gap between fully global and fully individualized calibration.
We then introduce a localized recalibration method, LoRe, that improves the local calibration error (LCE) more than existing recalibration methods.
arXiv Detail & Related papers (2021-02-22T07:22:12Z)
- Improving model calibration with accuracy versus uncertainty optimization [17.056768055368384]
A well-calibrated model should be accurate when it is certain about its prediction and indicate high uncertainty when it is likely to be inaccurate.
We propose an optimization method that leverages the relationship between accuracy and uncertainty as an anchor for uncertainty calibration.
We demonstrate our approach with mean-field variational inference and compare with state-of-the-art methods.
arXiv Detail & Related papers (2020-12-14T20:19:21Z)
- Zero-Shot Calibration of Fisheye Cameras [0.010956300138340428]
The proposed method estimates camera parameters from the camera's horizontal and vertical field-of-view information without any image acquisition (a simplified version of this idea is sketched after this list).
The method is particularly useful for wide-angle or fisheye cameras that have large image distortion.
arXiv Detail & Related papers (2020-11-30T08:10:24Z)
- Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
arXiv Detail & Related papers (2020-06-29T21:50:07Z)
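The Zero-Shot Calibration entry above estimates parameters from field-of-view information alone. As a rough, back-of-the-envelope illustration of that general idea (a standard geometric approximation, not necessarily the cited paper's method), one can recover a focal length from the horizontal field of view; the image width and FOV values in the usage lines are made-up examples.
```python
# Focal length from the horizontal field of view alone (standard geometric
# approximation; not necessarily the cited paper's method).
import math


def focal_from_hfov(width_px, hfov_deg, model="pinhole"):
    half_fov = math.radians(hfov_deg) / 2.0
    if model == "pinhole":
        # Perspective projection: tan(HFOV/2) = (W/2) / f
        return (width_px / 2.0) / math.tan(half_fov)
    if model == "equidistant":
        # Typical fisheye projection r = f * theta, with r = W/2 at theta = HFOV/2
        return (width_px / 2.0) / half_fov
    raise ValueError(f"unknown model: {model}")


print(focal_from_hfov(1920, 90))                  # pinhole: 960.0 px
print(focal_from_hfov(1920, 180, "equidistant"))  # fisheye: ~611.2 px
```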
This list is automatically generated from the titles and abstracts of the papers on this site.