Multivariate Confidence Calibration for Object Detection
- URL: http://arxiv.org/abs/2004.13546v1
- Date: Tue, 28 Apr 2020 14:17:41 GMT
- Title: Multivariate Confidence Calibration for Object Detection
- Authors: Fabian Küppers, Jan Kronenberger, Amirhossein Shantia, Anselm Haselhoff
- Abstract summary: We present a novel framework to measure and calibrate biased confidence estimates of object detection methods.
Our approach makes it possible, for the first time, to obtain confidence estimates that are calibrated with respect to image location and box scale.
We show that our methods outperform state-of-the-art calibration models for the task of object detection.
- Score: 7.16879432974126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unbiased confidence estimates of neural networks are crucial,
especially for safety-critical applications. Many methods have been developed
to calibrate biased confidence estimates. While there is a variety of methods
for classification, the field of object detection has not been addressed yet.
Therefore, we present a novel framework to measure and calibrate biased (or
miscalibrated) confidence estimates of object detection methods. The main
difference to related work on classifier calibration is that we also use the
additional information of an object detector's regression output for
calibration. Our approach makes it possible, for the first time, to obtain
confidence estimates that are calibrated with respect to image location and
box scale. In addition, we propose a new measure to evaluate the
miscalibration of object detectors. Finally, we show that our methods
outperform state-of-the-art calibration models for the task of object
detection and provide reliable confidence estimates across different
locations and scales.
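To make the abstract concrete, here is a minimal sketch of both ideas. It assumes detections with a confidence in (0, 1), a relative box (cx, cy, w, h) normalized to [0, 1], and binary labels marking whether each detection matched a ground-truth box. Function names and the feature layout are illustrative, not the authors' reference implementation:

```python
# Minimal sketch (not the authors' code): Platt-style logistic calibration
# extended with the detector's regression output, plus a D-ECE-style measure.
# Assumes confidences in (0, 1), boxes as relative (cx, cy, w, h) in [0, 1],
# and matched[i] == 1 iff detection i matches a ground-truth box.
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_features(scores, boxes):
    """Confidence logit stacked with position and scale features."""
    s = np.clip(scores, 1e-7, 1 - 1e-7)
    return np.column_stack([np.log(s / (1 - s)), boxes])

def fit_multivariate_calibrator(scores, boxes, matched):
    """Multivariate logistic calibration, fitted on a held-out set."""
    return LogisticRegression().fit(build_features(scores, boxes),
                                    np.asarray(matched).astype(int))

def calibrate(model, scores, boxes):
    """Calibrated confidence, now dependent on location and scale."""
    return model.predict_proba(build_features(scores, boxes))[:, 1]

def detection_ece(scores, boxes, matched, bins_per_dim=5):
    """Measure in the spirit of the paper's proposal: count-weighted
    |precision - mean confidence| over a joint grid of confidence
    and box features (all assumed to lie in [0, 1])."""
    feats = np.column_stack([scores, boxes])
    idx = np.clip((feats * bins_per_dim).astype(int), 0, bins_per_dim - 1)
    keys = np.ravel_multi_index(idx.T, (bins_per_dim,) * feats.shape[1])
    err = 0.0
    for k in np.unique(keys):
        m = keys == k
        err += m.mean() * abs(matched[m].mean() - scores[m].mean())
    return err
```

Fitted on held-out detections, `calibrate` can return different confidences for the same raw score depending on where a box sits and how large it is, which is the location and scale dependence claimed above; `detection_ece` then measures the remaining gap on the joint grid.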
Related papers
- Towards Certification of Uncertainty Calibration under Adversarial Attacks [96.48317453951418]
We show that attacks can significantly harm calibration, and thus propose certified calibration as worst-case bounds on calibration under adversarial perturbations.
We propose novel calibration attacks and demonstrate how they can improve model calibration through adversarial calibration training.
arXiv Detail & Related papers (2024-05-22T18:52:09Z)
- Revisiting Confidence Estimation: Towards Reliable Failure Prediction [53.79160907725975]
We identify a general, widespread, yet largely neglected phenomenon: most confidence estimation methods are harmful for detecting misclassification errors.
We propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance.
arXiv Detail & Related papers (2024-03-05T11:44:14Z)
- Beyond Classification: Definition and Density-based Estimation of Calibration in Object Detection [15.71719154574049]
We tackle the challenge of defining and estimating calibration error for deep neural networks (DNNs).
In particular, we adapt the definition of classification calibration error to handle the nuances associated with object detection.
We propose a consistent and differentiable estimator of the detection calibration error, utilizing kernel density estimation.
arXiv Detail & Related papers (2023-12-11T18:57:05Z)
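The density-based idea in the entry above can be sketched in one dimension: replace hard ECE bins with a kernel-smoothed estimate of the conditional precision, which avoids binning artifacts and keeps the estimator differentiable. A simplified illustration, not that paper's full detection formulation:

```python
# Simplified 1-D illustration (not that paper's detection formulation):
# kernel-smoothed calibration error without hard bin edges.
import numpy as np

def kde_calibration_error(scores, matched, bandwidth=0.05):
    """L1 calibration error E|s - E[matched | s]| with a Gaussian kernel
    (Nadaraya-Watson) in place of histogram bins; smooth in the scores."""
    s = np.asarray(scores, dtype=float)
    m = np.asarray(matched, dtype=float)
    w = np.exp(-0.5 * ((s[:, None] - s[None, :]) / bandwidth) ** 2)
    cond_precision = (w @ m) / w.sum(axis=1)  # estimate of E[matched | s]
    return float(np.mean(np.abs(s - cond_precision)))
```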
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, enforcing accurate loss estimation and no-regret decisions.
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
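As a flavor of what a differentiable kernel calibration estimate looks like, the sketch below follows the earlier maximum mean calibration error (MMCE) construction; the paper above unifies and generalizes this family rather than using exactly this form:

```python
# Sketch in the style of the maximum mean calibration error (MMCE);
# not the exact construction of the paper above.
import numpy as np

def mmce_squared(confidences, correct, width=0.4):
    """Biased sample estimate of the squared MMCE: a kernel mean embedding
    of the calibration residuals (correct - confidence). Differentiable
    in the confidences, so usable as a training objective."""
    c = np.asarray(confidences, dtype=float)
    r = np.asarray(correct, dtype=float) - c               # residuals
    k = np.exp(-np.abs(c[:, None] - c[None, :]) / width)   # Laplacian kernel
    return float(r @ k @ r) / len(c) ** 2
```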
- Two Sides of Miscalibration: Identifying Over and Under-Confidence Prediction for Network Calibration [1.192436948211501]
Proper confidence calibration of deep neural networks is essential for reliable predictions in safety-critical tasks.
Miscalibration can lead to model over-confidence and/or under-confidence.
We introduce a novel metric, a miscalibration score, to identify the overall and class-wise calibration status.
We use the class-wise miscalibration score as a proxy to design a calibration technique that can tackle both over- and under-confidence.
arXiv Detail & Related papers (2023-08-06T17:59:14Z)
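The summary above does not spell the metric out, so the following is a hypothetical stand-in rather than the paper's definition: a signed, binned confidence-accuracy gap whose sign separates over- from under-confidence and which can be evaluated per class:

```python
# Hypothetical stand-in, not the paper's definition: a signed binned
# confidence-accuracy gap; > 0 flags over-confidence, < 0 under-confidence.
import numpy as np

def signed_miscalibration(confidences, correct, n_bins=15):
    c = np.asarray(confidences, dtype=float)
    y = np.asarray(correct, dtype=float)
    bins = np.minimum((c * n_bins).astype(int), n_bins - 1)
    score = 0.0
    for b in range(n_bins):
        m = bins == b
        if m.any():
            # count-weighted signed gap in this confidence bin
            score += m.mean() * (c[m].mean() - y[m].mean())
    return score  # compute per predicted class for a class-wise status
```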
- Calibration of Neural Networks [77.34726150561087]
This paper presents a survey of confidence calibration problems in the context of neural networks.
We analyze problem statement, calibration definitions, and different approaches to evaluation.
Empirical experiments cover various datasets and models, comparing calibration methods according to different criteria.
arXiv Detail & Related papers (2023-03-19T20:27:51Z)
- Uncertainty Calibration and its Application to Object Detection [0.0]
In this work, we examine the semantic uncertainty (which object type?) as well as the spatial uncertainty.
We evaluate if the predicted uncertainties of an object detection model match with the observed error that is achieved on real-world data.
arXiv Detail & Related papers (2023-02-06T08:41:07Z)
- On Calibrating Semantic Segmentation Models: Analyses and An Algorithm [51.85289816613351]
We study the problem of semantic segmentation calibration.
Model capacity, crop size, multi-scale testing, and prediction correctness all have an impact on calibration.
We propose a simple, unifying, and effective approach, namely selective scaling.
arXiv Detail & Related papers (2022-12-22T22:05:16Z)
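Selective scaling builds on standard temperature scaling, separating correct from mispredicted samples and focusing the scaling on the mispredictions. As background, here is a minimal single-temperature fit; the selective routing itself is deliberately omitted:

```python
# Background sketch: single-temperature scaling fitted by NLL on held-out
# logits. Selective scaling would fit/apply such scaling separately for
# correct vs. mispredicted samples; that routing is omitted here.
import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature(logits, labels):
    logits = np.asarray(logits, dtype=float)
    labels = np.asarray(labels, dtype=int)

    def nll(t):
        z = logits / t
        z = z - z.max(axis=1, keepdims=True)              # stable log-softmax
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(labels)), labels].mean()

    return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x
```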
- A Review of Uncertainty Calibration in Pretrained Object Detectors [5.440028715314566]
We investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting.
We propose a framework to ensure a fair, unbiased, and repeatable evaluation.
We deliver novel insights into why poor detector calibration emerges.
arXiv Detail & Related papers (2022-10-06T14:06:36Z)
- Confidence Calibration for Object Detection and Segmentation [6.700433100198165]
This chapter focuses on the investigation of confidence calibration for object detection and segmentation models.
We introduce the concept of multivariate confidence calibration that is an extension of well-known calibration methods.
We show that object detection and instance segmentation models in particular are intrinsically miscalibrated.
arXiv Detail & Related papers (2022-02-25T15:59:51Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.