Confidence Calibration for Object Detection and Segmentation
- URL: http://arxiv.org/abs/2202.12785v2
- Date: Tue, 1 Mar 2022 14:35:34 GMT
- Title: Confidence Calibration for Object Detection and Segmentation
- Authors: Fabian Küppers, Anselm Haselhoff, Jan Kronenberger, Jonas Schneider
- Abstract summary: This chapter focuses on the investigation of confidence calibration for object detection and segmentation models.
We introduce the concept of multivariate confidence calibration that is an extension of well-known calibration methods.
We show that especially object detection as well as instance segmentation models are intrinsically miscalibrated.
- Score: 6.700433100198165
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Calibrated confidence estimates obtained from neural networks are crucial,
particularly for safety-critical applications such as autonomous driving or
medical image diagnosis. However, although the task of confidence calibration
has been investigated on classification problems, thorough investigations on
object detection and segmentation problems are still missing. Therefore, we
focus on the investigation of confidence calibration for object detection and
segmentation models in this chapter. We introduce the concept of multivariate
confidence calibration, an extension of well-known calibration methods
to the tasks of object detection and segmentation. This enables an extended
confidence calibration that is also aware of additional features such as
bounding box/pixel position and shape information. Furthermore, we extend the
expected calibration error (ECE) to measure miscalibration of object detection
and segmentation models. We examine several network architectures on MS COCO as
well as on Cityscapes and show that especially object detection as well as
instance segmentation models are intrinsically miscalibrated given the
introduced definition of calibration. Using our proposed calibration methods,
we are able to improve calibration so that it also has a positive impact
on the quality of the segmentation masks.
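The extension of the expected calibration error (ECE) to object detection replaces per-sample accuracy with the observed precision of detections inside each confidence bin. The following is a minimal sketch of such a binned detection ECE; the function name, binning scheme, and matching criterion (e.g. IoU >= 0.5) are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def detection_ece(confidences, matched, n_bins=10):
    """Binned expected calibration error for detections.

    confidences: predicted confidence per detection, values in [0, 1]
    matched:     1 if the detection matches a ground-truth object
                 (e.g. IoU >= 0.5), else 0
    """
    confidences = np.asarray(confidences, dtype=float)
    matched = np.asarray(matched, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # assign each detection to an equal-width confidence bin
    bin_idx = np.clip(np.digitize(confidences, edges[1:-1]), 0, n_bins - 1)
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_idx == b
        if not mask.any():
            continue
        avg_conf = confidences[mask].mean()
        # for detections, the observed frequency is the precision in the bin
        precision = matched[mask].mean()
        ece += (mask.sum() / n) * abs(avg_conf - precision)
    return ece
```

A perfectly calibrated detector (e.g. 9 of 10 detections at confidence 0.9 are true positives) yields an ECE of 0; a detector that assigns confidence 0.8 to detections that never match yields 0.8. The paper's multivariate version additionally conditions this comparison on features such as box position and scale.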
Related papers
- Towards Certification of Uncertainty Calibration under Adversarial Attacks [96.48317453951418]
We show that attacks can significantly harm calibration, and thus propose certified calibration as worst-case bounds on calibration under adversarial perturbations.
We propose novel calibration attacks and demonstrate how they can improve model calibration through adversarial calibration training.
arXiv Detail & Related papers (2024-05-22T18:52:09Z)
- Beyond Classification: Definition and Density-based Estimation of Calibration in Object Detection [15.71719154574049]
We tackle the challenge of defining and estimating calibration error for deep neural networks (DNNs)
In particular, we adapt the definition of classification calibration error to handle the nuances associated with object detection.
We propose a consistent and differentiable estimator of the detection calibration error, utilizing kernel density estimation.
arXiv Detail & Related papers (2023-12-11T18:57:05Z)
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no regret decisions.
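Differentiable kernel-based sample estimates of calibration error of the kind described above can be illustrated with an MMCE-style penalty, which weights products of signed calibration residuals by a kernel on confidence differences. This is a hedged sketch; the Laplacian kernel and bandwidth are illustrative choices, not necessarily those used in the paper:

```python
import numpy as np

def kernel_calibration_penalty(conf, correct, bandwidth=0.2):
    """MMCE-style kernel calibration penalty (sample estimate).

    conf:    predicted confidences in [0, 1]
    correct: 1 if the prediction was correct, else 0
    """
    conf = np.asarray(conf, dtype=float)
    # signed calibration residuals: confidence minus observed outcome
    err = conf - np.asarray(correct, dtype=float)
    # Laplacian kernel on pairwise confidence differences
    K = np.exp(-np.abs(conf[:, None] - conf[None, :]) / bandwidth)
    n = len(conf)
    return np.sqrt(np.maximum((err[:, None] * err[None, :] * K).sum(), 0.0)) / n
```

Because every operation here is differentiable in `conf`, the same expression written in an autodiff framework can be added directly to a training loss, which is what makes such metrics usable inside empirical risk minimization.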
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
- Calibration of Neural Networks [77.34726150561087]
This paper presents a survey of confidence calibration problems in the context of neural networks.
We analyze problem statement, calibration definitions, and different approaches to evaluation.
Empirical experiments cover various datasets and models, comparing calibration methods according to different criteria.
arXiv Detail & Related papers (2023-03-19T20:27:51Z)
- Uncertainty Calibration and its Application to Object Detection [0.0]
In this work, we examine the semantic uncertainty (which object type?) as well as the spatial uncertainty.
We evaluate if the predicted uncertainties of an object detection model match with the observed error that is achieved on real-world data.
arXiv Detail & Related papers (2023-02-06T08:41:07Z)
- On Calibrating Semantic Segmentation Models: Analyses and An Algorithm [51.85289816613351]
We study the problem of semantic segmentation calibration.
Model capacity, crop size, multi-scale testing, and prediction correctness all have an impact on calibration.
We propose a simple, unifying, and effective approach, namely selective scaling.
arXiv Detail & Related papers (2022-12-22T22:05:16Z)
- A Review of Uncertainty Calibration in Pretrained Object Detectors [5.440028715314566]
We investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting.
We propose a framework to ensure a fair, unbiased, and repeatable evaluation.
We deliver novel insights into why poor detector calibration emerges.
arXiv Detail & Related papers (2022-10-06T14:06:36Z)
- Variable-Based Calibration for Machine Learning Classifiers [11.9995808096481]
We introduce the notion of variable-based calibration to characterize calibration properties of a model.
We find that models with near-perfect expected calibration error can exhibit significant miscalibration as a function of features of the data.
arXiv Detail & Related papers (2022-09-30T00:49:31Z)
- DOMINO: Domain-aware Model Calibration in Medical Image Segmentation [51.346121016559024]
Modern deep neural networks are poorly calibrated, compromising trustworthiness and reliability.
We propose DOMINO, a domain-aware model calibration method that leverages the semantic confusability and hierarchical similarity between class labels.
Our results show that DOMINO-calibrated deep neural networks outperform non-calibrated models and state-of-the-art morphometric methods in head image segmentation.
arXiv Detail & Related papers (2022-09-13T15:31:52Z)
- Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
arXiv Detail & Related papers (2020-06-29T21:50:07Z)
- Multivariate Confidence Calibration for Object Detection [7.16879432974126]
We present a novel framework to measure and calibrate biased confidence estimates of object detection methods.
Our approach allows, for the first time, to obtain calibrated confidence estimates with respect to image location and box scale.
We show that our developed methods outperform state-of-the-art calibration models for the task of object detection.
arXiv Detail & Related papers (2020-04-28T14:17:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.