Beyond Classification: Definition and Density-based Estimation of
Calibration in Object Detection
- URL: http://arxiv.org/abs/2312.06645v1
- Date: Mon, 11 Dec 2023 18:57:05 GMT
- Title: Beyond Classification: Definition and Density-based Estimation of
Calibration in Object Detection
- Authors: Teodora Popordanoska, Aleksei Tiulpin, Matthew B. Blaschko
- Abstract summary: We tackle the challenge of defining and estimating calibration error for deep neural networks (DNNs).
In particular, we adapt the definition of classification calibration error to handle the nuances associated with object detection.
We propose a consistent and differentiable estimator of the detection calibration error, utilizing kernel density estimation.
- Score: 15.71719154574049
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite their impressive predictive performance in various computer vision
tasks, deep neural networks (DNNs) tend to make overly confident predictions,
which hinders their widespread use in safety-critical applications. While there
have been recent attempts to calibrate DNNs, most of these efforts have
primarily been focused on classification tasks, thus neglecting DNN-based
object detectors. Although several recent works addressed calibration for
object detection and proposed differentiable penalties, none of them are
consistent estimators of established concepts in calibration. In this work, we
tackle the challenge of defining and estimating calibration error specifically
for this task. In particular, we adapt the definition of classification
calibration error to handle the nuances associated with object detection, and
predictions in structured output spaces more generally. Furthermore, we propose
a consistent and differentiable estimator of the detection calibration error,
utilizing kernel density estimation. Our experiments demonstrate the
effectiveness of our estimator against competing train-time and post-hoc
calibration methods, while maintaining similar detection performance.
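The abstract's central technical idea is a kernel density estimation (KDE) based estimator of calibration error. As an illustrative sketch only (the function name, Gaussian kernel, bandwidth, and restriction to binary correctness labels are assumptions here; the paper's actual estimator for structured detection outputs is more involved), a Nadaraya-Watson-style L1 calibration error estimate might look like:

```python
import numpy as np

def kde_calibration_error(confidences, correct, bandwidth=0.05):
    """KDE-style estimate of L1 calibration error (illustrative sketch).

    Smooths correctness labels over confidence scores with a Gaussian
    kernel (leave-one-out), then averages |smoothed accuracy - confidence|.
    """
    c = np.asarray(confidences, dtype=float)
    y = np.asarray(correct, dtype=float)
    # Pairwise Gaussian kernel weights between confidence scores.
    diff = (c[:, None] - c[None, :]) / bandwidth
    w = np.exp(-0.5 * diff**2)
    # Leave-one-out smoothing of accuracy at each confidence level.
    np.fill_diagonal(w, 0.0)
    smoothed_acc = (w @ y) / np.clip(w.sum(axis=1), 1e-12, None)
    return float(np.mean(np.abs(smoothed_acc - c)))
```

Because every operation is smooth in the confidences, an estimator of this shape can also serve as a differentiable train-time penalty, which is the property the abstract emphasizes.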
Related papers
- Towards Certification of Uncertainty Calibration under Adversarial Attacks [96.48317453951418]
We show that attacks can significantly harm calibration, and thus propose certified calibration as worst-case bounds on calibration under adversarial perturbations.
We propose novel calibration attacks and demonstrate how they can improve model calibration through adversarial calibration training.
arXiv Detail & Related papers (2024-05-22T18:52:09Z)
- Calibrated Uncertainty Quantification for Operator Learning via Conformal Prediction [95.75771195913046]
We propose a risk-controlling quantile neural operator, a distribution-free, finite-sample functional calibration conformal prediction method.
We provide a theoretical calibration guarantee on the coverage rate, defined as the expected percentage of points on the function domain.
Empirical results on a 2D Darcy flow and a 3D car surface pressure prediction task validate our theoretical results.
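The finite-sample coverage guarantee described above can be illustrated with plain split conformal prediction for scalar regression (a deliberate simplification of the paper's functional, operator-learning setting; the function name and the absolute-residual score are assumptions):

```python
import numpy as np

def conformal_interval(cal_preds, cal_targets, test_preds, alpha=0.1):
    """Split conformal prediction (illustrative sketch): widen point
    predictions into intervals with marginal coverage >= 1 - alpha."""
    scores = np.abs(np.asarray(cal_targets) - np.asarray(cal_preds))
    n = len(scores)
    # Finite-sample-corrected quantile of the calibration residuals.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, q_level, method="higher")
    t = np.asarray(test_preds)
    return t - q, t + q
```

The guarantee is distribution-free: it requires only exchangeability of calibration and test points, which is why conformal methods pair naturally with black-box neural operators.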
arXiv Detail & Related papers (2024-02-02T23:43:28Z)
- Cal-DETR: Calibrated Detection Transformer [67.75361289429013]
We propose a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR, UP-DETR and DINO.
We develop an uncertainty-guided logit modulation mechanism that leverages the uncertainty to modulate the class logits.
Results corroborate the effectiveness of Cal-DETR against competing train-time methods in calibrating both in-domain and out-of-domain detections.
arXiv Detail & Related papers (2023-11-06T22:13:10Z)
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no regret decisions.
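To make "differentiable sample estimates" of a kernel calibration metric concrete, here is a simplified binary-classification sketch in the style of a squared kernel calibration error (SKCE); the function name, Gaussian kernel, and bandwidth are assumptions, not the paper's exact metric:

```python
import numpy as np

def squared_kernel_calibration_error(confidences, labels, bandwidth=0.1):
    """Unbiased SKCE-style plug-in estimate (illustrative sketch):
    mean over pairs i != j of k(c_i, c_j) * (y_i - c_i) * (y_j - c_j).
    Zero in expectation iff predictions are calibrated."""
    c = np.asarray(confidences, dtype=float)
    y = np.asarray(labels, dtype=float)
    r = y - c  # calibration residuals
    # Gaussian kernel between confidence scores; diagonal excluded
    # so the pairwise estimate stays unbiased.
    k = np.exp(-0.5 * ((c[:, None] - c[None, :]) / bandwidth) ** 2)
    np.fill_diagonal(k, 0.0)
    n = len(c)
    return float(r @ k @ r / (n * (n - 1)))
```

Since the statistic is a smooth function of the confidences, it can be added directly to an empirical-risk-minimization objective, which is the incorporation the summary describes.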
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
- Multiclass Confidence and Localization Calibration for Object Detection [4.119048608751183]
Deep neural networks (DNNs) tend to make overconfident predictions, rendering them poorly calibrated.
We propose a new train-time technique for calibrating modern object detection methods.
arXiv Detail & Related papers (2023-06-14T06:14:16Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of their predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error in both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- A Review of Uncertainty Calibration in Pretrained Object Detectors [5.440028715314566]
We investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting.
We propose a framework to ensure a fair, unbiased, and repeatable evaluation.
We deliver novel insights into why poor detector calibration emerges.
arXiv Detail & Related papers (2022-10-06T14:06:36Z)
- Confidence Calibration for Object Detection and Segmentation [6.700433100198165]
This chapter focuses on the investigation of confidence calibration for object detection and segmentation models.
We introduce the concept of multivariate confidence calibration that is an extension of well-known calibration methods.
We show that especially object detection as well as instance segmentation models are intrinsically miscalibrated.
arXiv Detail & Related papers (2022-02-25T15:59:51Z)
- Calibration of Neural Networks using Splines [51.42640515410253]
Measuring calibration error amounts to comparing two empirical distributions.
We introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test.
Our method consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
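A minimal sketch of a binning-free, KS-inspired calibration measure for binary correctness labels (illustrative only; the paper's spline-based recalibration step is omitted, and the function name is an assumption):

```python
import numpy as np

def ks_calibration_error(confidences, correct):
    """KS-style calibration measure (illustrative sketch): the maximum
    gap between the cumulative sums of predicted confidence and observed
    correctness, taken in order of increasing confidence. No bins needed."""
    order = np.argsort(confidences)
    c = np.asarray(confidences, dtype=float)[order]
    y = np.asarray(correct, dtype=float)[order]
    n = len(c)
    # Normalized cumulative sums play the role of empirical CDFs in the
    # classical Kolmogorov-Smirnov comparison of two distributions.
    return float(np.max(np.abs(np.cumsum(y - c))) / n)
```

Avoiding bins sidesteps the bin-count sensitivity of standard expected calibration error (ECE) estimates, which is the motivation the summary points to.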
arXiv Detail & Related papers (2020-06-23T07:18:05Z)
- Multivariate Confidence Calibration for Object Detection [7.16879432974126]
We present a novel framework to measure and calibrate biased confidence estimates of object detection methods.
Our approach makes it possible, for the first time, to obtain calibrated confidence estimates with respect to image location and box scale.
We show that our developed methods outperform state-of-the-art calibration models for the task of object detection.
arXiv Detail & Related papers (2020-04-28T14:17:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.