Multiclass Confidence and Localization Calibration for Object Detection
- URL: http://arxiv.org/abs/2306.08271v1
- Date: Wed, 14 Jun 2023 06:14:16 GMT
- Title: Multiclass Confidence and Localization Calibration for Object Detection
- Authors: Bimsara Pathiraja, Malitha Gunawardhana, Muhammad Haris Khan
- Abstract summary: Deep neural networks (DNNs) tend to make overconfident predictions, rendering them poorly calibrated.
We propose a new train-time technique for calibrating modern object detection methods.
- Score: 4.119048608751183
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although deep neural networks (DNNs) achieve high predictive
accuracy across many challenging computer vision problems, recent studies
suggest that they tend to make overconfident predictions, rendering them poorly
calibrated. Most existing attempts at improving DNN calibration are limited to
classification tasks and restricted to calibrating in-domain predictions.
Surprisingly, very few attempts have been made to study the calibration of
object detection methods, which occupy a pivotal space in vision-based
security-sensitive and safety-critical applications. In this
paper, we propose a new train-time technique for calibrating modern object
detection methods. It is capable of jointly calibrating multiclass confidence
and box localization by leveraging their predictive uncertainties. We perform
extensive experiments on several in-domain and out-of-domain detection
benchmarks. Results demonstrate that our proposed train-time calibration method
consistently outperforms several baselines in reducing calibration error for
both in-domain and out-of-domain predictions. Our code and models are available
at https://github.com/bimsarapathiraja/MCCL.
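The calibration error referred to throughout is usually measured as expected calibration error (ECE): the gap between a model's confidence and its empirical accuracy, averaged over confidence bins. The sketch below is a minimal, generic binned-ECE implementation, not code from the paper; for detection, what counts as a "correct" prediction (typically an IoU-and-class match against ground truth) is an assumption left to the caller.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Binned ECE: weighted mean |accuracy - confidence| over confidence bins.

    confidences: predicted confidence scores in [0, 1].
    correct:     boolean array, True where the prediction counts as correct
                 (for detection, typically an IoU-and-class match).
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        acc = correct[mask].mean()    # empirical accuracy in this bin
        conf = confidences[mask].mean()  # mean confidence in this bin
        ece += (mask.sum() / n) * abs(acc - conf)
    return ece

# Perfectly calibrated toy case: 90% accuracy at 0.9 confidence.
conf = np.array([0.9] * 10)
hits = np.array([True] * 9 + [False])
print(round(expected_calibration_error(conf, hits), 6))  # prints 0.0
```

A lower ECE means confidence scores track accuracy more closely; train-time methods like the one proposed here aim to drive this gap down without a separate post-hoc step.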
Related papers
- Beyond Classification: Definition and Density-based Estimation of Calibration in Object Detection [15.71719154574049]
We tackle the challenge of defining and estimating calibration error for deep neural networks (DNNs).
In particular, we adapt the definition of classification calibration error to handle the nuances associated with object detection.
We propose a consistent and differentiable estimator of the detection calibration error, utilizing kernel density estimation.
arXiv Detail & Related papers (2023-12-11T18:57:05Z)
- Cal-DETR: Calibrated Detection Transformer [67.75361289429013]
We propose a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR, UP-DETR and DINO.
We develop an uncertainty-guided logit modulation mechanism that leverages the uncertainty to modulate the class logits.
Results corroborate the effectiveness of Cal-DETR against the competing train-time methods in calibrating both in-domain and out-domain detections.
arXiv Detail & Related papers (2023-11-06T22:13:10Z)
- Multiclass Alignment of Confidence and Certainty for Network Calibration [10.15706847741555]
Recent studies reveal that deep neural networks (DNNs) are prone to making overconfident predictions.
We propose a new train-time calibration method, which features a simple, plug-and-play auxiliary loss known as multi-class alignment of predictive mean confidence and predictive certainty (MACC).
Our method achieves state-of-the-art calibration performance for both in-domain and out-domain predictions.
arXiv Detail & Related papers (2023-09-06T00:56:24Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of their predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Beyond In-Domain Scenarios: Robust Density-Aware Calibration [48.00374886504513]
Calibrating deep learning models to yield uncertainty-aware predictions is crucial as deep neural networks get increasingly deployed in safety-critical applications.
We propose DAC, an accuracy-preserving, density-aware calibration method based on k-nearest neighbors (KNN).
We show that DAC boosts the robustness of calibration performance in domain-shift and OOD, while maintaining excellent in-domain predictive uncertainty estimates.
arXiv Detail & Related papers (2023-02-10T08:48:32Z)
- On Calibrating Semantic Segmentation Models: Analyses and An Algorithm [51.85289816613351]
We study the problem of semantic segmentation calibration.
Model capacity, crop size, multi-scale testing, and prediction correctness all have an impact on calibration.
We propose a simple, unifying, and effective approach, namely selective scaling.
arXiv Detail & Related papers (2022-12-22T22:05:16Z)
- Towards Improving Calibration in Object Detection Under Domain Shift [9.828212203380133]
We study the calibration of current object detection models, particularly under domain shift.
First, we introduce a plug-and-play train-time calibration loss for object detection.
Second, we devise a new uncertainty mechanism for object detection that can implicitly calibrate commonly used self-training based domain adaptive detectors.
arXiv Detail & Related papers (2022-09-15T20:32:28Z)
- On the Dark Side of Calibration for Modern Neural Networks [65.83956184145477]
We show the breakdown of expected calibration error (ECE) into predicted confidence and refinement.
We highlight that regularisation-based calibration only focuses on naively reducing a model's confidence.
We find that many calibration approaches, such as label smoothing and mixup, lower the utility of a DNN by degrading its refinement.
arXiv Detail & Related papers (2021-06-17T11:04:14Z)
- Post-hoc Uncertainty Calibration for Domain Drift Scenarios [46.88826364244423]
We show that existing post-hoc calibration methods yield highly over-confident predictions under domain shift.
We introduce a simple strategy where perturbations are applied to samples in the validation set before performing the post-hoc calibration step.
arXiv Detail & Related papers (2020-12-20T18:21:13Z)
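The perturbation strategy in the last entry can be illustrated with ordinary temperature scaling, the most common post-hoc calibration step. The sketch below is an illustrative assumption, not code from any paper above: it grid-searches a single temperature on held-out logits, and `noise_std` optionally Gaussian-perturbs those logits first, a crude stand-in for perturbing the validation samples themselves before calibration.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax, numerically stabilised.
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the true labels at temperature T.
    p = softmax(logits, T)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(val_logits, val_labels, noise_std=0.0, seed=0):
    # Grid-search the temperature minimising validation NLL. noise_std > 0
    # perturbs the logits before fitting (stand-in for perturbing samples).
    rng = np.random.default_rng(seed)
    logits = val_logits + rng.normal(0.0, noise_std, val_logits.shape)
    grid = np.linspace(0.5, 5.0, 91)
    return min(grid, key=lambda T: nll(logits, val_labels, T))

# Toy demo: draw labels from well-calibrated logits, then sharpen the
# logits 3x; the fitted temperature should come out well above 1.
rng = np.random.default_rng(1)
calibrated = rng.normal(size=(500, 4))
u = rng.random((500, 1))
labels = np.minimum((u > softmax(calibrated).cumsum(axis=1)).sum(axis=1), 3)
T = fit_temperature(3.0 * calibrated, labels)
print(T > 1.0)  # the sharpened logits are overconfident
```

The observation motivating that entry is that a temperature fitted on clean in-domain validation data tends to be too low once the test distribution shifts; fitting on perturbed validation data pushes the temperature toward values that remain sensible under domain drift.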
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.