Bridging Precision and Confidence: A Train-Time Loss for Calibrating
Object Detection
- URL: http://arxiv.org/abs/2303.14404v1
- Date: Sat, 25 Mar 2023 08:56:21 GMT
- Title: Bridging Precision and Confidence: A Train-Time Loss for Calibrating
Object Detection
- Authors: Muhammad Akhtar Munir, Muhammad Haris Khan, Salman Khan, and Fahad Shahbaz Khan
- Abstract summary: We propose a novel auxiliary loss formulation that aims to align the class confidence of predicted bounding boxes with the accuracy (i.e., precision) of the predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error in both in-domain and out-domain scenarios.
- Score: 58.789823426981044
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) have enabled astounding progress in several
vision-based problems. Despite showing high predictive accuracy, recently,
several works have revealed that they tend to provide overconfident predictions
and thus are poorly calibrated. The majority of the works addressing the
miscalibration of DNNs fall under the scope of classification and consider only
in-domain predictions. However, there is little to no progress in studying the
calibration of DNN-based object detection models, which are central to many
vision-based safety-critical applications. In this paper, inspired by the
train-time calibration methods, we propose a novel auxiliary loss formulation
that explicitly aims to align the class confidence of bounding boxes with the
accuracy of predictions (i.e., precision). Since the original formulation of
our loss depends on the counts of true positives and false positives in a
minibatch, we develop a differentiable proxy of our loss that can be used
during training with other application-specific loss functions. We perform
extensive experiments on challenging in-domain and out-domain scenarios with
six benchmark datasets including MS-COCO, Cityscapes, Sim10k, and BDD100k. Our
results reveal that our train-time loss surpasses strong calibration baselines
in reducing calibration error in both in-domain and out-domain scenarios. Our source
code and pre-trained models are available at
https://github.com/akhtarvision/bpc_calibration
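The abstract describes the loss only at a high level. Below is a minimal sketch of how a precision-confidence alignment term could be implemented, assuming soft true-positive scores obtained from IoU matching; the function name bpc_loss and the exact proxy are illustrative assumptions, not the authors' released implementation (see the repository above for that).

```python
import torch

def bpc_loss(confidences, soft_tp, eps=1e-6):
    """Illustrative precision-confidence alignment term (an assumption, not the paper's exact loss).

    confidences: (N,) class confidences of the boxes predicted in a minibatch.
    soft_tp:     (N,) differentiable true-positive scores in [0, 1], e.g. derived
                 from the IoU of each prediction with its matched ground-truth box.
    """
    soft_fp = 1.0 - soft_tp                                     # soft false-positive mass
    precision = soft_tp.sum() / (soft_tp.sum() + soft_fp.sum() + eps)
    mean_conf = confidences.mean()                              # average predicted confidence
    return torch.abs(mean_conf - precision)                     # penalize confidence-precision gap
```

In practice such a term would be added to the usual detection losses with a weighting factor, e.g. total = detection_loss + lambda_cal * bpc_loss(conf, soft_tp) (a hypothetical usage).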
Related papers
- Cal-DETR: Calibrated Detection Transformer [67.75361289429013]
We propose a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR, UP-DETR and DINO.
We develop an uncertainty-guided logit modulation mechanism that leverages the uncertainty to modulate the class logits.
Results corroborate the effectiveness of Cal-DETR against the competing train-time methods in calibrating both in-domain and out-domain detections.
arXiv Detail & Related papers (2023-11-06T22:13:10Z)
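The summary above only names Cal-DETR's uncertainty-guided logit modulation. A minimal sketch of one plausible reading, in which uncertainty is taken as the variance of class logits across decoder layers (an assumption, not necessarily Cal-DETR's exact formulation), is:

```python
import torch

def modulate_logits(decoder_logits):
    """Hypothetical uncertainty-guided logit modulation for a DETR-style detector.

    decoder_logits: (L, Q, C) class logits from L decoder layers for Q queries and C classes.
    """
    uncertainty = decoder_logits.var(dim=0)               # (Q, C) disagreement across layers
    u = uncertainty / (uncertainty.max() + 1e-6)           # normalize to [0, 1]
    final_logits = decoder_logits[-1]                       # logits from the last decoder layer
    return final_logits * (1.0 - u)                         # damp logits where uncertainty is high
```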
- Multiclass Alignment of Confidence and Certainty for Network Calibration [10.15706847741555]
Recent studies reveal that deep neural networks (DNNs) are prone to making overconfident predictions.
We propose a new train-time calibration method featuring a simple, plug-and-play auxiliary loss: multi-class alignment of predictive mean confidence and predictive certainty (MACC).
Our method achieves state-of-the-art calibration performance for both in-domain and out-domain predictions.
arXiv Detail & Related papers (2023-09-06T00:56:24Z)
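The MACC summary does not spell out how confidence and certainty are aligned. As a rough, assumed sketch (using one minus normalized entropy as the certainty proxy, which may differ from the paper's definition):

```python
import torch
import torch.nn.functional as F

def confidence_certainty_gap(logits):
    """Illustrative confidence/certainty alignment term (assumed proxy, not the exact MACC loss).

    logits: (N, C) class logits for a minibatch.
    """
    probs = F.softmax(logits, dim=1)
    mean_conf = probs.max(dim=1).values.mean()                      # mean predicted confidence
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)    # per-sample predictive entropy
    certainty = 1.0 - entropy / torch.log(torch.tensor(float(logits.shape[1])))
    return torch.abs(mean_conf - certainty.mean())                  # penalize the gap
```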
- Multiclass Confidence and Localization Calibration for Object Detection [4.119048608751183]
Deep neural networks (DNNs) tend to make overconfident predictions, rendering them poorly calibrated.
We propose a new train-time technique for calibrating modern object detection methods.
arXiv Detail & Related papers (2023-06-14T06:14:16Z)
- Beyond calibration: estimating the grouping loss of modern neural networks [68.8204255655161]
Proper scoring rule theory shows that given the calibration loss, the missing piece to characterize individual errors is the grouping loss.
We show that modern neural network architectures in vision and NLP exhibit grouping loss, notably in distribution-shift settings.
arXiv Detail & Related papers (2022-10-28T07:04:20Z)
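For context, grouping loss appears in the decomposition of a proper scoring rule; a common statement of that decomposition (from general calibration theory, not quoted from the paper above) is:

```latex
% Expected proper scoring-rule loss (e.g. Brier score) splits into three terms;
% the refinement loss is the sum of the irreducible and grouping terms.
\[
\mathbb{E}[\ell]
  = \underbrace{\mathcal{L}_{\mathrm{irreducible}}}_{\text{inherent label noise}}
  + \underbrace{\mathcal{L}_{\mathrm{grouping}}}_{\text{heterogeneity within a confidence level}}
  + \underbrace{\mathcal{L}_{\mathrm{calibration}}}_{\text{confidence vs. accuracy mismatch}}
\]
```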
- On the Dark Side of Calibration for Modern Neural Networks [65.83956184145477]
We show the breakdown of expected calibration error (ECE) into predicted confidence and refinement.
We highlight that regularisation-based calibration only focuses on naively reducing a model's confidence.
We find that many calibration approaches, such as label smoothing and mixup, lower the utility of a DNN by degrading its refinement.
arXiv Detail & Related papers (2021-06-17T11:04:14Z)
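As background for the confidence/refinement discussion above, the standard equal-width-bin ECE can be computed as follows (a generic sketch, not the paper's exact evaluation code):

```python
import torch

def expected_calibration_error(confidences, correct, n_bins=15):
    """Binned ECE: average |accuracy - confidence| gap, weighted by bin occupancy.

    confidences: (N,) maximum softmax probabilities.
    correct:     (N,) 1.0 where the prediction was correct, 0.0 otherwise.
    """
    ece = torch.zeros(1)
    edges = torch.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()                  # empirical accuracy in the bin
            conf = confidences[in_bin].mean()             # average confidence in the bin
            ece += in_bin.float().mean() * (acc - conf).abs()
    return ece.item()
```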
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate a method we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
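A minimal PyTorch sketch of the prediction-time batch normalization idea, assuming test data arrives in reasonably large batches (the helper name is generic, not from the paper's code):

```python
import torch
import torch.nn as nn

def predict_with_batch_stats(model, x):
    """Normalize with the test batch's own statistics instead of the training running averages.

    Note: putting BN layers in train mode also updates their running statistics
    as a side effect; save and restore them if that matters.
    """
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.train()                     # train mode => use current-batch statistics
    with torch.no_grad():
        return model(x)
```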
- On Calibration of Mixup Training for Deep Neural Networks [1.6242924916178283]
We argue and provide empirical evidence that, due to its fundamentals, Mixup does not necessarily improve calibration.
Our loss is inspired by Bayes decision theory and introduces a new training framework for designing losses for probabilistic modelling.
We provide state-of-the-art accuracy with consistent improvements in calibration performance.
arXiv Detail & Related papers (2020-03-22T16:54:31Z)
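For reference, the standard mixup training step whose calibration effect the paper questions looks roughly like this (a generic sketch, not the authors' proposed Bayes-inspired loss):

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_step(model, x, y, alpha=0.2):
    """One standard mixup step: convexly combine inputs and the corresponding losses."""
    lam = float(np.random.beta(alpha, alpha))      # mixing coefficient
    perm = torch.randperm(x.size(0))               # pair each sample with a random partner
    x_mix = lam * x + (1.0 - lam) * x[perm]
    logits = model(x_mix)
    return lam * F.cross_entropy(logits, y) + (1.0 - lam) * F.cross_entropy(logits, y[perm])
```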
- Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
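The focal loss referenced above is the standard FL(p_t) = -(1 - p_t)^gamma * log(p_t); a compact multi-class version is sketched below (the paper's adaptive, sample-dependent gamma schedules are omitted, and gamma=3 is only a typical choice):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=3.0):
    """Multi-class focal loss: down-weights examples the model already classifies confidently."""
    ce = F.cross_entropy(logits, targets, reduction="none")   # -log p_t per sample
    pt = torch.exp(-ce)                                        # probability of the true class
    return ((1.0 - pt) ** gamma * ce).mean()
```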
This list is automatically generated from the titles and abstracts of the papers on this site.