Beyond In-Domain Scenarios: Robust Density-Aware Calibration
- URL: http://arxiv.org/abs/2302.05118v2
- Date: Tue, 4 Jul 2023 16:46:16 GMT
- Title: Beyond In-Domain Scenarios: Robust Density-Aware Calibration
- Authors: Christian Tomani, Futa Waseda, Yuesong Shen and Daniel Cremers
- Abstract summary: Calibrating deep learning models to yield uncertainty-aware predictions is crucial as deep neural networks get increasingly deployed in safety-critical applications.
We propose DAC, an accuracy-preserving as well as Density-Aware Calibration method based on k-nearest-neighbors (KNN).
We show that DAC boosts the robustness of calibration performance in domain-shift and OOD scenarios while maintaining excellent in-domain predictive uncertainty estimates.
- Score: 48.00374886504513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Calibrating deep learning models to yield uncertainty-aware predictions is crucial as deep neural networks get increasingly deployed in safety-critical applications. While existing post-hoc calibration methods achieve impressive results on in-domain test datasets, they are limited by their inability to yield reliable uncertainty estimates in domain-shift and out-of-domain (OOD) scenarios. We aim to bridge this gap by proposing DAC, an accuracy-preserving as well as Density-Aware Calibration method based on k-nearest-neighbors (KNN). In contrast to existing post-hoc methods, we utilize hidden layers of classifiers as a source of uncertainty-related information and study their importance. We show that DAC is a generic method that can readily be combined with state-of-the-art post-hoc methods. DAC boosts the robustness of calibration performance in domain-shift and OOD scenarios while maintaining excellent in-domain predictive uncertainty estimates. We demonstrate that DAC leads to consistently better calibration across a large number of model architectures, datasets, and metrics. Additionally, we show that DAC improves calibration substantially on recent large-scale neural networks pre-trained on vast amounts of data.
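To make the recipe concrete, here is a minimal sketch of density-aware confidence scaling in the spirit of the abstract, not the authors' implementation: hidden-layer features of a test input are compared to training features via KNN, and the average neighbor distance inflates a softmax temperature so that low-density (domain-shifted or OOD) inputs receive softer confidences. The class name `DensityAwareScaler`, the linear temperature rule, and the `alpha`/`base_temp` parameters are illustrative assumptions.

```python
# Hedged sketch of density-aware calibration via KNN over hidden features.
# The linear temperature rule and the alpha/base_temp parameters are
# assumptions for illustration, not the paper's exact DAC formulation.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class DensityAwareScaler:
    def __init__(self, k=50, base_temp=1.0, alpha=0.1):
        self.base_temp, self.alpha = base_temp, alpha
        self.knn = NearestNeighbors(n_neighbors=k)

    def fit(self, train_features):
        # train_features: hidden-layer activations collected on training data
        self.knn.fit(train_features)
        return self

    def calibrate(self, logits, features):
        dists, _ = self.knn.kneighbors(features)       # (n, k) neighbor distances
        density = dists.mean(axis=1, keepdims=True)    # large distance = low density
        temp = self.base_temp + self.alpha * density   # soften low-density predictions
        return softmax(logits / temp)
```

In practice `alpha` and `base_temp` would be tuned on a held-out calibration set; per the abstract, the paper's DAC draws on multiple hidden layers and is designed to compose with other post-hoc methods.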
Related papers
- Consistency Calibration: Improving Uncertainty Calibration via Consistency among Perturbed Neighbors [22.39558434131574]
We introduce the concept of consistency as an alternative perspective on model calibration.
We propose a post-hoc calibration method called Consistency Calibration (CC), which adjusts confidence based on the model's consistency across perturbed inputs.
We show that performing perturbations at the logit level significantly improves computational efficiency.
arXiv Detail & Related papers (2024-10-16T06:55:02Z)
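The CC summary above leaves the mechanism abstract; the following is a hedged sketch of logit-level perturbation consistency under stated assumptions (Gaussian noise, and the agreement rate used directly as the calibrated confidence; the paper's actual adjustment rule may differ).

```python
# Hypothetical logit-level consistency score: perturb logits with Gaussian
# noise and measure how often the predicted class survives. The noise scale
# and the use of agreement as confidence are assumptions, not the CC paper's.
import numpy as np

def consistency_confidence(logits, n_perturb=100, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    pred = logits.argmax(axis=1)                          # (n,) original labels
    noise = rng.normal(0.0, sigma, size=(n_perturb,) + logits.shape)
    perturbed = (logits[None] + noise).argmax(axis=2)     # (n_perturb, n)
    return (perturbed == pred[None]).mean(axis=0)         # agreement rate in [0, 1]
```

Perturbing logits rather than inputs is what makes such a scheme cheap: no extra forward passes are needed, which matches the efficiency claim in the summary.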
- Consistency-Guided Temperature Scaling Using Style and Content Information for Out-of-Domain Calibration [24.89907794192497]
We propose consistency-guided temperature scaling (CTS) to enhance out-of-domain calibration performance.
We take consistency into account in terms of two aspects, style and content, which are the key components for representing data samples in multi-domain settings.
This can be accomplished by employing only the source domains without compromising accuracy, making our scheme directly applicable to various trustworthy AI systems.
arXiv Detail & Related papers (2024-02-22T23:36:18Z)
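CTS builds on temperature scaling; its style and content consistency terms are not detailed in the summary above, so the sketch below shows only the underlying temperature-scaling step it extends, fit by minimizing negative log-likelihood on held-out logits.

```python
# Plain temperature scaling, the base mechanism CTS extends; the consistency
# terms from the paper are omitted here.
import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature(logits, labels):
    def nll(t):
        z = logits / t
        z = z - z.max(axis=1, keepdims=True)
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()
    # A single scalar temperature never changes the argmax, so accuracy
    # is preserved by construction.
    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x
```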
- Cal-DETR: Calibrated Detection Transformer [67.75361289429013]
We propose a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR, UP-DETR and DINO.
We develop an uncertainty-guided logit modulation mechanism that leverages the uncertainty to modulate the class logits.
Results corroborate the effectiveness of Cal-DETR against the competing train-time methods in calibrating both in-domain and out-domain detections.
arXiv Detail & Related papers (2023-11-06T22:13:10Z)
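The Cal-DETR summary names an uncertainty-guided logit modulation without giving its form; one plausible shape, sketched under assumptions, uses the variance of class logits across decoder layers as the uncertainty signal and damps logits where that variance is high. The `tanh` squashing is illustrative, not the paper's.

```python
# Hypothetical uncertainty-guided logit modulation for a DETR-style decoder.
import torch

def modulate_logits(layer_logits: torch.Tensor) -> torch.Tensor:
    # layer_logits: (num_decoder_layers, num_queries, num_classes)
    uncertainty = layer_logits.var(dim=0)      # disagreement across decoder layers
    weight = 1.0 - torch.tanh(uncertainty)     # assumed squashing into (0, 1]
    return layer_logits[-1] * weight           # damp logits where layers disagree
```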
- Multiclass Alignment of Confidence and Certainty for Network Calibration [10.15706847741555]
Recent studies reveal that deep neural networks (DNNs) are prone to making overconfident predictions.
We propose a new train-time calibration method, which features a simple, plug-and-play auxiliary loss known as multi-class alignment of predictive mean confidence and predictive certainty (MACC).
Our method achieves state-of-the-art calibration performance for both in-domain and out-domain predictions.
arXiv Detail & Related papers (2023-09-06T00:56:24Z)
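MACC's exact loss is defined in the paper; as a hedged approximation of "aligning predictive mean confidence with predictive certainty", the snippet below penalizes the gap between batch-mean confidence and batch-mean certainty, with certainty taken as one minus normalized entropy. Both choices are assumptions.

```python
# Sketch of an auxiliary confidence/certainty alignment loss (MACC-inspired;
# the batch-mean gap and entropy-based certainty are assumptions).
import torch
import torch.nn.functional as F

def alignment_loss(logits: torch.Tensor) -> torch.Tensor:
    probs = F.softmax(logits, dim=1)
    confidence = probs.max(dim=1).values                          # top-class probability
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    max_entropy = torch.log(torch.tensor(float(logits.shape[1])))
    certainty = 1.0 - entropy / max_entropy                       # 1 = fully certain
    return (confidence.mean() - certainty.mean()).abs()
```

As a plug-and-play term it would enter training as `loss = ce_loss + lam * alignment_loss(logits)`, with `lam` a tuned weight.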
- PseudoCal: A Source-Free Approach to Unsupervised Uncertainty Calibration in Domain Adaptation [87.69789891809562]
Unsupervised domain adaptation (UDA) has witnessed remarkable advancements in improving the accuracy of models for unlabeled target domains.
The calibration of predictive uncertainty in the target domain, a crucial aspect of the safe deployment of UDA models, has received limited attention.
We propose PseudoCal, a source-free calibration method that exclusively relies on unlabeled target data.
arXiv Detail & Related papers (2023-07-14T17:21:41Z)
- Multiclass Confidence and Localization Calibration for Object Detection [4.119048608751183]
Deep neural networks (DNNs) tend to make overconfident predictions, rendering them poorly calibrated.
We propose a new train-time technique for calibrating modern object detection methods.
arXiv Detail & Related papers (2023-06-14T06:14:16Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
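The summary above describes aligning box class confidence with prediction quality; one minimal plausible form, assuming predictions already matched to ground truth with IoU scores, is an absolute-gap penalty. The paper's actual loss differs in detail.

```python
# Hypothetical train-time alignment of detection confidence and localization
# quality; the absolute-gap form is an assumption for illustration.
import torch

def confidence_alignment_loss(confidences: torch.Tensor, ious: torch.Tensor) -> torch.Tensor:
    # confidences, ious: (num_matched_boxes,), both in [0, 1]
    return (confidences - ious).abs().mean()
```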
- Improving Uncertainty Calibration of Deep Neural Networks via Truth Discovery and Geometric Optimization [22.57474734944132]
We propose a truth discovery framework to integrate ensemble-based and post-hoc calibration methods.
On large-scale datasets including CIFAR and ImageNet, our method shows consistent improvement against state-of-the-art calibration approaches.
arXiv Detail & Related papers (2021-06-25T06:44:16Z)
- Transferable Calibration with Lower Bias and Variance in Domain Adaptation [139.4332115349543]
Domain Adaptation (DA) enables transferring a learning machine from a labeled source domain to an unlabeled target one.
How to estimate the predictive uncertainty of DA models is vital for decision-making in safety-critical scenarios.
TransCal can be easily applied to recalibrate existing DA methods.
arXiv Detail & Related papers (2020-07-16T11:09:36Z)
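The TransCal summary does not spell out the mechanism; a common recipe for calibration under domain shift, sketched here under assumptions, is importance-weighted temperature scaling, where labeled source logits are reweighted by estimated target/source density ratios (e.g., from a domain discriminator, not shown) before fitting the temperature.

```python
# Hedged sketch: importance-weighted temperature scaling for domain shift.
# `weights` are assumed precomputed target/source density-ratio estimates.
import numpy as np
from scipy.optimize import minimize_scalar

def fit_weighted_temperature(logits, labels, weights):
    def weighted_nll(t):
        z = logits / t
        z = z - z.max(axis=1, keepdims=True)
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        nll = -log_probs[np.arange(len(labels)), labels]
        return float((weights * nll).sum() / weights.sum())
    return minimize_scalar(weighted_nll, bounds=(0.05, 20.0), method="bounded").x
```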
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.