Consistency-Guided Temperature Scaling Using Style and Content
Information for Out-of-Domain Calibration
- URL: http://arxiv.org/abs/2402.15019v1
- Date: Thu, 22 Feb 2024 23:36:18 GMT
- Title: Consistency-Guided Temperature Scaling Using Style and Content
Information for Out-of-Domain Calibration
- Authors: Wonjeong Choi, Jungwuk Park, Dong-Jun Han, Younghyun Park, Jaekyun
Moon
- Abstract summary: We propose consistency-guided temperature scaling (CTS) to enhance out-of-domain calibration performance.
We take consistency into account along two aspects, style and content, the key components that characterize data samples in multi-domain settings.
This can be accomplished by employing only the source domains without compromising accuracy, making our scheme directly applicable to various trustworthy AI systems.
- Score: 24.89907794192497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research interest in the robustness of deep neural networks against domain
shift has been rising rapidly in recent years. Most existing works, however,
focus on improving the accuracy of the model rather than its calibration
performance, another important requirement for trustworthy AI systems.
Temperature scaling (TS), an accuracy-preserving post-hoc calibration method,
has proven effective in in-domain settings, but not in out-of-domain (OOD)
settings, because a validation set for the unseen domain cannot be obtained
beforehand. In this paper, we propose consistency-guided temperature scaling
(CTS), a new temperature scaling strategy that can significantly enhance the
OOD calibration performance by providing mutual supervision among data samples
in the source domains. Motivated by our observation that over-confidence
stemming from inconsistent sample predictions is the main obstacle to OOD
calibration, we guide the scaling process by taking consistency into account
along two aspects, style and content, the key components that characterize
data samples in multi-domain settings. Experimental results demonstrate that
our proposed strategy
outperforms existing works, achieving superior OOD calibration performance on
various datasets. This can be accomplished by employing only the source domains
without compromising accuracy, making our scheme directly applicable to various
trustworthy AI systems.
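As a point of reference, the sketch below shows plain temperature scaling fitted on source-domain validation logits, extended with a purely illustrative consistency penalty between paired samples. The CTS objective itself is not given in this abstract, so the pairing rule (pair_idx), the penalty form, and the weight lambda_c are assumptions, not the paper's formulation. All sketches in this digest are Python/PyTorch.

    import torch
    import torch.nn.functional as F

    def fit_temperature(logits, labels, pair_idx=None, lambda_c=0.0,
                        steps=200, lr=0.01):
        # logits: [N, C] validation logits pooled over the source domains
        # labels: [N] integer class labels
        # pair_idx: [N] index of an assumed "consistent partner" sample
        #           (e.g. same content under a different style); this
        #           pairing rule is hypothetical, not the paper's.
        log_t = torch.zeros(1, requires_grad=True)  # T = exp(log_t) stays > 0
        opt = torch.optim.Adam([log_t], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            scaled = logits / log_t.exp()
            loss = F.cross_entropy(scaled, labels)
            if pair_idx is not None and lambda_c > 0:
                log_p = F.log_softmax(scaled, dim=1)
                q = F.softmax(logits[pair_idx] / log_t.exp(), dim=1)
                # penalise disagreement between paired predictions, so that
                # inconsistently predicted samples push the temperature up
                loss = loss + lambda_c * F.kl_div(log_p, q,
                                                  reduction="batchmean")
            loss.backward()
            opt.step()
        return log_t.exp().item()

With lambda_c = 0 this reduces to vanilla TS; the extra term only indicates where a style/content consistency signal could enter the scaling objective.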
Related papers
- Cal-DETR: Calibrated Detection Transformer [67.75361289429013]
We propose a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR, UP-DETR and DINO.
We develop an uncertainty-guided logit modulation mechanism that leverages the uncertainty to modulate the class logits.
Results corroborate the effectiveness of Cal-DETR against competing train-time methods in calibrating both in-domain and out-of-domain detections; a minimal sketch of the modulation idea follows this entry.
arXiv Detail & Related papers (2023-11-06T22:13:10Z)
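The summary above names an uncertainty-guided logit modulation but not its form. A hedged sketch of one plausible variant, which estimates per-query uncertainty as the variance of class logits across decoder layers (an assumption for illustration; the real Cal-DETR rule may differ):

    import torch

    def modulate_logits(layer_logits):
        # layer_logits: [L, Q, C] class logits from L decoder layers
        uncertainty = layer_logits.var(dim=0).mean(dim=-1, keepdim=True)  # [Q, 1]
        weight = 1.0 - torch.tanh(uncertainty)  # high uncertainty -> damped logits
        return layer_logits[-1] * weight        # modulate the final-layer logits

Damping the logits of uncertain queries flattens their softmax scores, which is one way a train-time method can trade raw confidence for calibration.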
- Towards Calibrated Robust Fine-Tuning of Vision-Language Models [97.19901765814431]
This work proposes a robust fine-tuning method that improves both OOD accuracy and confidence calibration in vision-language models.
We show that both OOD classification and OOD calibration errors share an upper bound composed of two terms that depend only on ID data.
Based on this insight, we design a novel framework that fine-tunes with a constrained multimodal contrastive loss enforcing a larger smallest singular value; a sketch of this constraint follows this entry.
arXiv Detail & Related papers (2023-11-03T05:41:25Z)
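The exact constraint used in this work is not given in the summary above. Purely as an illustration, the sketch below adds a soft penalty rewarding a larger smallest singular value of the normalized image-feature matrix on top of a CLIP-style contrastive loss; the penalty weight lam is an assumed hyperparameter:

    import torch
    import torch.nn.functional as F

    def contrastive_with_sv_penalty(img_feat, txt_feat, tau=0.07, lam=0.1):
        img = F.normalize(img_feat, dim=1)
        txt = F.normalize(txt_feat, dim=1)
        logits = img @ txt.t() / tau
        targets = torch.arange(img.size(0), device=img.device)
        clip_loss = 0.5 * (F.cross_entropy(logits, targets) +
                           F.cross_entropy(logits.t(), targets))
        sigma_min = torch.linalg.svdvals(img)[-1]  # smallest singular value
        return clip_loss - lam * sigma_min         # reward a larger sigma_min

Subtracting sigma_min makes the optimizer prefer feature matrices whose smallest singular value is large, i.e. features that do not collapse along any single direction.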
- Multiclass Alignment of Confidence and Certainty for Network Calibration [10.15706847741555]
Recent studies reveal that deep neural networks (DNNs) are prone to making overconfident predictions.
We propose a new train-time calibration method featuring a simple, plug-and-play auxiliary loss: multi-class alignment of predictive mean confidence and predictive certainty (MACC).
Our method achieves state-of-the-art calibration performance for both in-domain and out-of-domain predictions; a sketch of the alignment idea follows this entry.
arXiv Detail & Related papers (2023-09-06T00:56:24Z)
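MACC's precise definitions of predictive mean confidence and certainty are not in the summary above; the sketch below takes certainty to be one minus normalized entropy, an assumption made only to show the shape of such an alignment loss:

    import math
    import torch
    import torch.nn.functional as F

    def macc_style_loss(logits):
        p = F.softmax(logits, dim=1)
        confidence = p.max(dim=1).values                       # per-sample max prob
        entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
        certainty = 1.0 - entropy / math.log(logits.size(1))   # scaled to [0, 1]
        return (confidence - certainty).abs().mean()

Added to the task loss at train time, such a term pulls a sample's stated confidence toward a certainty measure derived from its full predictive distribution.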
- PseudoCal: A Source-Free Approach to Unsupervised Uncertainty Calibration in Domain Adaptation [87.69789891809562]
Unsupervised domain adaptation (UDA) has seen remarkable advances in improving model accuracy on unlabeled target domains.
The calibration of predictive uncertainty in the target domain, a crucial aspect of safely deploying UDA models, has received limited attention.
We propose PseudoCal, a source-free calibration method that relies exclusively on unlabeled target data; a sketch of the idea follows this entry.
arXiv Detail & Related papers (2023-07-14T17:21:41Z)
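The summary above does not say how PseudoCal manufactures supervision from unlabeled data. The sketch below is a hedged stand-in: it mixes pairs of target inputs and keeps the dominant component's pseudo-label, because fitting a temperature directly on argmax pseudo-labels is degenerate (lowering T can only increase their likelihood):

    import torch
    import torch.nn.functional as F

    def source_free_temperature(model, x_target, lam=0.65, steps=200, lr=0.01):
        model.eval()
        with torch.no_grad():
            pseudo = model(x_target).argmax(dim=1)  # hard pseudo-labels
            perm = torch.randperm(x_target.size(0))
            x_mix = lam * x_target + (1 - lam) * x_target[perm]
            logits = model(x_mix)                   # fixed while fitting T
        labels = pseudo                             # lam > 0.5: first component dominates
        log_t = torch.zeros(1, requires_grad=True)
        opt = torch.optim.Adam([log_t], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            F.cross_entropy(logits / log_t.exp(), labels).backward()
            opt.step()
        return log_t.exp().item()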
- Expectation consistency for calibration of neural networks [24.073221004661427]
We introduce a novel calibration technique named expectation consistency (EC).
EC enforces that the average validation confidence coincides with the average proportion of correct labels.
We discuss examples where EC significantly outperforms temperature scaling; a sketch of the EC rule follows this entry.
arXiv Detail & Related papers (2023-03-05T11:21:03Z)
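As summarized above, EC pins the mean validation confidence to the validation accuracy. That condition fixes a single temperature, and since mean max-softmax confidence decreases monotonically in T, bisection finds it (a minimal sketch; the interval bounds are arbitrary):

    import torch
    import torch.nn.functional as F

    def ec_temperature(logits, labels, lo=0.05, hi=20.0, iters=50):
        acc = (logits.argmax(dim=1) == labels).float().mean()
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            conf = F.softmax(logits / mid, dim=1).max(dim=1).values.mean()
            if conf > acc:     # still over-confident -> raise the temperature
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

Unlike NLL-based temperature scaling, this matches a single moment of the confidence distribution, which is exactly the expectation-consistency condition stated above.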
- Beyond In-Domain Scenarios: Robust Density-Aware Calibration [48.00374886504513]
Calibrating deep learning models to yield uncertainty-aware predictions is crucial as deep neural networks are increasingly deployed in safety-critical applications.
We propose DAC, an accuracy-preserving, density-aware calibration method based on k-nearest neighbors (KNN).
We show that DAC boosts the robustness of calibration under domain shift and OOD while maintaining excellent in-domain predictive uncertainty estimates; a sketch follows this entry.
arXiv Detail & Related papers (2023-02-10T08:48:32Z)
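The summary above identifies DAC as a KNN-based, accuracy-preserving method but does not give its construction. The sketch below is an assumed simplification in which each test sample's temperature grows with its mean distance to its k nearest training features:

    import torch

    def density_aware_temperatures(test_feat, train_feat, base_T=1.0,
                                   k=10, alpha=1.0):
        d = torch.cdist(test_feat, train_feat)              # pairwise distances
        knn = d.topk(k, dim=1, largest=False).values.mean(dim=1)
        knn = knn / knn.mean()                              # normalise around 1
        T = base_T * (1.0 + alpha * (knn - 1.0))            # far from data -> larger T
        return T.clamp_min(1e-2)                            # keep temperatures positive

Dividing each sample's logits by its own T (logits / T.unsqueeze(1)) then softens predictions precisely where the feature density suggests the model is extrapolating.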
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
- Privacy Preserving Recalibration under Domain Shift [119.21243107946555]
We introduce a framework that abstracts out the properties of recalibration problems under differential privacy constraints.
We also design a novel recalibration algorithm, accuracy temperature scaling, that outperforms prior work on private datasets.
arXiv Detail & Related papers (2020-08-21T18:43:37Z)