Calibration Meets Explanation: A Simple and Effective Approach for Model
Confidence Estimates
- URL: http://arxiv.org/abs/2211.03041v1
- Date: Sun, 6 Nov 2022 06:17:21 GMT
- Title: Calibration Meets Explanation: A Simple and Effective Approach for Model
Confidence Estimates
- Authors: Dongfang Li, Baotian Hu, Qingcai Chen
- Abstract summary: We propose a method named CME that leverages model explanations to make the model less confident on examples with non-inductive attributions.
We conduct extensive experiments on six datasets with two popular pre-trained language models.
Our findings highlight that model explanations can help calibrate posterior estimates.
- Score: 21.017890579840145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Calibration strengthens the trustworthiness of black-box models by producing
more accurate confidence estimates on given examples. However, little is known
about whether model explanations can help confidence calibration. Intuitively,
humans look at important feature attributions and decide whether the model is
trustworthy. Similarly, explanations can tell us when the model may or may not
know. Inspired by this, we propose a method named CME that leverages model
explanations to make the model less confident on examples with non-inductive
attributions. The idea is that when the model is not highly confident, it is
difficult to identify strong indications of any class, so the tokens accordingly
do not receive high attribution scores for any class, and vice versa. We conduct
extensive experiments on six datasets with two popular pre-trained language
models in both in-domain and out-of-domain settings. The results show that CME
improves calibration performance in all settings, and the expected calibration
errors are reduced further when CME is combined with temperature scaling. Our
findings highlight that model explanations can help calibrate posterior
estimates.
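The core intuition lends itself to a short illustration. The sketch below is not the authors' implementation: the attribution method (gradient x input), the tempering rule, and all function names are assumptions made for this example. It flattens the predictive distribution when no token carries a strong attribution, and includes the standard expected calibration error (ECE) routine used to evaluate calibration methods like CME.

```python
import torch
import torch.nn.functional as F

def grad_x_input_attributions(model, inputs_embeds, target_class):
    """Token attribution scores via gradient x input (one common choice;
    the paper may use a different attribution method)."""
    inputs_embeds = inputs_embeds.clone().detach().requires_grad_(True)
    logits = model(inputs_embeds=inputs_embeds).logits  # assumes an HF-style classifier
    logits[0, target_class].backward()
    # One non-negative score per token: L2 norm over the embedding dimension.
    return (inputs_embeds.grad * inputs_embeds).norm(dim=-1).squeeze(0)

def explanation_tempered_probs(logits, attributions, alpha=1.0):
    """Flatten the predictive distribution when no token shows strong evidence.
    `alpha` is a made-up knob for how hard weak explanations raise the temperature."""
    strength = attributions.max().clamp(min=1e-6)  # strongest single-token signal
    temperature = 1.0 + alpha / strength           # weak evidence -> high temperature
    return F.softmax(logits / temperature, dim=-1)

def expected_calibration_error(confidences, is_correct, n_bins=10):
    """Standard ECE: bin by confidence, average the |accuracy - confidence| gaps."""
    ece = torch.tensor(0.0)
    for i in range(n_bins):
        lo, hi = i / n_bins, (i + 1) / n_bins
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = (is_correct[in_bin].float().mean() - confidences[in_bin].mean()).abs()
            ece = ece + in_bin.float().mean() * gap
    return ece.item()
```

Per the abstract, composing an explanation-aware confidence estimate with a learned temperature fit on held-out data reduces ECE further; a temperature-fitting sketch follows the related-papers list below.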
Related papers
- Calibrating Large Language Models with Sample Consistency [76.23956851098598]
We explore the potential of deriving confidence from the distribution of multiple randomly sampled model generations, via three measures of consistency.
Results show that consistency-based calibration methods outperform existing post-hoc approaches.
We offer practical guidance on choosing suitable consistency metrics for calibration, tailored to the characteristics of various LMs.
arXiv Detail & Related papers (2024-02-21T16:15:20Z)
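For the sample-consistency entry above, a minimal sketch of one plausible agreement measure (majority-vote rate); `generate_answer` is a hypothetical stochastic decoding call, and the paper itself studies three consistency measures rather than this single one.

```python
from collections import Counter

def consistency_confidence(generate_answer, prompt, k=10):
    """Sample k answers and use the majority-vote agreement rate as confidence."""
    answers = [generate_answer(prompt) for _ in range(k)]  # stochastic decoding assumed
    top_answer, votes = Counter(answers).most_common(1)[0]
    return top_answer, votes / k
```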
- Selective Learning: Towards Robust Calibration with Dynamic Regularization [79.92633587914659]
Miscalibration in deep learning refers to a discrepancy between a model's predicted confidence and its actual performance.
We introduce Dynamic Regularization (DReg), which aims to learn what should be learned during training, thereby circumventing the confidence-adjustment trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z)
- Towards Calibrated Robust Fine-Tuning of Vision-Language Models [97.19901765814431]
This work proposes a robust fine-tuning method that improves both out-of-distribution (OOD) accuracy and confidence calibration simultaneously in vision-language models.
We show that both OOD classification and OOD calibration errors share an upper bound consisting of two terms computed on in-distribution (ID) data.
Based on this insight, we design a novel framework that conducts fine-tuning with a constrained multimodal contrastive loss enforcing a larger smallest singular value.
arXiv Detail & Related papers (2023-11-03T05:41:25Z)
- Calibration in Deep Learning: A Survey of the State-of-the-Art [7.6087138685470945]
Calibrating deep neural models plays an important role in building reliable, robust AI systems in safety-critical applications.
Recent work has shown that modern neural networks that possess high predictive capability are poorly calibrated and produce unreliable model predictions.
arXiv Detail & Related papers (2023-08-02T15:28:10Z)
- A Close Look into the Calibration of Pre-trained Language Models [56.998539510508515]
Pre-trained language models (PLMs) may fail to give reliable estimates of their predictive uncertainty.
We study how PLMs' calibration performance changes dynamically during training.
We extend two recently proposed learnable methods that directly collect data to train models to produce reasonable confidence estimates.
arXiv Detail & Related papers (2022-10-31T21:31:07Z)
- Revisiting Calibration for Question Answering [16.54743762235555]
We argue that the traditional evaluation of calibration does not reflect the usefulness of model confidence.
We propose a new calibration metric, MacroCE, that better captures whether the model assigns low confidence to wrong predictions and high confidence to correct predictions.
arXiv Detail & Related papers (2022-05-25T05:49:56Z)
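A hedged sketch of an instance-level metric in the spirit of the MacroCE description above (the paper's exact formulation may differ): it macro-averages two penalties so that over-confident wrong predictions and under-confident correct ones count equally.

```python
import torch

def macro_ce(confidences, is_correct):
    """Macro-average miscalibration over correct and incorrect predictions.
    Assumes both groups are non-empty; `confidences` is a float tensor in [0, 1],
    `is_correct` a matching boolean tensor."""
    ice_correct = (1.0 - confidences[is_correct]).mean()  # correct -> want high confidence
    ice_wrong = confidences[~is_correct].mean()           # wrong -> want low confidence
    return (0.5 * (ice_correct + ice_wrong)).item()
```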
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study in which participants interact with deception detection models trained to distinguish genuine from fake hotel reviews.
We observe that, for a linear bag-of-words model, participants with access to the feature coefficients during training cause a larger reduction in model confidence in the testing phase than the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Why Calibration Error is Wrong Given Model Uncertainty: Using Posterior Predictive Checks with Deep Learning [0.0]
We show that calibration error and its variants are almost always inappropriate to use in the presence of model uncertainty.
We show how this mistake can lead to trust in bad models and mistrust in good ones.
arXiv Detail & Related papers (2021-12-02T18:26:30Z)
- How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering [80.82194311274694]
We examine the question "how can we know when language models know, with confidence, the answer to a particular query?"
We examine three strong generative models -- T5, BART, and GPT-2 -- and study whether their probabilities on QA tasks are well calibrated.
We then examine methods to calibrate such models to make their confidence scores correlate better with the likelihood of correctness.
arXiv Detail & Related papers (2020-12-02T03:53:13Z)
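Finally, since the main abstract and several entries above use temperature scaling as the post-hoc baseline, here is a minimal sketch of the usual recipe: fit one scalar temperature by minimizing NLL on held-out logits, then divide logits by it at inference. This is a generic sketch, not any single paper's code, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def fit_temperature(dev_logits, dev_labels, steps=200, lr=0.01):
    """Learn a single scalar T > 0 on a held-out split.
    `dev_logits`: [N, C] float tensor; `dev_labels`: [N] long tensor."""
    log_t = torch.zeros(1, requires_grad=True)  # T = exp(log_t) keeps T positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(dev_logits / log_t.exp(), dev_labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

# Usage: probs = (test_logits / fit_temperature(dev_logits, dev_labels)).softmax(dim=-1)
```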