Useful Confidence Measures: Beyond the Max Score
- URL: http://arxiv.org/abs/2210.14070v1
- Date: Tue, 25 Oct 2022 14:54:44 GMT
- Title: Useful Confidence Measures: Beyond the Max Score
- Authors: Gal Yona and Amir Feder and Itay Laish
- Abstract summary: We derive several confidence measures that depend on information beyond the maximum score.
We show that when models are evaluated on the out-of-distribution data ``out of the box'', using only the maximum score to inform the confidence measure is highly suboptimal.
- Score: 9.189382034558657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An important component in deploying machine learning (ML) in safety-critical
applications is having a reliable measure of confidence in the ML model's
predictions. For a classifier $f$ producing a probability vector $f(x)$ over
the candidate classes, the confidence is typically taken to be $\max_i f(x)_i$.
This approach is potentially limited, as it disregards the rest of the
probability vector. In this work, we derive several confidence measures that
depend on information beyond the maximum score, such as margin-based and
entropy-based measures, and empirically evaluate their usefulness, focusing on
NLP tasks with distribution shifts and Transformer-based models. We show that
when models are evaluated on the out-of-distribution data ``out of the box'',
using only the maximum score to inform the confidence measure is highly
suboptimal. In the post-processing regime (where the scores of $f$ can be
improved using additional in-distribution held-out data), this remains true,
albeit less significant. Overall, our results suggest that entropy-based
confidence is a surprisingly useful measure.
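As a concrete illustration of the measures discussed in the abstract, the sketch below computes the standard max-score confidence alongside margin-based and entropy-based alternatives from a classifier's probability vector $f(x)$. This is a minimal sketch, assuming a softmax probability vector as input; the function names are illustrative and not taken from the paper.

```python
# Illustrative sketch (not the paper's code): three confidence measures
# computed from a classifier's probability vector f(x).
import numpy as np

def max_score_confidence(p: np.ndarray) -> float:
    """Standard confidence: the maximum softmax probability."""
    return float(np.max(p))

def margin_confidence(p: np.ndarray) -> float:
    """Margin-based confidence: gap between the top-1 and top-2 scores."""
    top2 = np.sort(p)[-2:]  # two largest probabilities, ascending
    return float(top2[1] - top2[0])

def entropy_confidence(p: np.ndarray, eps: float = 1e-12) -> float:
    """Entropy-based confidence: one minus the normalized entropy, so that
    higher values mean more confident (0 = uniform, 1 = one-hot)."""
    entropy = -np.sum(p * np.log(p + eps))
    return float(1.0 - entropy / np.log(len(p)))

p = np.array([0.55, 0.35, 0.10])  # hypothetical softmax output over 3 classes
print(max_score_confidence(p), margin_confidence(p), entropy_confidence(p))
```

Unlike the max score, the margin and entropy measures are sensitive to how the remaining probability mass is distributed across the non-top classes.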
Related papers
- Cycles of Thought: Measuring LLM Confidence through Stable Explanations [53.15438489398938]
Large language models (LLMs) can reach and even surpass human-level accuracy on a variety of benchmarks, but their overconfidence in incorrect responses is still a well-documented failure mode.
We propose a framework for measuring an LLM's uncertainty with respect to the distribution of generated explanations for an answer.
arXiv Detail & Related papers (2024-06-05T16:35:30Z)
- Learning Confidence for Transformer-based Neural Machine Translation [38.679505127679846]
We propose an unsupervised confidence estimate learning jointly with the training of the neural machine translation (NMT) model.
We interpret confidence as how many hints the NMT model needs to make a correct prediction, with more hints indicating lower confidence.
We demonstrate that our learned confidence estimate achieves high accuracy on extensive sentence/word-level quality estimation tasks.
arXiv Detail & Related papers (2022-03-22T01:51:58Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold (a rough sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- MACEst: The reliable and trustworthy Model Agnostic Confidence Estimator [0.17188280334580192]
We argue that any confidence estimates based upon standard machine learning point prediction algorithms are fundamentally flawed.
We present MACEst, a Model Agnostic Confidence Estimator, which provides reliable and trustworthy confidence estimates.
arXiv Detail & Related papers (2021-09-02T14:34:06Z)
- SLOE: A Faster Method for Statistical Inference in High-Dimensional Logistic Regression [68.66245730450915]
We develop an improved method for debiasing predictions and estimating frequentist uncertainty for practical datasets.
Our main contribution is SLOE, an estimator of the signal strength with convergence guarantees that reduces the computation time of estimation and inference by orders of magnitude.
arXiv Detail & Related papers (2021-03-23T17:48:56Z)
- CoinDICE: Off-Policy Confidence Interval Estimation [107.86876722777535]
We study high-confidence behavior-agnostic off-policy evaluation in reinforcement learning.
We show in a variety of benchmarks that the confidence interval estimates are tighter and more accurate than existing methods.
arXiv Detail & Related papers (2020-10-22T12:39:11Z)
- Certifying Confidence via Randomized Smoothing [151.67113334248464]
Randomized smoothing has been shown to provide good certified-robustness guarantees for high-dimensional classification problems.
Most smoothing methods do not give us any information about the confidence with which the underlying classifier makes a prediction.
We propose a method to generate certified radii for the prediction confidence of the smoothed classifier.
arXiv Detail & Related papers (2020-09-17T04:37:26Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
- Optimal Confidence Regions for the Multinomial Parameter [15.851891538566585]
Construction of tight confidence regions and intervals is central to statistical inference and decision making.
This paper develops new theory characterizing confidence regions of minimum average volume for categorical data.
arXiv Detail & Related papers (2020-02-03T23:00:16Z)
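The Average Thresholded Confidence (ATC) summary above describes a simple, implementable recipe: fit a confidence threshold on labeled source data so that the fraction of examples above it matches the source accuracy, then predict target accuracy as the fraction of unlabeled target examples above that threshold. The sketch below is an assumption-laden illustration of that recipe, not the authors' released implementation; the names and toy data are hypothetical.

```python
# Rough sketch of the ATC recipe described above; not the authors' code.
import numpy as np

def fit_atc_threshold(source_conf: np.ndarray, source_correct: np.ndarray) -> float:
    """Choose a threshold t on labeled source data so that the fraction of
    examples with confidence above t matches the observed source accuracy."""
    accuracy = source_correct.mean()
    # The (1 - accuracy)-quantile leaves a fraction `accuracy` of the
    # confidences above the threshold.
    return float(np.quantile(source_conf, 1.0 - accuracy))

def predict_target_accuracy(target_conf: np.ndarray, threshold: float) -> float:
    """Predicted target accuracy: the fraction of unlabeled target examples
    whose confidence exceeds the learned threshold."""
    return float((target_conf > threshold).mean())

# Toy, entirely synthetic confidences (e.g., max softmax scores).
rng = np.random.default_rng(0)
src_conf = rng.uniform(0.5, 1.0, size=1000)
src_correct = (rng.uniform(size=1000) < src_conf).astype(float)
tgt_conf = rng.uniform(0.4, 0.95, size=1000)

threshold = fit_atc_threshold(src_conf, src_correct)
print("predicted target accuracy:", predict_target_accuracy(tgt_conf, threshold))
```

Any confidence measure can be plugged in as the score here, including the margin-based or entropy-based measures sketched after the abstract above.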
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.