Leveraging Gradients for Unsupervised Accuracy Estimation under
Distribution Shift
- URL: http://arxiv.org/abs/2401.08909v2
- Date: Fri, 1 Mar 2024 10:21:42 GMT
- Title: Leveraging Gradients for Unsupervised Accuracy Estimation under
Distribution Shift
- Authors: Renchunzi Xie, Ambroise Odonnat, Vasilii Feofanov, Ievgen Redko,
Jianfeng Zhang, Bo An
- Abstract summary: Estimating test accuracy without access to the ground-truth test labels under varying test environments is a challenging, yet extremely important problem.
We use the norm of the classification-layer gradients, backpropagated from the cross-entropy loss after only one gradient step over the test data.
Our key idea is that a model requires gradient adjustments of larger magnitude when it fails to generalize to a distribution-shifted test set.
- Score: 25.951051758560702
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimating test accuracy without access to the ground-truth test labels under
varying test environments is a challenging, yet extremely important problem in
the safe deployment of machine learning algorithms. Existing works rely on the
information from either the outputs or the extracted features of neural
networks to formulate an estimation score correlating with the ground-truth
test accuracy. In this paper, we investigate--both empirically and
theoretically--how the information provided by the gradients can be predictive
of the ground-truth test accuracy even under a distribution shift.
Specifically, we use the norm of classification-layer gradients, backpropagated
from the cross-entropy loss after only one gradient step over test data. Our
key idea is that a model requires gradient adjustments of larger magnitude when
it fails to generalize to a distribution-shifted test set. We provide
theoretical insights highlighting the main ingredients that ensure the
empirical success of such an approach. Extensive experiments
conducted on diverse distribution shifts and model structures demonstrate that
our method significantly outperforms state-of-the-art algorithms.
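As a rough illustration of this idea, the sketch below (not the authors' released code) computes an average gradient-norm score in PyTorch. The abstract does not state which targets the test-time cross-entropy uses, so the sketch falls back on the model's own hard pseudo-labels, and `classifier_head` is assumed to be the final linear layer of `model`; both choices are assumptions for illustration only.

```python
# Hedged sketch of a gradient-norm accuracy-estimation score.
# Assumptions: the cross-entropy target is the model's own hard pseudo-label,
# and `classifier_head` is the final linear layer (a submodule of `model`).
import torch
import torch.nn.functional as F

def gradient_norm_score(model, classifier_head, test_loader, device="cpu"):
    """Average norm of the classification-layer gradients over the test set."""
    model.eval()
    scores = []
    for batch in test_loader:
        x = batch[0] if isinstance(batch, (list, tuple)) else batch
        x = x.to(device)
        model.zero_grad()
        logits = model(x)                     # forward pass on unlabeled inputs
        pseudo_labels = logits.argmax(dim=1)  # assumption: self-predicted labels
        loss = F.cross_entropy(logits, pseudo_labels)
        loss.backward()                       # one backward pass, no weight update
        grad = torch.cat([p.grad.flatten() for p in classifier_head.parameters()])
        scores.append(grad.norm().item())
    return sum(scores) / len(scores)
```

Under the paper's intuition, a larger score means the model would need a bigger adjustment to fit the shifted test data, so the score is expected to correlate negatively with the unknown test accuracy.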
Related papers
- Adapting Conformal Prediction to Distribution Shifts Without Labels [16.478151550456804]
Conformal prediction (CP) enables machine learning models to output prediction sets with a guaranteed coverage rate.
Our goal is to improve the quality of CP-generated prediction sets using only unlabeled data from the test domain.
This is achieved by two new methods, ECP and EACP, which adjust the score function in CP according to the base model's uncertainty on the unlabeled test data.
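The summary above does not describe how ECP and EACP modify the score function, so the sketch below only shows the standard split conformal prediction step that such methods build on, plus a purely hypothetical entropy-based inflation of the threshold as a placeholder for an uncertainty-aware adjustment.

```python
# Split conformal prediction sketch (numpy). The entropy-based threshold
# inflation is a hypothetical placeholder, not the ECP/EACP rule.
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1,
                              uncertainty_adjust=False):
    """Return a boolean (n_test, n_classes) mask of prediction sets."""
    n = len(cal_labels)
    # Non-conformity score: 1 - probability of the true class on calibration data.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample corrected (1 - alpha) quantile of the calibration scores.
    q = np.quantile(cal_scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    if uncertainty_adjust:
        # Hypothetical adjustment: enlarge the threshold on high-entropy inputs.
        ent = -np.sum(test_probs * np.log(test_probs + 1e-12), axis=1, keepdims=True)
        q = q * (1.0 + ent / np.log(test_probs.shape[1]))
    return (1.0 - test_probs) <= q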
arXiv Detail & Related papers (2024-06-03T15:16:02Z)
- MANO: Exploiting Matrix Norm for Unsupervised Accuracy Estimation Under Distribution Shifts [25.643876327918544]
Current logit-based methods are vulnerable to overconfidence issues, leading to prediction bias, especially under natural shifts.
We propose MaNo, which applies a data-dependent normalization on the logits to reduce prediction bias, and takes the $L_p$ norm of the matrix of normalized logits as the estimation score.
MaNo achieves state-of-the-art performance across various architectures in the presence of synthetic, natural, or subpopulation shifts.
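A minimal sketch of a score along these lines is shown below; the softmax used as the normalization step is only a stand-in assumption for the paper's data-dependent normalization, and `p` is left as a free parameter.

```python
# Sketch of an L_p matrix-norm score over normalized logits (PyTorch).
# Assumption: softmax stands in for the paper's data-dependent normalization.
import torch
import torch.nn.functional as F

def matrix_norm_score(logits: torch.Tensor, p: float = 4.0) -> float:
    """logits: (n_samples, n_classes) raw outputs on the unlabeled test set."""
    normalized = F.softmax(logits, dim=1)  # stand-in normalization step
    # Entrywise L_p norm of the normalized logit matrix, averaged over entries
    # so the score is comparable across test sets of different sizes.
    return normalized.abs().pow(p).mean().pow(1.0 / p).item()
```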
arXiv Detail & Related papers (2024-05-29T10:45:06Z)
- Robust Calibration with Multi-domain Temperature Scaling [86.07299013396059]
We develop a systematic calibration model to handle distribution shifts by leveraging data from multiple domains.
Our proposed method -- multi-domain temperature scaling -- uses the heterogeneity in the domains to improve calibration robustness under distribution shift.
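The summary above does not spell out the multi-domain aggregation, so the sketch below only shows the standard single-domain temperature-scaling building block; fitting one temperature per source domain and combining them (e.g., by averaging) is an assumption about how such a method could be assembled.

```python
# Standard temperature-scaling building block (PyTorch). The multi-domain
# aggregation of per-domain temperatures is not shown and is left as an
# assumption (e.g., averaging the fitted temperatures across source domains).
import torch
import torch.nn.functional as F

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Fit a scalar temperature by minimizing NLL on held-out (logits, labels)."""
    log_t = torch.zeros(1, requires_grad=True)   # optimize log T so that T > 0
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()
```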
arXiv Detail & Related papers (2022-06-06T17:32:12Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
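A minimal numpy sketch of ATC as described above, using maximum softmax confidence as the score (the paper also studies other confidence functions):

```python
# Sketch of Average Thresholded Confidence (ATC) with max-softmax confidence.
import numpy as np

def atc_predict_accuracy(src_probs, src_labels, tgt_probs):
    """Predict target accuracy from labeled source and unlabeled target predictions."""
    src_conf = src_probs.max(axis=1)
    src_acc = (src_probs.argmax(axis=1) == src_labels).mean()
    # Pick the threshold so that the fraction of source points above it
    # matches the source accuracy.
    t = np.quantile(src_conf, 1.0 - src_acc)
    # Predicted target accuracy: fraction of target points above the threshold.
    return float((tgt_probs.max(axis=1) > t).mean())
```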
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles [38.23896575179384]
We propose a principled and practically effective framework that simultaneously addresses the two tasks of error detection and unsupervised accuracy estimation.
On iWildCam, one instantiation reduces the estimation error for unsupervised accuracy estimation by at least 70% and improves the F1 score for error detection by at least 4.7%.
arXiv Detail & Related papers (2021-06-29T21:32:51Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Balance-Subsampled Stable Prediction [55.13512328954456]
We propose a novel balance-subsampled stable prediction (BSSP) algorithm based on the theory of fractional factorial design.
A design-theoretic analysis shows that the proposed method can reduce the confounding effects among predictors induced by the distribution shift.
Numerical experiments on both synthetic and real-world data sets demonstrate that our BSSP algorithm significantly outperforms the baseline methods for stable prediction across unknown test data.
arXiv Detail & Related papers (2020-06-08T07:01:38Z)