Training on Test Data with Bayesian Adaptation for Covariate Shift
- URL: http://arxiv.org/abs/2109.12746v1
- Date: Mon, 27 Sep 2021 01:09:08 GMT
- Title: Training on Test Data with Bayesian Adaptation for Covariate Shift
- Authors: Aurick Zhou, Sergey Levine
- Abstract summary: Deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.
We derive a Bayesian model that provides for a well-defined relationship between unlabeled inputs under distributional shift and model parameters.
We show that our method improves both accuracy and uncertainty estimation.
- Score: 96.3250517412545
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When faced with distribution shift at test time, deep neural networks often
make inaccurate predictions with unreliable uncertainty estimates. While
improving the robustness of neural networks is one promising approach to
mitigate this issue, an appealing alternative to robustifying networks against
all possible test-time shifts is to instead directly adapt them to unlabeled
inputs from the particular distribution shift we encounter at test time.
However, this poses a challenging question: in the standard Bayesian model for
supervised learning, unlabeled inputs are conditionally independent of model
parameters when the labels are unobserved, so what can unlabeled data tell us
about the model parameters at test-time? In this paper, we derive a Bayesian
model that provides for a well-defined relationship between unlabeled inputs
under distributional shift and model parameters, and show how approximate
inference in this model can be instantiated with a simple regularized entropy
minimization procedure at test-time. We evaluate our method on a variety of
distribution shifts for image classification, including image corruptions,
natural distribution shifts, and domain adaptation settings, and show that our
method improves both accuracy and uncertainty estimation.
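The abstract describes instantiating approximate inference in the proposed Bayesian model as regularized entropy minimization on unlabeled test inputs. A minimal numpy sketch of that idea, assuming a linear softmax classifier and a simple L2 penalty toward the source weights as a stand-in for the paper's Bayesian regularizer (all names and hyperparameters here are illustrative, not the authors' implementation):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(W, X):
    """Average predictive entropy of a linear softmax classifier on X."""
    p = softmax(X @ W)
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=1))

def adapt(W0, X_test, lam=0.1, lr=0.1, steps=100):
    """Test-time adaptation sketch: descend the mean prediction entropy
    on unlabeled test inputs, regularized to stay near the source
    solution W0 via an L2 penalty lam * ||W - W0||^2."""
    W = W0.copy()
    n = X_test.shape[0]
    for _ in range(steps):
        p = softmax(X_test @ W)
        logp = np.log(p + 1e-12)
        H = -np.sum(p * logp, axis=1, keepdims=True)
        # analytic gradient of entropy w.r.t. logits: -p * (log p + H)
        dZ = -p * (logp + H) / n
        grad = X_test.T @ dZ + 2.0 * lam * (W - W0)
        W -= lr * grad
    return W
```

Since the penalty is zero at W0, any decrease in the regularized objective implies the adapted weights have strictly lower predictive entropy than the source weights, while the penalty keeps them from collapsing to degenerate one-hot predictions.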
Related papers
- AdapTable: Test-Time Adaptation for Tabular Data via Shift-Aware Uncertainty Calibrator and Label Distribution Handler [29.395855812763617]
In this paper, we introduce AdapTable, a novel test-time adaptation method that modifies output probabilities by estimating target label distributions and adjusting initial probabilities based on uncertainty.
Experiments on both natural distribution shifts and synthetic corruptions demonstrate the adaptation efficacy of the proposed method.
arXiv Detail & Related papers (2024-07-15T15:02:53Z)
- Rethinking Precision of Pseudo Label: Test-Time Adaptation via Complementary Learning [10.396596055773012]
We propose a novel complementary learning approach to enhance test-time adaptation.
In test-time adaptation tasks, information from the source domain is typically unavailable.
We highlight that the risk function of complementary labels is consistent with their vanilla loss formulation.
arXiv Detail & Related papers (2023-01-15T03:36:33Z)
- Certifying Model Accuracy under Distribution Shifts [151.67113334248464]
We present provable robustness guarantees on the accuracy of a model under bounded Wasserstein shifts of the data distribution.
We show that a simple procedure that randomizes the input of the model within a transformation space is provably robust to distributional shifts under the transformation.
arXiv Detail & Related papers (2022-01-28T22:03:50Z)
- MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test-time adaptation; however, each introduces additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable.
arXiv Detail & Related papers (2021-10-18T17:55:11Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.