PAC Prediction Sets Under Covariate Shift
- URL: http://arxiv.org/abs/2106.09848v1
- Date: Thu, 17 Jun 2021 23:28:42 GMT
- Title: PAC Prediction Sets Under Covariate Shift
- Authors: Sangdon Park and Edgar Dobriban and Insup Lee and Osbert Bastani
- Abstract summary: Conveying uncertainty is important when there are changes to the underlying data distribution.
Most existing uncertainty quantification algorithms break down in the presence of such shifts.
We propose a novel approach that addresses this challenge by constructing probably approximately correct (PAC) prediction sets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An important challenge facing modern machine learning is how to rigorously
quantify the uncertainty of model predictions. Conveying uncertainty is
especially important when there are changes to the underlying data distribution
that might invalidate the predictive model. Yet, most existing uncertainty
quantification algorithms break down in the presence of such shifts. We propose
a novel approach that addresses this challenge by constructing probably
approximately correct (PAC) prediction sets in the presence of covariate
shift. Our approach focuses on the setting where there is a covariate shift
from the source distribution (where we have labeled training examples) to the
target distribution (for which we want to quantify uncertainty). Our algorithm
assumes given importance weights that encode how the probabilities of the
training examples change under the covariate shift. In practice, importance
weights typically need to be estimated; thus, we extend our algorithm to the
setting where we are given confidence intervals for the importance weights
rather than their true value. We demonstrate the effectiveness of our approach
on various covariate shifts designed based on the DomainNet and ImageNet
datasets.
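The abstract's idea can be illustrated with a minimal sketch. This is not the paper's PAC algorithm (which derives a rigorous binomial-tail correction and handles interval-valued importance weights); it is a simplified importance-weighted split-conformal threshold, assuming known weights w(x) = p_target(x) / p_source(x). All function names are illustrative.

```python
import numpy as np

def weighted_conformal_threshold(scores, weights, alpha=0.1):
    """Pick a score threshold tau so that the importance-weighted
    empirical coverage of {score <= tau} is at least 1 - alpha.

    scores  : nonconformity scores on labeled source calibration data
    weights : importance weights w(x) = p_target(x) / p_source(x)
    """
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    # Normalized cumulative weight approximates coverage under the
    # target distribution rather than the source distribution.
    cum = np.cumsum(w) / np.sum(w)
    idx = int(np.searchsorted(cum, 1.0 - alpha))
    idx = min(idx, len(s) - 1)
    return s[idx]

def prediction_set(probs, tau):
    """Return the label set {y : nonconformity(y) <= tau},
    using 1 - p(y|x) as the nonconformity score."""
    return [y for y, p in enumerate(probs) if 1.0 - p <= tau]
```

With all weights equal, this reduces to ordinary split-conformal calibration; nonuniform weights tilt the threshold toward regions the target distribution emphasizes.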
Related papers
- Learning When the Concept Shifts: Confounding, Invariance, and Dimension Reduction [5.38274042816001]
In observational data, the distribution shift is often driven by unobserved confounding factors.
This motivates us to study the domain adaptation problem with observational data.
We show that a model using the learned lower-dimensional subspace achieves a nearly ideal gap between target and source risk.
arXiv Detail & Related papers (2024-06-22T17:43:08Z)
- Provable Adversarial Robustness for Group Equivariant Tasks: Graphs, Point Clouds, Molecules, and More [9.931513542441612]
We propose a sound notion of adversarial robustness that accounts for task equivariance.
However, certification methods are unavailable for many models.
We derive the first architecture-specific graph edit distance certificates, i.e. sound robustness guarantees for isomorphism equivariant tasks like node classification.
arXiv Detail & Related papers (2023-12-05T12:09:45Z)
- Adapting to Latent Subgroup Shifts via Concepts and Proxies [82.01141290360562]
We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain.
For continuous observations, we propose a latent variable model specific to the data generation process at hand.
arXiv Detail & Related papers (2022-12-21T18:30:22Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
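A minimal sketch of the DoC idea, assuming max-softmax probability as the confidence measure: the drop in average confidence from source to target is used as an estimate of the drop in accuracy. Function names are illustrative; the paper's contribution lies in validating this estimator across many shifts.

```python
import numpy as np

def difference_of_confidences(source_probs, target_probs):
    """DoC: drop in mean max-softmax confidence from the (labeled)
    source data to the (unlabeled) target data."""
    src_conf = source_probs.max(axis=1).mean()
    tgt_conf = target_probs.max(axis=1).mean()
    return src_conf - tgt_conf

def predict_target_accuracy(source_accuracy, doc):
    # DoC-based estimate: the accuracy drop is assumed to track
    # the confidence drop.
    return source_accuracy - doc
```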
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
arXiv Detail & Related papers (2020-10-08T02:10:54Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We propose prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
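A minimal sketch of the technique on a single feature layer: prediction-time batch normalization replaces the running statistics accumulated during training with the test batch's own mean and variance. Parameter names are illustrative.

```python
import numpy as np

def prediction_time_batch_norm(features, gamma, beta, eps=1e-5):
    """Normalize a test batch with its *own* statistics, rather than
    the running mean/variance stored at training time. Under covariate
    shift, the test-batch statistics better match the shifted inputs.

    features : (batch, dim) activations at test time
    gamma, beta : learned scale and shift parameters
    """
    mu = features.mean(axis=0)
    var = features.var(axis=0)
    return gamma * (features - mu) / np.sqrt(var + eps) + beta
```

This requires reasonably large test batches for the statistics to be stable, which is one reason the method can underperform on small batches or more natural shifts.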
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
- Calibrated Prediction with Covariate Shift via Unsupervised Domain Adaptation [25.97333838935589]
Uncertainty estimates are an important tool for helping autonomous agents or human decision makers understand and leverage predictive models.
Existing algorithms can overestimate certainty, possibly yielding a false sense of confidence in the predictive model.
arXiv Detail & Related papers (2020-02-29T20:31:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.