Stable Prediction with Model Misspecification and Agnostic Distribution Shift
- URL: http://arxiv.org/abs/2001.11713v1
- Date: Fri, 31 Jan 2020 08:56:35 GMT
- Title: Stable Prediction with Model Misspecification and Agnostic Distribution Shift
- Authors: Kun Kuang, Ruoxuan Xiong, Peng Cui, Susan Athey, Bo Li
- Abstract summary: In machine learning algorithms, two main assumptions are required to guarantee performance.
One is that the test data are drawn from the same distribution as the training data, and the other is that the model is correctly specified.
Under model misspecification, distribution shift between training and test data leads to inaccuracy of parameter estimation and instability of prediction across unknown test data.
We propose a novel Decorrelated Weighting Regression (DWR) algorithm which jointly optimizes a variable decorrelation regularizer and a weighted regression model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For many machine learning algorithms, two main assumptions are required to
guarantee performance. One is that the test data are drawn from the same
distribution as the training data, and the other is that the model is correctly
specified. In real applications, however, we often have little prior knowledge
on the test data and on the underlying true model. Under model
misspecification, agnostic distribution shift between training and test data
leads to inaccuracy of parameter estimation and instability of prediction
across unknown test data. To address these problems, we propose a novel
Decorrelated Weighting Regression (DWR) algorithm which jointly optimizes a
variable decorrelation regularizer and a weighted regression model. The
variable decorrelation regularizer estimates a weight for each sample such that
variables are decorrelated on the weighted training data. Then, these weights
are used in the weighted regression to improve the accuracy of estimating the
effect of each variable, thus helping to improve the stability of prediction
across unknown test data. Extensive experiments clearly demonstrate that our
DWR algorithm can significantly improve the accuracy of parameter estimation
and stability of prediction with model misspecification and agnostic
distribution shift.
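The two-step structure of the abstract can be illustrated with a minimal sketch: first learn per-sample weights that shrink the pairwise (weighted) covariances between variables, then run weighted least squares with those weights. This is an assumption-laden illustration, not the paper's exact solver: the plain gradient scheme, the mean-one weight normalization, and all hyperparameters below are illustrative choices.

```python
import numpy as np

def dwr_fit(X, y, iters=300, lr=0.1):
    """Sketch of Decorrelated Weighting Regression (DWR).
    Step 1: learn sample weights that reduce off-diagonal entries of the
    weighted covariance matrix (the decorrelation regularizer).
    Step 2: weighted least squares with the learned weights."""
    n, p = X.shape
    w = np.ones(n)
    for _ in range(iters):
        mu = (w[:, None] * X).sum(0) / w.sum()   # weighted feature means
        Xc = X - mu
        C = (Xc * w[:, None]).T @ Xc / n         # weighted covariance
        off = C - np.diag(np.diag(C))            # penalize off-diagonal only
        # gradient of sum of squared off-diagonal covariances w.r.t. w
        # (centering treated as fixed within each step)
        grad = np.einsum('ij,jk,ik->i', Xc, off, Xc) * (2.0 / n)
        w = np.clip(w - lr * grad, 1e-6, None)   # keep weights positive
        w *= n / w.sum()                         # renormalize to mean 1
    # Step 2: weighted least squares on the reweighted sample
    Xb = np.hstack([X, np.ones((n, 1))])         # add intercept column
    W = np.diag(w)
    beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return w, beta
```

On data with strongly correlated predictors, the learned weights shift mass toward samples where the predictors disagree, which is what lets the subsequent regression attribute the outcome to the right variable even under misspecification.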
Related papers
- Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting [55.17761802332469]
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in prohibitive optimization costs for many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
arXiv Detail & Related papers (2024-03-18T05:49:45Z)
- Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing [11.291393872745951]
Existing regression models tend to fall short in both accuracy and uncertainty estimation when the label distribution is imbalanced.
We propose a probabilistic deep learning model, dubbed variational imbalanced regression (VIR).
VIR not only performs well in imbalanced regression but also naturally produces reasonable uncertainty estimation as a byproduct.
arXiv Detail & Related papers (2023-06-11T06:27:06Z)
- Estimating Model Performance under Domain Shifts with Class-Specific Confidence Scores [25.162667593654206]
We introduce class-wise calibration within the framework of performance estimation for imbalanced datasets.
We conduct experiments on four tasks and find the proposed modifications consistently improve the estimation accuracy for imbalanced datasets.
arXiv Detail & Related papers (2022-07-20T15:04:32Z) - Conformal prediction for the design problem [72.14982816083297]
In many real-world deployments of machine learning, we use a prediction algorithm to choose what data to test next.
In such settings, there is a distinct type of distribution shift between the training and test data.
We introduce a method to quantify predictive uncertainty in such settings.
arXiv Detail & Related papers (2022-02-08T02:59:12Z) - Robust Bayesian Inference for Discrete Outcomes with the Total Variation
Distance [5.139874302398955]
Models of discrete-valued outcomes are easily misspecified if the data exhibit zero-inflation, overdispersion or contamination.
Here, we introduce a robust discrepancy-based Bayesian approach using the Total Variation Distance (TVD).
We empirically demonstrate that our approach is robust and significantly improves predictive performance on a range of simulated and real world data.
arXiv Detail & Related papers (2020-10-26T09:53:06Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - Stable Prediction via Leveraging Seed Variable [73.9770220107874]
Previous machine learning methods might exploit subtle spurious correlations in training data induced by non-causal variables for prediction.
We propose a conditional independence test based algorithm that separates causal variables using a seed variable as prior knowledge, and adopts them for stable prediction.
Our algorithm outperforms state-of-the-art methods for stable prediction.
arXiv Detail & Related papers (2020-06-09T06:56:31Z) - Balance-Subsampled Stable Prediction [55.13512328954456]
We propose a novel balance-subsampled stable prediction (BSSP) algorithm based on the theory of fractional factorial design.
A design-theoretic analysis shows that the proposed method can reduce the confounding effects among predictors induced by the distribution shift.
Numerical experiments on both synthetic and real-world data sets demonstrate that our BSSP algorithm significantly outperforms the baseline methods for stable prediction across unknown test data.
arXiv Detail & Related papers (2020-06-08T07:01:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.