Conformal Inference for Online Prediction with Arbitrary Distribution Shifts
- URL: http://arxiv.org/abs/2208.08401v3
- Date: Thu, 5 Oct 2023 22:10:05 GMT
- Title: Conformal Inference for Online Prediction with Arbitrary Distribution Shifts
- Authors: Isaac Gibbs and Emmanuel Candès
- Abstract summary: We consider the problem of forming prediction sets in an online setting where the distribution generating the data is allowed to vary over time.
We develop a novel procedure with provably small regret over all local time intervals of a given width.
We test our techniques on two real-world datasets aimed at predicting stock market volatility and COVID-19 case counts.
- Score: 1.2277343096128712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of forming prediction sets in an online setting where
the distribution generating the data is allowed to vary over time. Previous
approaches to this problem suffer from over-weighting historical data and thus
may fail to quickly react to the underlying dynamics. Here we correct this
issue and develop a novel procedure with provably small regret over all local
time intervals of a given width. We achieve this by modifying the adaptive
conformal inference (ACI) algorithm of Gibbs and Candès (2021) to contain
an additional step in which the step-size parameter of ACI's gradient descent
update is tuned over time. Crucially, this means that unlike ACI, which
requires knowledge of the rate of change of the data-generating mechanism, our
new procedure is adaptive to both the size and type of the distribution shift.
Our methods are highly flexible and can be used in combination with any
baseline predictive algorithm that produces point estimates or estimated
quantiles of the target without the need for distributional assumptions. We
test our techniques on two real-world datasets aimed at predicting stock market
volatility and COVID-19 case counts and find that they are robust and adaptive
to real-world distribution shifts.
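The ACI update the abstract builds on is simple to sketch. The following is a minimal illustration (not the authors' implementation) of the gradient-style update on the working miscoverage level; the step size `gamma` is held fixed here, whereas the paper's contribution is tuning it online:

```python
import numpy as np

def aci_step(alpha_t, miscovered, target_alpha, gamma):
    """One ACI update: raise alpha after coverage, lower it after a miss.

    alpha_t      -- current working miscoverage level
    miscovered   -- 1 if y_t fell outside the prediction set, else 0
    target_alpha -- desired long-run miscoverage rate (e.g. 0.1)
    gamma        -- step size (the quantity the paper tunes over time)
    """
    return alpha_t + gamma * (target_alpha - miscovered)

# Toy run on synthetic conformity scores: the prediction "set" at time t
# is everything below the (1 - alpha) empirical quantile of past scores.
rng = np.random.default_rng(0)
scores = rng.normal(size=1000)
alpha, target, gamma = 0.1, 0.1, 0.005
for t in range(200, 1000):
    q = np.quantile(scores[:t], 1 - np.clip(alpha, 0.0, 1.0))
    err = int(scores[t] > q)          # 1 on miscoverage
    alpha = aci_step(alpha, err, target, gamma)
```

After a miss the level shrinks (sets widen); after coverage it grows (sets tighten), so the empirical miscoverage rate is driven toward `target_alpha`.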
Related papers
- Adaptive Conformal Inference for Multi-Step Ahead Time-Series Forecasting Online [0.0]
We propose an adaptation of the adaptive conformal inference algorithm to achieve finite-sample coverage guarantees.
Our multi-step ahead ACI procedure inherits these guarantees at each prediction step, as well as for the overall error rate.
arXiv Detail & Related papers (2024-09-23T08:07:49Z)
- Source-Free Unsupervised Domain Adaptation with Hypothesis Consolidation of Prediction Rationale [53.152460508207184]
Source-Free Unsupervised Domain Adaptation (SFUDA) is a challenging task where a model needs to be adapted to a new domain without access to target domain labels or source domain data.
This paper proposes a novel approach that considers multiple prediction hypotheses for each sample and investigates the rationale behind each hypothesis.
To achieve the optimal performance, we propose a three-step adaptation process: model pre-adaptation, hypothesis consolidation, and semi-supervised learning.
arXiv Detail & Related papers (2024-02-02T05:53:22Z)
- Distributed Variational Inference for Online Supervised Learning [15.038649101409804]
This paper develops a scalable distributed probabilistic inference algorithm.
It applies to continuous variables, intractable posteriors and large-scale real-time data in sensor networks.
arXiv Detail & Related papers (2023-09-05T22:33:02Z)
- Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651]
In Online Continual Learning (OCL) a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
In experiments in multi-class classification we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
arXiv Detail & Related papers (2023-06-14T11:41:42Z)
- Adapting to Continuous Covariate Shift via Online Density Ratio Estimation [64.8027122329609]
Dealing with distribution shifts is one of the central challenges for modern machine learning.
We propose an online method that can appropriately reuse historical information.
Our density ratio estimation method is proven to perform well, enjoying a dynamic regret bound.
arXiv Detail & Related papers (2023-02-06T04:03:33Z)
- Reliable amortized variational inference with physics-based latent distribution correction [0.4588028371034407]
A neural network is trained to approximate the posterior distribution over existing pairs of model and data.
The accuracy of this approach relies on the availability of high-fidelity training data.
We show that our correction step improves the robustness of amortized variational inference with respect to changes in number of source experiments, noise variance, and shifts in the prior distribution.
arXiv Detail & Related papers (2022-07-24T02:38:54Z)
- Training on Test Data with Bayesian Adaptation for Covariate Shift [96.3250517412545]
Deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.
We derive a Bayesian model that provides for a well-defined relationship between unlabeled inputs under distributional shift and model parameters.
We show that our method improves both accuracy and uncertainty estimation.
arXiv Detail & Related papers (2021-09-27T01:09:08Z)
- Adaptive Conformal Inference Under Distribution Shift [0.0]
We develop methods for forming prediction sets in an online setting where the data generating distribution is allowed to vary over time in an unknown fashion.
Our framework builds on ideas from conformal inference to provide a general wrapper that can be combined with any black box method.
We test our method, adaptive conformal inference, on two real world datasets and find that its predictions are robust to visible and significant distribution shifts.
arXiv Detail & Related papers (2021-06-01T01:37:32Z)
- Retrain or not retrain: Conformal test martingales for change-point detection [0.34635278365524663]
We argue for supplementing the process of training a prediction algorithm by setting up a scheme for detecting the moment when the distribution of the data changes.
Our proposed schemes are based on exchangeability martingales, i.e., processes that are martingales under any exchangeable distribution for the data.
arXiv Detail & Related papers (2021-02-20T20:39:05Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We study a method we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.