Optimal training-conditional regret for online conformal prediction
- URL: http://arxiv.org/abs/2602.16537v1
- Date: Wed, 18 Feb 2026 15:31:15 GMT
- Title: Optimal training-conditional regret for online conformal prediction
- Authors: Jiadong Liang, Zhimei Ren, Yuxin Chen
- Abstract summary: We study online conformal prediction for non-stationary data streams subject to unknown distribution drift. We specifically focus on independently generated data with two types of distribution shift: abrupt change points and smooth drift. We establish non-asymptotic regret guarantees for our online full conformal algorithm, which match the minimax lower bound under appropriate restrictions on the prediction sets.
- Score: 20.643619398558315
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study online conformal prediction for non-stationary data streams subject to unknown distribution drift. While most prior work studied this problem under adversarial settings and/or assessed performance in terms of gaps of time-averaged marginal coverage, we instead evaluate performance through training-conditional cumulative regret. We specifically focus on independently generated data with two types of distribution shift: abrupt change points and smooth drift. When non-conformity score functions are pretrained on an independent dataset, we propose a split-conformal style algorithm that leverages drift detection to adaptively update calibration sets, which provably achieves minimax-optimal regret. When non-conformity scores are instead trained online, we develop a full-conformal style algorithm that again incorporates drift detection to handle non-stationarity; this approach relies on stability - rather than permutation symmetry - of the model-fitting algorithm, which is often better suited to online learning under evolving environments. We establish non-asymptotic regret guarantees for our online full conformal algorithm, which match the minimax lower bound under appropriate restrictions on the prediction sets. Numerical experiments corroborate our theoretical findings.
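The abstract's split-conformal approach can be illustrated with a minimal sketch. The standard split-conformal quantile rule below is the textbook construction; the sliding window and the mean-shift reset are hypothetical stand-ins for the paper's drift-detection mechanism, not its actual algorithm.

```python
import numpy as np

def split_conformal_threshold(cal_scores, alpha):
    """Standard split-conformal rule: the ceil((1-alpha)(n+1))-th smallest
    calibration score defines the level-(1-alpha) prediction set."""
    n = len(cal_scores)
    k = min(int(np.ceil((1 - alpha) * (n + 1))), n)
    return np.sort(np.asarray(cal_scores))[k - 1]

def online_split_conformal(scores, alpha=0.1, window=200, drift_z=3.0):
    """Process a stream of nonconformity scores, resetting the calibration
    set when a crude standardized mean-shift statistic exceeds drift_z
    (a toy trigger standing in for the paper's drift detector)."""
    cal, thresholds = [], []
    for s in scores:
        if len(cal) >= 20:
            mu, sd = np.mean(cal), np.std(cal) + 1e-8
            if abs(s - mu) / sd > drift_z:
                cal = []  # abrupt change suspected: drop stale scores
        cal.append(s)
        cal = cal[-window:]  # sliding window bounds staleness under smooth drift
        thresholds.append(split_conformal_threshold(cal, alpha))
    return thresholds
```

The window caps how long outdated scores influence the threshold under smooth drift, while the reset handles abrupt change points; the paper's detector and regret guarantees are, of course, considerably more refined.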
Related papers
- Online Bayesian Imbalanced Learning with Bregman-Calibrated Deep Networks [0.7106986689736825]
We present Online Bayesian Imbalanced Learning (OBIL), a principled framework that decouples likelihood-ratio estimation from class-prior assumptions. Our approach builds on the established connection between Bregman divergences and proper scoring rules to show that deep networks trained with such losses produce posterior probability estimates. We prove that these likelihood-ratio estimates remain valid under arbitrary changes in class priors and cost structures, requiring only a threshold adjustment for optimal Bayes decisions.
arXiv Detail & Related papers (2026-02-08T21:23:00Z) - Learning Shortest Paths When Data is Scarce [3.3012620893449465]
We study a shortest-path problem in which a planner has access to abundant synthetic samples, limited real-world observations, and an edge-similarity graph capturing expected behavioral similarity across links. We model the simulator-to-reality discrepancy as an unknown, edge-specific bias that varies smoothly over the similarity graph, and estimate it using Laplacian-regularized least squares. For cold-start settings without initial real data, we develop a bias-aware active learning algorithm that adaptively selects edges to measure until a prescribed accuracy is met.
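Laplacian-regularized least squares, as named in the blurb above, admits a compact closed form: penalizing b^T L b encourages the bias estimate to vary smoothly over the similarity graph. The function below is an illustrative reading of the abstract with a selection matrix S over observed edges, not the paper's exact estimator.

```python
import numpy as np

def laplacian_regularized_bias(L, observed_idx, y, lam=1.0):
    """Estimate a smooth edge-wise bias b from noisy observations y on a
    subset of edges by solving (S^T S + lam*L) b = S^T y, where S selects
    the observed edges and L is the similarity-graph Laplacian."""
    m = L.shape[0]
    StS = np.zeros((m, m))
    Sty = np.zeros(m)
    for i, yi in zip(observed_idx, y):
        StS[i, i] += 1.0  # S^T S is diagonal with observation counts
        Sty[i] += yi
    # Small jitter keeps the system solvable when L is singular and
    # some edges are unobserved.
    return np.linalg.solve(StS + lam * L + 1e-9 * np.eye(m), Sty)
```

On a 3-edge path graph with the two endpoint edges observed at value 1.0, the smoothness penalty propagates the estimate to the unobserved middle edge.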
arXiv Detail & Related papers (2026-01-07T06:19:04Z) - Uncertainty Guided Online Ensemble for Non-stationary Data Streams in Fusion Science [3.6886967341942785]
We present an application of online learning that continuously adapts to a drifting data stream for predicting Toroidal Field coil deflection. Traditional online learning can suffer from short-term performance degradation, as ground truth is not available before making predictions. We propose an uncertainty-guided online ensemble method to further improve performance.
arXiv Detail & Related papers (2025-11-03T22:03:37Z) - Backpropagation-Free Test-Time Adaptation via Probabilistic Gaussian Alignment [16.352863226512984]
Test-time adaptation (TTA) enhances zero-shot robustness under distribution shifts by leveraging unlabeled test data during inference. Most methods rely on backpropagation or iterative optimization, which limits scalability and hinders real-time deployment. We propose ADAPT, an Advanced Distribution-Aware and backpropagation-free Test-time adaptation method.
arXiv Detail & Related papers (2025-08-21T13:42:49Z) - Adaptive Conformal Inference by Betting [51.272991377903274]
We consider the problem of adaptive conformal inference without any assumptions about the data generating process. Existing approaches for adaptive conformal inference are based on optimizing the pinball loss using variants of online gradient descent. We propose a different approach for adaptive conformal inference that leverages parameter-free online convex optimization techniques.
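The online-gradient-descent baseline this blurb refers to has a one-line update: after each round, the working miscoverage level moves along the pinball-loss gradient, rising after a covered round and falling after a miss. A minimal sketch in the style of Gibbs and Candes' adaptive conformal inference (the step size gamma is a tuning parameter, which is exactly the dependence the parameter-free approach above seeks to remove):

```python
def aci_update(alpha_t, covered, target_alpha=0.1, gamma=0.01):
    """One online-gradient-descent step on the pinball loss: err is the
    miscoverage indicator for this round, and alpha_t is nudged toward
    keeping the long-run miscoverage rate at target_alpha."""
    err = 0.0 if covered else 1.0
    return alpha_t + gamma * (target_alpha - err)
```

When coverage holds, alpha_t grows slightly (intervals may shrink); after a miss, alpha_t drops more sharply (intervals widen), since target_alpha - 1 is large in magnitude relative to target_alpha.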
arXiv Detail & Related papers (2024-12-26T18:42:08Z) - Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
arXiv Detail & Related papers (2023-02-23T18:57:14Z) - Improved Online Conformal Prediction via Strongly Adaptive Online
Learning [86.4346936885507]
We develop new online conformal prediction methods that minimize the strongly adaptive regret.
We prove that our methods achieve near-optimal strongly adaptive regret for all interval lengths simultaneously.
Experiments show that our methods consistently obtain better coverage and smaller prediction sets than existing methods on real-world tasks.
arXiv Detail & Related papers (2023-02-15T18:59:30Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address this challenge by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z) - Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity [51.476337785345436]
We study a pessimistic variant of Q-learning in the context of finite-horizon Markov decision processes.
A variance-reduced pessimistic Q-learning algorithm is proposed to achieve near-optimal sample complexity.
arXiv Detail & Related papers (2022-02-28T15:39:36Z) - Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z) - Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z) - Model adaptation and unsupervised learning with non-stationary batch data under smooth concept drift [8.068725688880772]
Most predictive models assume that training and test data are generated from a stationary process.
We consider the scenario of a gradual concept drift due to the underlying non-stationarity of the data source.
We propose a novel, iterative algorithm for unsupervised adaptation of predictive models.
arXiv Detail & Related papers (2020-02-10T21:29:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.