Improved Online Conformal Prediction via Strongly Adaptive Online
Learning
- URL: http://arxiv.org/abs/2302.07869v1
- Date: Wed, 15 Feb 2023 18:59:30 GMT
- Title: Improved Online Conformal Prediction via Strongly Adaptive Online
Learning
- Authors: Aadyot Bhatnagar, Huan Wang, Caiming Xiong, Yu Bai
- Abstract summary: We develop new online conformal prediction methods that minimize the strongly adaptive regret.
We prove that our methods achieve near-optimal strongly adaptive regret for all interval lengths simultaneously.
Experiments show that our methods consistently obtain better coverage and smaller prediction sets than existing methods on real-world tasks.
- Score: 86.4346936885507
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of uncertainty quantification via prediction sets, in an
online setting where the data distribution may vary arbitrarily over time.
Recent work develops online conformal prediction techniques that leverage
regret minimization algorithms from the online learning literature to learn
prediction sets with approximately valid coverage and small regret. However,
standard regret minimization could be insufficient for handling changing
environments, where performance guarantees may be desired not only over the
full time horizon but also in all (sub-)intervals of time. We develop new
online conformal prediction methods that minimize the strongly adaptive regret,
which measures the worst-case regret over all intervals of a fixed length. We
prove that our methods achieve near-optimal strongly adaptive regret for all
interval lengths simultaneously, and approximately valid coverage. Experiments
show that our methods consistently obtain better coverage and smaller
prediction sets than existing methods on real-world tasks, such as time series
forecasting and image classification under distribution shift.
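The abstract's recipe, learning prediction sets online by running a regret-minimization update on the set radius, can be illustrated with a minimal sketch. The code below performs online gradient descent on the pinball (quantile) loss, in the spirit of adaptive conformal inference; it is NOT the paper's strongly adaptive method, and the function names and toy score stream are illustrative assumptions.

```python
import random

def pinball_grad(s, q, alpha):
    """Gradient w.r.t. q of the pinball loss at level (1 - alpha)."""
    return -(1 - alpha) if s > q else alpha

def online_conformal(scores, alpha=0.1, lr=0.05):
    """Track a radius q_t so that roughly a (1 - alpha) fraction of
    conformity scores fall inside the set {y : score(y) <= q_t}."""
    q = 0.0
    covered = 0
    for s in scores:
        covered += s <= q                    # was the true score covered?
        q -= lr * pinball_grad(s, q, alpha)  # gradient step on pinball loss
    return q, covered / len(scores)

# Toy stream of conformity scores with a distribution shift halfway through.
random.seed(0)
scores = [random.random() for _ in range(500)] + \
         [1 + random.random() for _ in range(500)]
q_final, cov = online_conformal(scores, alpha=0.1)
```

Standard regret minimization of this kind adapts to a single shift but is measured over the full horizon; the paper's contribution is to control the regret over every sub-interval simultaneously (strongly adaptive regret), which matters when the distribution keeps changing.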
Related papers
- Adaptive Conformal Inference for Multi-Step Ahead Time-Series Forecasting Online [0.0]
We propose an adaptation of the adaptive conformal inference algorithm to achieve finite-sample coverage guarantees.
Our multi-step ahead ACI procedure inherits these guarantees at each prediction step, as well as for the overall error rate.
arXiv Detail & Related papers (2024-09-23T08:07:49Z)
- Online Feature Updates Improve Online (Generalized) Label Shift Adaptation [51.328801874640675]
Our novel method, Online Label Shift adaptation with Online Feature Updates (OLS-OFU), leverages self-supervised learning to refine the feature extraction process.
By carefully designing the algorithm, OLS-OFU maintains online regret convergence similar to prior results in the literature while accounting for the improved features.
arXiv Detail & Related papers (2024-02-05T22:03:25Z)
- Constrained Online Two-stage Stochastic Optimization: Algorithm with (and without) Predictions [19.537289123577022]
We consider an online two-stage optimization with long-term constraints over a finite horizon of $T$ periods.
We develop online algorithms for the online two-stage problem from adversarial learning algorithms.
arXiv Detail & Related papers (2024-01-02T07:46:33Z)
- A Universal Error Measure for Input Predictions Applied to Online Graph Problems [57.58926849872494]
We introduce a novel measure for quantifying the error in input predictions.
The measure captures errors due to absent predicted requests as well as unpredicted actual requests.
arXiv Detail & Related papers (2022-05-25T15:24:03Z)
- Conformalized Online Learning: Online Calibration Without a Holdout Set [10.420394952839242]
We develop a framework for constructing uncertainty sets with a valid coverage guarantee in an online setting.
We show how to construct valid intervals for a multiple-output regression problem.
arXiv Detail & Related papers (2022-05-18T17:41:37Z)
- Continual Test-Time Domain Adaptation [94.51284735268597]
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain without using any source data.
The proposed method, CoTTA (Continual Test-Time Adaptation), is easy to implement and can be readily incorporated into off-the-shelf pre-trained models.
arXiv Detail & Related papers (2022-03-25T11:42:02Z)
- Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
- Leveraging Predictions in Smoothed Online Convex Optimization via Gradient-based Algorithms [18.64335888217192]
We consider online convex optimization with time-varying stage costs and additional switching costs.
Since the switching costs introduce coupling across all stages, long-term predictions tend to suffer from lower quality.
We introduce a gradient-based online algorithm, Receding Horizon Inexact Gradient (RHIG), and analyze its performance via dynamic regret.
arXiv Detail & Related papers (2020-11-25T06:25:51Z)
- Minimizing Dynamic Regret and Adaptive Regret Simultaneously [60.17824125301273]
We propose novel online algorithms that are able to minimize the dynamic regret and adaptive regret simultaneously.
Our theoretical guarantee is even stronger in the sense that one algorithm is able to minimize the dynamic regret over any interval.
arXiv Detail & Related papers (2020-02-06T03:32:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.