Online Selective Conformal Prediction: Errors and Solutions
- URL: http://arxiv.org/abs/2503.16809v1
- Date: Fri, 21 Mar 2025 02:37:28 GMT
- Title: Online Selective Conformal Prediction: Errors and Solutions
- Authors: Yusuf Sale, Aaditya Ramdas
- Abstract summary: We evaluate existing calibration selection strategies and pinpoint some fundamental errors in the associated claims. We then propose novel calibration selection strategies and demonstrate that online selective conformal inference with these strategies guarantees both selection-conditional coverage and FCR control.
- Score: 29.43493007296859
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In online selective conformal inference, data arrives sequentially, and prediction intervals are constructed only when an online selection rule is met. Since online selections may break the exchangeability between the selected test datum and the rest of the data, one must correct for this by suitably selecting the calibration data. In this paper, we evaluate existing calibration selection strategies and pinpoint some fundamental errors in the associated claims that guarantee selection-conditional coverage and control of the false coverage rate (FCR). To address these shortcomings, we propose novel calibration selection strategies that provably preserve the exchangeability of the calibration data and the selected test datum. Consequently, we demonstrate that online selective conformal inference with these strategies guarantees both selection-conditional coverage and FCR control. Our theoretical findings are supported by experimental evidence examining tradeoffs between valid methods.
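As a concrete illustration of the setup (not the paper's own algorithm), the sketch below runs split conformal prediction on a data stream where an interval is reported only when a selection rule fires. Because the rule here depends on x alone, restricting the calibration set to previously selected labeled points plausibly preserves exchangeability; the paper's contribution concerns the subtler case where selections depend on past data. The model `predict`, the rule `selected`, and the threshold are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)

def predict(x):
    # Stand-in point predictor (assumed pre-trained); any fixed model works.
    return 2.0 * x

def selected(x, tau=1.0):
    # Selection rule depending on x only: report an interval when the
    # prediction is large. With such a rule, past selected points remain
    # exchangeable with a selected test point.
    return predict(x) > tau

alpha = 0.1          # target miscoverage among selected points
cal_scores = []      # nonconformity scores of past *selected* labeled points
hits, total = 0, 0

for t in range(5000):
    x = rng.normal()
    y = 2.0 * x + rng.normal()          # toy data-generating process
    if selected(x):
        if len(cal_scores) >= 20:       # need some calibration data first
            # Conformal quantile over the selected calibration scores.
            k = int(np.ceil((1 - alpha) * (len(cal_scores) + 1)))
            q = np.sort(cal_scores)[min(k, len(cal_scores)) - 1]
            lo, hi = predict(x) - q, predict(x) + q
            hits += (lo <= y <= hi)
            total += 1
        cal_scores.append(abs(y - predict(x)))  # label revealed, recycled as calibration

print(f"coverage among selected points: {hits / total:.3f} (target {1 - alpha:.2f})")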
Related papers
- Online Conformal Probabilistic Numerics via Adaptive Edge-Cloud Offloading [52.499838151272016]
This work introduces a new method to calibrate the uncertainty sets produced by PLS with the aim of guaranteeing long-term coverage requirements.
The proposed method, referred to as online conformal prediction-PLS (OCP-PLS), assumes sporadic feedback from cloud to edge.
The validity of OCP-PLS is verified via experiments that bring insights into trade-offs between coverage, prediction set size, and cloud usage.
arXiv Detail & Related papers (2025-03-18T17:30:26Z)
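The abstract does not specify the OCP-PLS update, so the following is only a generic sketch of the sporadic-feedback pattern it describes: an edge device holds a conformal threshold fixed and applies adaptive (ACI-style) corrections whenever a batch of delayed cloud feedback arrives. The feedback probability, step size, and score distribution are invented for illustration.
```python
import numpy as np

rng = np.random.default_rng(1)

alpha, eta = 0.1, 0.05   # target miscoverage, update step size
q = 1.0                  # current conformal threshold held at the edge
miss_hist, buffer = [], []

for t in range(10000):
    score = abs(rng.normal())            # nonconformity score of the true label
    covered = score <= q
    miss_hist.append(not covered)
    buffer.append(not covered)           # feedback not yet delivered to the edge

    # Sporadic cloud->edge feedback: with small probability the buffered
    # miscoverage indicators arrive and trigger ACI-style threshold updates.
    if rng.random() < 0.05 and buffer:
        for miss in buffer:
            q += eta * (miss - alpha)    # widen after a miss, shrink otherwise
        buffer.clear()

print(f"long-run miscoverage: {np.mean(miss_hist):.3f} (target {alpha:.2f})")
```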
- Conformal Uncertainty Indicator for Continual Test-Time Adaptation [16.248749460383227]
We propose a Conformal Uncertainty Indicator (CUI) for Continual Test-Time Adaptation (CTTA).
We leverage Conformal Prediction (CP) to generate prediction sets that include the true label with a specified coverage probability.
Experiments confirm that CUI effectively estimates uncertainty and improves adaptation performance across various existing CTTA methods.
arXiv Detail & Related papers (2025-02-05T08:47:18Z)
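CUI builds on standard split conformal prediction sets; the sketch below shows only that ingredient (the continual test-time adaptation component is not reproduced). The simulated logits and labels are stand-ins for a real classifier and data.
```python
import numpy as np

rng = np.random.default_rng(2)
n_cal, n_classes, alpha = 1000, 10, 0.1

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in classifier outputs: random logits with a boost on the true class.
labels = rng.integers(n_classes, size=n_cal)
logits = rng.normal(size=(n_cal, n_classes))
logits[np.arange(n_cal), labels] += 2.0
probs = softmax(logits)

# Nonconformity score: 1 - probability assigned to the true class.
scores = 1.0 - probs[np.arange(n_cal), labels]
k = int(np.ceil((1 - alpha) * (n_cal + 1)))
qhat = np.sort(scores)[k - 1]            # conformal quantile

def prediction_set(test_logits):
    # Include every class whose score 1 - p falls below the threshold;
    # the set then contains the true label with probability >= 1 - alpha.
    p = softmax(test_logits)
    return np.where(1.0 - p <= qhat)[0]

print(prediction_set(rng.normal(size=n_classes)))
```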
- Adaptive Conformal Inference by Betting [51.272991377903274]
We consider the problem of adaptive conformal inference without any assumptions about the data generating process.
Existing approaches for adaptive conformal inference are based on optimizing the pinball loss using variants of online gradient descent.
We propose a different approach for adaptive conformal inference that leverages parameter-free online convex optimization techniques.
arXiv Detail & Related papers (2024-12-26T18:42:08Z)
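The baseline this paper improves on, adaptive conformal inference by online gradient descent on the pinball loss, fits in a few lines; the parameter-free betting approach is meant to remove the hand-tuned step size `eta` below. A minimal sketch on a synthetic score stream:
```python
import numpy as np

rng = np.random.default_rng(3)
alpha, eta = 0.1, 0.01
q = 1.0                       # running quantile estimate of the score distribution
misses = []

for t in range(20000):
    s = abs(rng.normal())     # nonconformity score revealed at time t
    misses.append(s > q)
    # Online (sub)gradient step on the pinball loss
    #   rho(u) = (1-alpha)*max(u, 0) + alpha*max(-u, 0) with u = s - q;
    # its subgradient in q is alpha - 1{s > q}.
    q -= eta * (alpha - float(s > q))

print(f"empirical miscoverage: {np.mean(misses):.3f} (target {alpha:.2f})")
```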
- CAP: A General Algorithm for Online Selective Conformal Prediction with FCR Control [4.137346786534721]
It is important to control the real-time false coverage-statement rate (FCR), which measures the overall miscoverage level.
We develop a general framework named CAP that applies an adaptive pick rule to historical data to construct a calibration set.
We prove that CAP achieves an exact selection-conditional coverage guarantee in the finite-sample and distribution-free regimes.
arXiv Detail & Related papers (2024-03-12T15:07:20Z)
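For reference, the realized false coverage proportion, whose expectation is the FCR being controlled, can be computed directly from the reported intervals. A minimal helper (the name and signature are ours, not CAP's API):
```python
import numpy as np

def false_coverage_proportion(intervals, outcomes):
    """Realized FCP: among the R reported intervals, the fraction that miss.

    `intervals` is a list of (lo, hi) pairs for the *selected* test points and
    `outcomes` holds the corresponding true responses; the FCR is the
    expectation of this quantity over the data stream.
    """
    if not intervals:
        return 0.0
    misses = sum(not (lo <= y <= hi) for (lo, hi), y in zip(intervals, outcomes))
    return misses / len(intervals)

# Toy usage: three reported intervals, one of which misses its outcome.
print(false_coverage_proportion([(0, 1), (2, 5), (1, 2)], [0.5, 6.0, 1.5]))  # 1/3
```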
- Confidence on the Focal: Conformal Prediction with Selection-Conditional Coverage [6.010965256037659]
Conformal prediction builds marginally valid prediction intervals that cover the unknown outcome of a randomly drawn new test point with a prescribed probability.
When the focal unit(s) of interest are selected based on the data, however, marginally valid conformal prediction intervals may not provide valid coverage due to selection bias.
This paper presents a general framework for constructing a prediction set with finite-sample exact coverage conditional on the unit being selected.
arXiv Detail & Related papers (2024-03-06T17:18:24Z)
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, enforcing accurate loss estimation and no-regret decisions.
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
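One concrete member of the family such metrics unify is the squared kernel calibration error (SKCE) of Widmann et al.; a minimal NumPy estimate with a Gaussian kernel on the predicted probability vectors is sketched below. The paper's metrics are more general; this is only an illustrative special case on synthetic data.
```python
import numpy as np

def skce(probs, labels, gamma=1.0):
    """Unbiased squared kernel calibration error for multi-class predictions.

    Gaussian kernel on predicted probability vectors times the inner product
    of the residuals one_hot(y) - p. Near zero in expectation when the model
    is calibrated; differentiable in `probs`, hence usable as a training term.
    """
    n, c = probs.shape
    resid = np.eye(c)[labels] - probs                     # (n, c) residuals
    d2 = ((probs[:, None, :] - probs[None, :, :]) ** 2).sum(-1)
    k = np.exp(-gamma * d2)                               # kernel on predictions
    h = k * (resid @ resid.T)                             # pairwise terms
    return (h.sum() - np.trace(h)) / (n * (n - 1))        # drop i == j terms

rng = np.random.default_rng(4)
labels = rng.integers(3, size=200)                # roughly uniform labels
probs = np.full((200, 3), 1 / 3)                  # uniform predictions: calibrated
print(skce(probs, labels))                        # near zero
```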
- Variance-Reduced Heterogeneous Federated Learning via Stratified Client Selection [31.401919362978017]
We propose a novel stratified client selection scheme to reduce the variance for the pursuit of better convergence and higher accuracy.
We present an optimized sample size allocation scheme that accounts for the variability within each stratum.
Experimental results confirm that our approach not only outperforms state-of-the-art methods but is also compatible with prevalent FL algorithms.
arXiv Detail & Related papers (2022-01-15T05:41:36Z)
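The variance-reduction idea can be illustrated with classical stratified sampling: allocate the per-round client budget across strata by Neyman allocation and sample uniformly within each stratum. How the paper forms strata and estimates variability is not reproduced here; the sizes, standard deviations, and budget below are made up.
```python
import numpy as np

rng = np.random.default_rng(5)

def stratified_selection(strata_sizes, strata_stds, budget):
    """Allocate a per-round client budget across strata (Neyman allocation:
    n_h proportional to N_h * sigma_h), then sample uniformly inside each
    stratum. Returns a list of (stratum, local client index) pairs."""
    w = np.array(strata_sizes) * np.array(strata_stds)
    alloc = np.floor(budget * w / w.sum()).astype(int)
    alloc[: budget - alloc.sum()] += 1     # distribute the rounding remainder
    picks = []
    for h, (n_h, N_h) in enumerate(zip(alloc, strata_sizes)):
        chosen = rng.choice(N_h, size=min(n_h, N_h), replace=False)
        picks += [(h, int(i)) for i in chosen]
    return picks

# Toy round: three strata of clients with differing variability.
print(stratified_selection(strata_sizes=[40, 30, 30],
                           strata_stds=[0.2, 1.0, 0.5], budget=10))
```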
- Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration [118.26862029820447]
We introduce a new notion, decision calibration, which requires the predicted distribution and the true distribution to be "indistinguishable" to a set of downstream decision-makers.
Decision calibration improves decision-making on skin lesion and ImageNet classification with modern neural networks.
arXiv Detail & Related papers (2021-07-12T20:17:28Z)
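A minimal way to probe decision calibration for a single downstream decision-maker: compare the loss it anticipates under the predicted distribution with the loss it actually incurs when acting greedily on the predictions. The paper's definition requires this gap to vanish for a whole set of decision-makers; the loss matrix and data here are synthetic.
```python
import numpy as np

rng = np.random.default_rng(6)

def decision_calibration_gap(probs, labels, loss):
    """For a decision-maker with loss matrix loss[action, class], compare the
    loss it *anticipates* from the predicted distribution with the loss it
    actually incurs when acting greedily on those predictions."""
    exp_loss = probs @ loss.T                  # (n, actions) anticipated losses
    act = exp_loss.argmin(axis=1)              # greedy action per example
    anticipated = exp_loss[np.arange(len(act)), act].mean()
    realized = loss[act, labels].mean()
    return realized - anticipated

labels = rng.integers(3, size=5000)
probs = np.full((5000, 3), 1 / 3)                    # calibrated for uniform labels
loss = rng.random((4, 3))                            # 4 actions, 3 classes
print(decision_calibration_gap(probs, labels, loss))  # near zero
```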
- Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
arXiv Detail & Related papers (2020-06-29T21:50:07Z)
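One importance-sampling treatment (a plausible reading, not necessarily the paper's exact estimator) reweights labeled source-domain examples by the density ratio p_target(x)/p_source(x) when estimating a calibration error such as ECE. The weights below are stand-ins; in practice the ratio must itself be estimated, for example from unlabeled target data.
```python
import numpy as np

def weighted_ece(conf, correct, weights, n_bins=10):
    """Expected calibration error with importance weights w(x) ~ p_target/p_source,
    estimating target-domain calibration from labeled source-domain data."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, wsum = 0.0, weights.sum()
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (conf > lo) & (conf <= hi)
        if weights[m].sum() > 0:
            acc = np.average(correct[m], weights=weights[m])
            avg_conf = np.average(conf[m], weights=weights[m])
            ece += weights[m].sum() / wsum * abs(acc - avg_conf)
    return ece

rng = np.random.default_rng(7)
conf = rng.uniform(0.5, 1.0, size=2000)
correct = (rng.random(2000) < conf).astype(float)   # perfectly calibrated source
w = rng.uniform(0.5, 2.0, size=2000)                # stand-in density ratios
print(weighted_ece(conf, correct, w))               # small: calibration carries over
```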
- Calibration of Neural Networks using Splines [51.42640515410253]
Measuring calibration error amounts to comparing two empirical distributions.
We introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test.
Our method consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
arXiv Detail & Related papers (2020-06-23T07:18:05Z)
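The KS-style measure can be computed without histogram bins by sorting predictions by confidence and taking the largest gap between cumulative accuracy and cumulative confidence. A sketch of that statistic (our reconstruction, not the authors' reference code):
```python
import numpy as np

def ks_calibration_error(conf, correct):
    """Binning-free KS calibration measure: sort by confidence and take the
    maximum gap between cumulative accuracy and cumulative confidence, in the
    spirit of the Kolmogorov-Smirnov test."""
    order = np.argsort(conf)
    gap = np.cumsum(correct[order] - conf[order]) / len(conf)
    return np.abs(gap).max()

rng = np.random.default_rng(8)
conf = rng.uniform(0.5, 1.0, 5000)
calibrated = (rng.random(5000) < conf).astype(float)
overconfident = (rng.random(5000) < conf - 0.2).astype(float)
print(ks_calibration_error(conf, calibrated))     # near 0
print(ks_calibration_error(conf, overconfident))  # near 0.2
```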
- Structure-Adaptive Sequential Testing for Online False Discovery Rate Control [1.456699007803424]
This work develops a new class of structure-adaptive sequential testing (SAST) rules for online false discovery rate (FDR) control.
A key element in our proposal is a new alpha-investment algorithm that precisely characterizes the gains and losses in sequential decision making.
arXiv Detail & Related papers (2020-02-28T23:16:44Z)
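For context, the classic alpha-investing scheme of Foster and Stine, which SAST-style rules refine, fits in a few lines: each test spends part of an alpha-wealth, a discovery earns wealth back, and a non-discovery depletes it. The spending rule and parameters below are illustrative choices, not the paper's.
```python
import numpy as np

def alpha_investing(pvals, w0=0.025, omega=0.025):
    """Classic alpha-investing for online FDR control: test j spends alpha_j
    of the wealth; a discovery (p_j <= alpha_j) earns `omega` back, while a
    non-discovery costs alpha_j / (1 - alpha_j)."""
    w, rejections = w0, []
    for p in pvals:
        a = 0.5 * w / (1 + w)          # spend half the allowed budget
        reject = p <= a
        w += omega if reject else -a / (1 - a)
        rejections.append(reject)
        if w <= 0:                     # wealth exhausted: stop testing
            break
    return rejections

rng = np.random.default_rng(9)
pvals = np.concatenate([rng.uniform(0, 0.001, 5),   # 5 true signals first
                        rng.uniform(0, 1, 50)])     # then nulls
print(sum(alpha_investing(pvals)), "discoveries")
```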