Anytime-Valid Conformal Risk Control
- URL: http://arxiv.org/abs/2602.04364v1
- Date: Wed, 04 Feb 2026 09:39:36 GMT
- Title: Anytime-Valid Conformal Risk Control
- Authors: Bror Hultberg, Dave Zachariah, Antônio H. Ribeiro
- Abstract summary: Conformal prediction and risk control can produce prediction sets that exhibit statistically valid error control in a computationally efficient manner. We extend the control to remain valid with high probability over a cumulatively growing calibration dataset at any time point.
- Score: 9.475553038511336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prediction sets provide a means of quantifying the uncertainty in predictive tasks. Using held-out calibration data, conformal prediction and risk control can produce prediction sets that exhibit statistically valid error control in a computationally efficient manner. However, in the standard formulations, the error is only controlled on average over many possible calibration datasets of fixed size. In this paper, we extend the control to remain valid with high probability over a cumulatively growing calibration dataset at any time point. We derive such guarantees using quantile-based arguments and illustrate the applicability of the proposed framework to settings involving distribution shift. We further establish a matching lower bound and show that our guarantees are asymptotically tight. Finally, we demonstrate the practical performance of our methods through both simulations and real-world numerical examples.
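For context, the fixed-size guarantee that the paper strengthens is the standard split conformal one. The sketch below (ours, with illustrative names) computes the finite-sample-corrected quantile of held-out residuals; the resulting intervals cover with probability at least 1 - alpha, but only on average over calibration sets of a fixed size, which is exactly the limitation the anytime-valid extension addresses.

```python
import numpy as np

rng = np.random.default_rng(0)
cal_residuals = np.abs(rng.normal(size=500))  # |y_i - f(x_i)| on held-out calibration data
test_preds = rng.normal(size=10)              # f(x) on test points

def split_conformal_interval(cal_residuals, test_preds, alpha=0.1):
    # Standard split conformal prediction: a minimal sketch, not the
    # paper's anytime-valid method. Coverage >= 1 - alpha holds on
    # average over calibration sets of fixed size n.
    n = len(cal_residuals)
    # Finite-sample corrected quantile level: ceil((n + 1)(1 - alpha)) / n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(cal_residuals, level, method="higher")
    return test_preds - q, test_preds + q

lo, hi = split_conformal_interval(cal_residuals, test_preds)
```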
Related papers
- Non-exchangeable Conformal Prediction with Optimal Transport: Tackling Distribution Shifts with Unlabeled Data [13.788274786492286]
Conformal prediction is a distribution-free uncertainty quantification method that has gained popularity in the machine learning community. Its most common variant, dubbed split conformal prediction, is also computationally efficient as it boils down to collecting statistics of the model predictions on some calibration data not yet seen by the model. We show that it is possible to estimate the loss in coverage and mitigate arbitrary distribution shifts, offering a principled and broadly applicable solution.
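A common device in non-exchangeable conformal prediction is a weighted calibration quantile, where weights discount calibration points believed to differ from the test distribution. The sketch below shows only that generic device (our illustration); the paper's actual contribution of estimating the coverage loss via optimal transport on unlabeled data is not reproduced here.

```python
import numpy as np

def weighted_conformal_quantile(scores, weights, alpha=0.1):
    # Weighted (1 - alpha)-quantile of calibration scores, with the test
    # point's share of probability mass placed at +infinity, as is usual
    # in non-exchangeable conformal prediction.
    order = np.argsort(scores)
    s = scores[order]
    w = weights[order] / (weights.sum() + 1.0)  # +1: unit mass for the test point
    cum = np.cumsum(w)
    idx = np.searchsorted(cum, 1.0 - alpha)
    # If the finite calibration mass never reaches 1 - alpha, the
    # quantile is infinite and the prediction set is uninformative.
    return s[idx] if idx < len(s) else np.inf
```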
arXiv Detail & Related papers (2025-07-14T16:10:55Z)
- COIN: Uncertainty-Guarding Selective Question Answering for Foundation Models with Provable Risk Guarantees [51.5976496056012]
COIN is an uncertainty-guarding selection framework that calibrates statistically valid thresholds to filter a single generated answer per question. COIN estimates the empirical error rate on a calibration set and applies confidence interval methods to establish a high-probability upper bound on the true error rate. We demonstrate COIN's robustness in risk control, strong test-time power in retaining admissible answers, and predictive efficiency under limited calibration data.
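The recipe in this abstract, estimate the empirical error on a calibration set and convert it into a high-probability upper bound, can be illustrated with a one-sided Clopper-Pearson bound. This is a sketch under our own assumptions; the exact confidence-interval method and selection rule used by COIN may differ.

```python
import numpy as np
from scipy.stats import beta

def error_upper_bound(k, n, delta=0.05):
    # One-sided Clopper-Pearson bound: with probability >= 1 - delta,
    # the true error rate lies below this value given k errors in n trials.
    return 1.0 if k == n else float(beta.ppf(1.0 - delta, k + 1, n - k))

def calibrate_threshold(confidence, is_error, alpha=0.1, delta=0.05):
    # Scan confidence thresholds from loose to strict and keep the loosest
    # one whose retained answers carry a certified error bound <= alpha.
    # Illustrative selection rule only, not necessarily COIN's.
    for t in np.sort(confidence):
        keep = confidence >= t
        k, n = int(is_error[keep].sum()), int(keep.sum())
        if n > 0 and error_upper_bound(k, n, delta) <= alpha:
            return t
    return np.inf  # abstain everywhere: no threshold certifies alpha
```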
arXiv Detail & Related papers (2025-06-25T07:04:49Z)
- When Can We Reuse a Calibration Set for Multiple Conformal Predictions? [0.0]
We show how e-conformal prediction, in conjunction with Hoeffding's inequality, can enable the repeated use of a single calibration set. We train a deep neural network and utilise a calibration set to estimate a Hoeffding correction. This correction allows us to apply a modified Markov's inequality, leading to the construction of prediction sets with quantifiable confidence.
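The Hoeffding step amounts to a sample-size-dependent slack: with probability at least 1 - delta, the empirical mean of n i.i.d. scores bounded in [0, 1] is within sqrt(log(1/delta) / (2n)) of its true mean, so one calibration set can be certified once and then reused. A minimal sketch of that correction follows; the paper's pairing with e-values and a modified Markov inequality is not reproduced.

```python
import numpy as np

def hoeffding_slack(n, delta=0.05):
    # One-sided Hoeffding bound for [0, 1]-valued scores: the empirical
    # mean over n calibration points overshoots the true mean by more
    # than this slack with probability at most delta.
    return np.sqrt(np.log(1.0 / delta) / (2.0 * n))

# E.g., n = 10_000 and delta = 0.05 give a slack of about 0.0122, paid
# once and then valid for every subsequent use of the calibration set.
```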
arXiv Detail & Related papers (2025-06-24T14:57:25Z)
- Conformal Prediction Adaptive to Unknown Subpopulation Shifts [16.20577166571595]
Conformal prediction is widely used to equip black-box machine learning models with uncertainty quantification. In this work, we focus on unknown subpopulation shifts where we are not given group information. We propose new methods that provably adapt conformal prediction to such shifts, ensuring valid coverage without explicit knowledge of subpopulation structure.
arXiv Detail & Related papers (2025-06-05T20:58:39Z)
- Robust Conformal Outlier Detection under Contaminated Reference Data [20.864605211132663]
Conformal prediction is a flexible framework for calibrating machine learning predictions. In outlier detection, this calibration relies on a reference set of labeled inlier data to control the type-I error rate. This paper analyzes the impact of contamination on the validity of conformal methods.
arXiv Detail & Related papers (2025-02-07T10:23:25Z)
- Noise-Adaptive Conformal Classification with Marginal Coverage [53.74125453366155]
We introduce an adaptive conformal inference method capable of efficiently handling deviations from exchangeability caused by random label noise. We validate our method through extensive numerical experiments demonstrating its effectiveness on synthetic and real data sets.
arXiv Detail & Related papers (2025-01-29T23:55:23Z)
- Semi-Supervised Risk Control via Prediction-Powered Inference [14.890609936348277]
Risk-controlling prediction sets (RCPS) is a tool for transforming the output of any machine learning model to design a predictive rule with rigorous error rate control. We introduce a semi-supervised calibration procedure that leverages unlabeled data to rigorously tune the hyperparameter. Our procedure builds upon the prediction-powered inference framework, carefully tailoring it to risk-controlling tasks.
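The prediction-powered idea can be sketched as a mean estimate: take the model-predicted loss on plentiful unlabeled data and debias it with a "rectifier" computed on the small labeled set. Below is a generic PPI point estimate (our illustration; the paper additionally builds the confidence bounds needed for risk control).

```python
import numpy as np

def ppi_risk_estimate(loss_labeled, pred_loss_labeled, pred_loss_unlabeled):
    # Prediction-powered estimate of the expected loss:
    #   mean predicted loss on unlabeled data + labeled-set bias correction.
    # The rectifier removes the systematic error of the loss predictor,
    # so precision improves with the unlabeled sample size.
    rectifier = np.mean(loss_labeled - pred_loss_labeled)
    return float(np.mean(pred_loss_unlabeled) + rectifier)
```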
arXiv Detail & Related papers (2024-12-15T13:00:23Z)
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification. Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data. We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z)
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no regret decisions.
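As one concrete instance of a differentiable, kernel-based calibration penalty (in the spirit of MMCE-type metrics, not necessarily the paper's exact distribution-matching construction), one can penalize kernel-weighted alignment of calibration residuals:

```python
import numpy as np

def kernel_calibration_penalty(confidence, correct, bandwidth=0.2):
    # Residuals r_i = confidence_i - correctness_i; the penalty is near
    # zero when over- and under-confidence cancel locally, and large when
    # miscalibration clusters at similar confidence levels. The formula
    # is differentiable in `confidence`, so an autodiff-framework version
    # can serve directly as a training objective.
    r = confidence - correct.astype(float)
    d = np.abs(confidence[:, None] - confidence[None, :])
    K = np.exp(-d / bandwidth)  # Laplacian kernel on confidences
    return float(r @ K @ r) / len(confidence) ** 2
```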
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
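One way to read this is as a locally adaptive (normalized) nonconformity score whose difficulty estimate also sees the self-supervised error. A hypothetical sketch, where `difficulty_model` stands in for any regressor trained to predict residual magnitude from features that include the pretext-task error; all names are illustrative, not the paper's API.

```python
def ssl_adjusted_score(y, y_hat, x_features, ssl_error, difficulty_model):
    # Normalized nonconformity score |y - y_hat| / sigma_hat, where the
    # difficulty estimate sigma_hat comes from a model whose inputs
    # include the self-supervised (pretext-task) error as an extra feature.
    sigma_hat = difficulty_model.predict([[*x_features, ssl_error]])[0]
    return abs(y - y_hat) / max(sigma_hat, 1e-8)  # guard against zero scale
```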
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- Unsupervised Calibration under Covariate Shift [92.02278658443166]
We introduce the problem of calibration under domain shift and propose an importance sampling based approach to address it.
We evaluate and discuss the efficacy of our method on both real-world datasets and synthetic datasets.
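The importance-sampling idea can be made concrete with a weighted expected calibration error: reweight source-domain calibration data by w(x) = p_target(x) / p_source(x) so it stands in for the shifted target domain. A sketch under our own binning choices, not the paper's exact estimator:

```python
import numpy as np

def importance_weighted_ece(confidence, correct, weights, n_bins=10):
    # Expected calibration error where each source-domain example carries
    # an importance weight w(x) = p_target(x) / p_source(x), so that the
    # statistic estimates calibration on the target domain.
    bins = np.minimum((confidence * n_bins).astype(int), n_bins - 1)
    total, ece = weights.sum(), 0.0
    for b in range(n_bins):
        m = bins == b
        if not m.any():
            continue
        acc = np.average(correct[m].astype(float), weights=weights[m])
        conf = np.average(confidence[m], weights=weights[m])
        ece += (weights[m].sum() / total) * abs(acc - conf)
    return ece
```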
arXiv Detail & Related papers (2020-06-29T21:50:07Z)