Online Conformal Selection with Accept-to-Reject Changes
- URL: http://arxiv.org/abs/2508.13838v1
- Date: Tue, 19 Aug 2025 13:58:38 GMT
- Title: Online Conformal Selection with Accept-to-Reject Changes
- Authors: Kangdao Liu, Huajun Xi, Chi-Man Vong, Hongxin Wei
- Abstract summary: Online Conformal Selection with Accept-to-Reject Changes (dubbed OCS-ARC) is proposed. It incorporates the online Benjamini-Hochberg procedure into the candidate selection process. We provide theoretical guarantees that OCS-ARC controls the false discovery rate (FDR) at or below the nominal level at any timestep.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Selecting a subset of promising candidates from a large pool is crucial across various scientific and real-world applications. Conformal selection offers a distribution-free and model-agnostic framework for candidate selection with uncertainty quantification. While effective in offline settings, its application to online scenarios, where data arrives sequentially, poses challenges. Notably, conformal selection permits the deselection of previously selected candidates, which is incompatible with applications requiring irreversible selection decisions. This limitation is particularly evident in resource-intensive sequential processes, such as drug discovery, where advancing a compound to subsequent stages renders reversal impractical. To address this issue, we extend conformal selection to an online Accept-to-Reject Changes (ARC) procedure: non-selected data points can be reconsidered for selection later, but once a candidate is selected, the decision is irreversible. Specifically, we propose a novel conformal selection method, Online Conformal Selection with Accept-to-Reject Changes (dubbed OCS-ARC), which incorporates the online Benjamini-Hochberg procedure into the candidate selection process. We provide theoretical guarantees that OCS-ARC controls the false discovery rate (FDR) at or below the nominal level at any timestep under both i.i.d. and exchangeable data assumptions. Additionally, we theoretically show that our approach naturally extends to multivariate response settings. Extensive experiments on synthetic and real-world datasets demonstrate that OCS-ARC significantly improves selection power over the baseline while maintaining valid FDR control across all examined timesteps.
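To make the building blocks concrete, the following is a minimal offline sketch of conformal selection with Benjamini-Hochberg FDR control: conformal p-values are computed for test candidates against a calibration set, then the standard BH step-up rule selects candidates. This is an illustrative simplification, not the paper's online OCS-ARC procedure; all function names and the toy data are assumptions for illustration.

```python
import numpy as np

def conformal_pvalues(calib_scores, test_scores):
    """Conformal p-value for each test candidate: the (plus-one corrected)
    fraction of calibration scores at least as large as the test score.
    Large scores indicate promising candidates, so p is small when the
    candidate outscores most of the calibration set."""
    calib = np.asarray(calib_scores)
    n = len(calib)
    return np.array([(1 + np.sum(calib >= s)) / (n + 1) for s in test_scores])

def benjamini_hochberg(pvals, q=0.1):
    """BH step-up rule: select the indices of the k smallest p-values,
    where k is the largest rank with p_(k) <= k * q / m."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = q * np.arange(1, m + 1) / m
    below = pvals[order] <= thresholds
    if not below.any():
        return np.array([], dtype=int)
    k = np.max(np.nonzero(below)[0])
    return np.sort(order[:k + 1])

# Toy example: 20 promising candidates mixed with 80 nulls.
rng = np.random.default_rng(0)
calib_scores = rng.normal(0.0, 1.0, size=200)
test_scores = np.concatenate([rng.normal(3.0, 1.0, 20),
                              rng.normal(0.0, 1.0, 80)])
pvals = conformal_pvalues(calib_scores, test_scores)
selected = benjamini_hochberg(pvals, q=0.1)
```

OCS-ARC's contribution is to run this kind of BH-based selection online, so that a candidate rejected now may be accepted later, while an accepted candidate is never revoked.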
Related papers
- Online Selective Conformal Prediction with Asymmetric Rules: A Permutation Test Approach [9.317702091531174]
Selective conformal prediction aims to construct prediction sets with valid coverage for a test unit conditional on it being selected by a data-driven mechanism. Existing methods only address a limited collection of selection mechanisms. We propose PErmutation-based Mondrian conformal Inference (PEMI) for selective conformal prediction with arbitrary asymmetric selection rules.
arXiv Detail & Related papers (2026-02-10T17:39:36Z) - Online selective conformal inference: adaptive scores, convergence rate and optimality [4.7198252163006345]
We introduce an extended version of the point-prediction algorithm, called OnlineSCI, allowing the user to select times at which such an inference should be made. OnlineSCI encompasses several prominent online selective tasks, such as building prediction intervals for extreme outcomes, classification with abstention, and online testing. We show that the adaptive versions of OnlineSCI can converge to an optimal solution and provide an explicit convergence rate in each of the aforementioned application cases.
arXiv Detail & Related papers (2025-08-14T04:36:14Z) - COIN: Uncertainty-Guarding Selective Question Answering for Foundation Models with Provable Risk Guarantees [51.5976496056012]
COIN is an uncertainty-guarding selection framework that calibrates statistically valid thresholds to filter a single generated answer per question. COIN estimates the empirical error rate on a calibration set and applies confidence interval methods to establish a high-probability upper bound on the true error rate. We demonstrate COIN's robustness in risk control, strong test-time power in retaining admissible answers, and predictive efficiency under limited calibration data.
arXiv Detail & Related papers (2025-06-25T07:04:49Z) - A Principled Approach to Randomized Selection under Uncertainty: Applications to Peer Review and Grant Funding [68.43987626137512]
We propose a principled framework for randomized decision-making based on interval estimates of the quality of each item. We introduce MERIT, an optimization-based method that maximizes the worst-case expected number of top candidates selected. We prove that MERIT satisfies desirable axiomatic properties not guaranteed by existing approaches.
arXiv Detail & Related papers (2025-06-23T19:59:30Z) - Multivariate Conformal Selection [9.431551477608528]
We propose a generalization of Conformal Selection (CS) to provide rigorous uncertainty quantification. We present two variants: mCS-dist, using distance-based scores, and mCS-learn, which learns optimal scores via differentiable optimization. Experiments on simulated and real-world datasets demonstrate that mCS significantly improves selection power while maintaining False Discovery Rate (FDR) control.
arXiv Detail & Related papers (2025-05-01T23:33:57Z) - Online Selective Conformal Prediction: Errors and Solutions [29.43493007296859]
We evaluate existing calibration selection strategies and pinpoint some fundamental errors in the associated claims. We demonstrate that online selective conformal inference with these strategies guarantees both selection-conditional coverage and FCR control.
arXiv Detail & Related papers (2025-03-21T02:37:28Z) - Online Conformal Probabilistic Numerics via Adaptive Edge-Cloud Offloading [52.499838151272016]
This work introduces a new method to calibrate the HPD sets produced by PLS with the aim of guaranteeing long-term coverage requirements. The proposed method, referred to as online conformal prediction-PLS (OCP-PLS), assumes sporadic feedback from cloud to edge. The validity of OCP-PLS is verified via experiments that bring insights into trade-offs between coverage, prediction set size, and cloud usage.
arXiv Detail & Related papers (2025-03-18T17:30:26Z) - SKADA-Bench: Benchmarking Unsupervised Domain Adaptation Methods with Realistic Validation On Diverse Modalities [55.87169702896249]
Unsupervised Domain Adaptation (DA) consists of adapting a model trained on a labeled source domain to perform well on an unlabeled target domain with some data distribution shift. We present a complete and fair evaluation of existing shallow algorithms, including reweighting, mapping, and subspace alignment. Our benchmark highlights the importance of realistic validation and provides practical guidance for real-life applications.
arXiv Detail & Related papers (2024-07-16T12:52:29Z) - Detecting and Identifying Selection Structure in Sequential Data [53.24493902162797]
We argue that the selective inclusion of data points based on latent objectives is common in practical situations, such as music sequences.
We show that selection structure is identifiable without any parametric assumptions or interventional experiments.
We also propose a provably correct algorithm to detect and identify selection structures as well as other types of dependencies.
arXiv Detail & Related papers (2024-06-29T20:56:34Z) - CAP: A General Algorithm for Online Selective Conformal Prediction with FCR Control [4.137346786534721]
It is important to control the real-time false coverage-statement rate (FCR), which measures the overall miscoverage level. We develop a general framework named CAP that performs an adaptive pick rule on historical data to construct a calibration set. We prove that CAP can achieve an exact selection-conditional coverage guarantee in the finite-sample and distribution-free regimes.
arXiv Detail & Related papers (2024-03-12T15:07:20Z) - Large Language Models Are Not Robust Multiple Choice Selectors [117.72712117510953]
Multiple choice questions (MCQs) serve as a common yet important task format in the evaluation of large language models (LLMs).
This work shows that modern LLMs are vulnerable to option position changes due to their inherent "selection bias".
We propose a label-free, inference-time debiasing method, called PriDe, which separates the model's prior bias for option IDs from the overall prediction distribution.
arXiv Detail & Related papers (2023-09-07T17:44:56Z) - Best Arm Identification for Stochastic Rising Bandits [84.55453174601826]
Stochastic Rising Bandits (SRBs) model sequential decision-making problems in which the expected reward of the available options increases every time they are selected.
This paper focuses on the fixed-budget Best Arm Identification (BAI) problem for SRBs.
We propose two algorithms to tackle the above-mentioned setting, namely R-UCBE and R-SR.
arXiv Detail & Related papers (2023-02-15T08:01:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.