Optimal Conformal Prediction, E-values, Fuzzy Prediction Sets and Subsequent Decisions
- URL: http://arxiv.org/abs/2509.13130v1
- Date: Tue, 16 Sep 2025 14:46:18 GMT
- Title: Optimal Conformal Prediction, E-values, Fuzzy Prediction Sets and Subsequent Decisions
- Authors: Nick W. Koning, Sam van Meer
- Abstract summary: We make three contributions to conformal prediction. First, we propose fuzzy conformal confidence sets that offer a degree of exclusion. Second, we derive optimal conformal confidence sets. Third, we generalize the inheritance of guarantees by subsequent minimax decisions from confidence sets to fuzzy confidence sets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We make three contributions to conformal prediction. First, we propose fuzzy conformal confidence sets that offer a degree of exclusion, generalizing beyond the binary inclusion/exclusion offered by classical confidence sets. We connect fuzzy confidence sets to e-values to show this degree of exclusion is equivalent to an exclusion at different confidence levels, capturing precisely what e-values bring to conformal prediction. We show that a fuzzy confidence set is a predictive distribution with a more appropriate error guarantee. Second, we derive optimal conformal confidence sets by interpreting the minimization of the expected measure of the confidence set as an optimal testing problem against a particular alternative. We use this to characterize exactly in what sense traditional conformal prediction is optimal. Third, we generalize the inheritance of guarantees by subsequent minimax decisions from confidence sets to fuzzy confidence sets. All our results generalize beyond the exchangeable conformal setting to prediction sets for arbitrary models. In particular, we find that any valid test (e-value) for a hypothesis automatically defines a (fuzzy) prediction confidence set.
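The relationship between classical binary prediction sets and the fuzzy, e-value-flavored ones described in the abstract can be illustrated with a minimal split-conformal sketch. This is a standard split-conformal construction, not the paper's exact method; the "degree of exclusion" here is simply one minus a conformal p-value, and all names and the toy data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration data: y = x + noise, with a simple point predictor f(x) = x.
x_cal = rng.uniform(0, 10, 200)
y_cal = x_cal + rng.normal(0, 1, 200)
scores = np.abs(y_cal - x_cal)  # nonconformity scores |y - f(x)|

alpha = 0.1
n = len(scores)
# Classical split-conformal quantile: the ceil((n+1)(1-alpha))/n empirical quantile.
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

def classical_set(x_new):
    """Binary prediction interval: include y iff its nonconformity score is <= q."""
    return (x_new - q, x_new + q)

def fuzzy_exclusion(x_new, y_candidate):
    """Degree of exclusion via a conformal p-value: a small p-value means
    y_candidate can be excluded at a high confidence level (1 - p)."""
    s = abs(y_candidate - x_new)
    p = (np.sum(scores >= s) + 1) / (n + 1)  # conformal p-value
    return 1 - p                              # degree of exclusion in [0, 1]

lo, hi = classical_set(5.0)
print(lo, hi)
print(fuzzy_exclusion(5.0, 5.0))  # near the prediction: low exclusion
print(fuzzy_exclusion(5.0, 9.0))  # far from the prediction: high exclusion
```

The binary set answers "in or out" at one fixed level alpha; the fuzzy variant grades each candidate by the highest confidence level at which it can be excluded, which is the sense in which the abstract says e-values extend classical conformal sets.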
Related papers
- Fundamental bounds on efficiency-confidence trade-off for transductive conformal prediction [20.813562569312186]
We demonstrate a fundamental trade-off between confidence and efficiency in transductive methods. We show that any non-trivial confidence level leads to exponential growth in prediction set size for data with inherent uncertainty.
arXiv Detail & Related papers (2025-09-04T19:49:58Z)
- COIN: Uncertainty-Guarding Selective Question Answering for Foundation Models with Provable Risk Guarantees [51.5976496056012]
COIN is an uncertainty-guarding selection framework that calibrates statistically valid thresholds to filter a single generated answer per question. COIN estimates the empirical error rate on a calibration set and applies confidence-interval methods to establish a high-probability upper bound on the true error rate. We demonstrate COIN's robustness in risk control, strong test-time power in retaining admissible answers, and predictive efficiency under limited calibration data.
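The calibration step described in this summary can be sketched with a simple one-sided Hoeffding bound on the true error rate. This is an illustrative stand-in, not COIN's actual procedure, which may use tighter confidence-interval methods:

```python
import math

def error_upper_bound(k_errors, n_cal, delta=0.05):
    """One-sided Hoeffding bound: with probability >= 1 - delta over the
    calibration draw, the true error rate lies below the returned value."""
    p_hat = k_errors / n_cal
    eps = math.sqrt(math.log(1 / delta) / (2 * n_cal))
    return min(1.0, p_hat + eps)

# 12 errors observed on 500 calibration answers:
print(error_upper_bound(12, 500))  # slightly above the empirical rate 0.024
```

A threshold calibrated so that the bound stays below a target risk level then controls the true error rate with high probability.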
arXiv Detail & Related papers (2025-06-25T07:04:49Z)
- Optimal Conformal Prediction under Epistemic Uncertainty [61.46247583794497]
Conformal prediction (CP) is a popular framework for representing uncertainty. We introduce Bernoulli prediction sets (BPS), which produce the smallest prediction sets that ensure conditional coverage. When given first-order predictions, BPS reduces to the well-known adaptive prediction sets (APS).
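The adaptive prediction sets (APS) mentioned here can be sketched as a greedy construction from class probabilities. This is a deterministic, illustrative variant; full APS additionally uses randomized tie-breaking, and the fixed threshold below stands in for a conformally calibrated one:

```python
import numpy as np

def aps_set(probs, tau=0.9):
    """Greedy APS-style set: add classes in decreasing probability
    until their cumulative mass reaches tau (deterministic variant)."""
    order = np.argsort(probs)[::-1]          # classes sorted by probability
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, tau)) + 1   # smallest prefix covering tau
    return sorted(order[:k].tolist())

probs = np.array([0.6, 0.25, 0.1, 0.05])
print(aps_set(probs, tau=0.9))  # -> [0, 1, 2]
```

Confident predictions yield small sets, while diffuse probability vectors force larger sets, which is the adaptivity that gives APS its name.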
arXiv Detail & Related papers (2025-05-25T08:32:44Z)
- SConU: Selective Conformal Uncertainty in Large Language Models [59.25881667640868]
We propose a novel approach termed Selective Conformal Uncertainty (SConU). We develop two conformal p-values that are instrumental in determining whether a given sample deviates from the uncertainty distribution of the calibration set at a specific, manageable risk level. Our approach not only facilitates rigorous management of miscoverage rates across both single-domain and interdisciplinary contexts, but also enhances the efficiency of predictions.
arXiv Detail & Related papers (2025-04-19T03:01:45Z)
- Conformal Prediction Sets with Improved Conditional Coverage using Trust Scores [52.92618442300405]
It is impossible to achieve exact, distribution-free conditional coverage in finite samples. We propose an alternative conformal prediction algorithm that targets coverage where it matters most.
arXiv Detail & Related papers (2025-01-17T12:01:56Z)
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification. Yet, conformal prediction is not reliable under poisoning attacks, where adversaries manipulate both training and calibration data. We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z)
- Beyond Uncertainty Quantification: Learning Uncertainty for Trust-Informed Neural Network Decisions - A Case Study in COVID-19 Classification [7.383605511698832]
Reliable uncertainty quantification is critical in high-stakes applications, such as medical diagnosis. Traditional uncertainty quantification methods rely on a predefined confidence threshold to classify predictions as confident or uncertain. This approach assumes that predictions exceeding the threshold are trustworthy, while those below it are uncertain, without explicitly assessing the correctness of high-confidence predictions. This study proposes an uncertainty-aware stacked neural network, which extends conventional uncertainty quantification by learning when predictions should be trusted.
arXiv Detail & Related papers (2024-09-19T04:20:12Z)
- Conformalizing Machine Translation Evaluation [9.89901717499058]
Several uncertainty estimation methods have been recently proposed for machine translation evaluation.
We show that the majority of them tend to underestimate model uncertainty, and as a result they often produce misleading confidence intervals that do not cover the ground truth.
We propose as an alternative the use of conformal prediction, a distribution-free method to obtain confidence intervals with a theoretically established guarantee on coverage.
arXiv Detail & Related papers (2023-06-09T19:36:18Z)
- Human-Aligned Calibration for AI-Assisted Decision Making [19.767213234234855]
We show that, if the confidence values satisfy a natural alignment property with respect to the decision maker's confidence on her own predictions, there always exists an optimal decision policy.
We show that multicalibration with respect to the decision maker's confidence on her own predictions is a sufficient condition for alignment.
arXiv Detail & Related papers (2023-05-31T18:00:14Z)
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.