Post-selection Inference for Conformal Prediction: Trading off Coverage for Precision
- URL: http://arxiv.org/abs/2304.06158v3
- Date: Fri, 30 Jun 2023 21:25:28 GMT
- Title: Post-selection Inference for Conformal Prediction: Trading off Coverage for Precision
- Authors: Siddhaarth Sarkar, Arun Kumar Kuchibhotla
- Abstract summary: Traditionally, conformal prediction inference requires a data-independent specification of the miscoverage level.
We develop simultaneous conformal inference to account for data-dependent miscoverage levels.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conformal inference has played a pivotal role in providing uncertainty
quantification for black-box ML prediction algorithms with finite sample
guarantees. Traditionally, conformal prediction inference requires a
data-independent specification of the miscoverage level. In practical
applications, one might want to update the miscoverage level after computing
the prediction set. For example, in the context of binary classification, the
analyst might start with 95$\%$ prediction sets and see that most of them
contain both outcome classes. Since prediction sets containing both classes
are undesirable, the analyst might instead consider, say, 80$\%$ prediction
sets. Constructing prediction sets that guarantee coverage at a data-dependent
miscoverage level can be cast as a post-selection inference problem. In this
work, we
develop simultaneous conformal inference to account for data-dependent
miscoverage levels. Under the assumption of independent and identically
distributed observations, our proposed methods have a finite sample
simultaneous guarantee over all miscoverage levels. This allows practitioners
to freely trade coverage probability for the quality of the prediction set, by
any criterion of their choice (say, the size of the prediction set), while
maintaining finite sample guarantees similar to those of traditional conformal
inference.
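To make the trade-off concrete, below is a minimal sketch of standard split
conformal prediction for binary classification at two fixed miscoverage
levels. The data, model, and score function are illustrative assumptions, not
the paper's construction; the closing comment notes why re-selecting the level
after inspecting the sets is exactly the post-selection problem the paper
addresses.

```python
# Minimal split conformal sketch (illustrative assumptions throughout:
# synthetic data, logistic regression, score = 1 - prob of the true class).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)

# Split: fit the model on one half, calibrate on the other.
X_fit, y_fit = X[:1000], y[:1000]
X_cal, y_cal = X[1000:], y[1000:]
model = LogisticRegression().fit(X_fit, y_fit)

# Nonconformity score: one minus the model's probability of the true label.
probs_cal = model.predict_proba(X_cal)
scores = 1.0 - probs_cal[np.arange(len(y_cal)), y_cal]

def prediction_set(x, alpha):
    """Split conformal prediction set at fixed miscoverage level alpha."""
    n = len(scores)
    # Standard finite-sample quantile correction: ceil((n+1)(1-alpha))/n.
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    probs = model.predict_proba(x.reshape(1, -1))[0]
    return [label for label in (0, 1) if 1.0 - probs[label] <= q]

x_new = rng.normal(size=5)
print(prediction_set(x_new, alpha=0.05))  # 95% set: often both classes
print(prediction_set(x_new, alpha=0.20))  # 80% set: often a single class
# Caveat: choosing alpha *after* inspecting these sets invalidates the
# per-alpha guarantee above; that post-selection problem is what the paper's
# simultaneous (over all alpha) guarantee is designed to handle.
```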
Related papers
- Bin-Conditional Conformal Prediction of Fatalities from Armed Conflict [0.5312303275762104]
We introduce a novel extension to the conformal prediction algorithm, which we call bin-conditional conformal prediction.
This method allows users to obtain individual-level prediction intervals for any arbitrary prediction model.
We apply the bin-conditional conformal prediction algorithm to forecast fatalities from armed conflict.
arXiv Detail & Related papers (2024-10-18T14:41:42Z)
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification.
Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data.
We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z)
- Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
arXiv Detail & Related papers (2024-07-01T20:44:48Z)
- Equal Opportunity of Coverage in Fair Regression [50.76908018786335]
We study fair machine learning (ML) under predictive uncertainty to enable reliable and trustworthy decision-making.
We propose Equal Opportunity of Coverage (EOC) that aims to achieve two properties: (1) coverage rates for different groups with similar outcomes are close, and (2) the coverage rate for the entire population remains at a predetermined level.
arXiv Detail & Related papers (2023-11-03T21:19:59Z)
- PAC Prediction Sets Under Label Shift [52.30074177997787]
Prediction sets capture uncertainty by predicting sets of labels rather than individual labels.
We propose a novel algorithm for constructing prediction sets with PAC guarantees in the label shift setting.
We evaluate our approach on five datasets.
arXiv Detail & Related papers (2023-10-19T17:57:57Z)
- Conformal Prediction for Deep Classifier via Label Ranking [29.784336674173616]
Conformal prediction is a statistical framework that generates prediction sets with a desired coverage guarantee.
We propose a novel algorithm named $\textit{Sorted Adaptive Prediction Sets}$ (SAPS).
SAPS discards all the probability values except for the maximum softmax probability; a hedged sketch of this style of score appears after this list.
arXiv Detail & Related papers (2023-10-10T08:54:14Z)
- Distribution-Free Finite-Sample Guarantees and Split Conformal Prediction [0.0]
Split conformal prediction represents a promising avenue to obtain finite-sample guarantees under minimal distribution-free assumptions.
We highlight the connection between split conformal prediction and classical tolerance predictors developed in the 1940s.
arXiv Detail & Related papers (2022-10-26T14:12:24Z)
- Predictive Inference with Feature Conformal Prediction [80.77443423828315]
We propose feature conformal prediction, which extends the scope of conformal prediction to semantic feature spaces.
From a theoretical perspective, we demonstrate that feature conformal prediction provably outperforms regular conformal prediction under mild assumptions.
Our approach could be combined with not only vanilla conformal prediction, but also other adaptive conformal prediction methods.
arXiv Detail & Related papers (2022-10-01T02:57:37Z)
- Practical Adversarial Multivalid Conformal Prediction [27.179891682629183]
We give a generic conformal prediction method for sequential prediction.
It achieves target empirical coverage guarantees against adversarially chosen data.
It is computationally lightweight -- comparable to split conformal prediction.
arXiv Detail & Related papers (2022-06-02T14:33:00Z)
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
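As promised in the SAPS entry above, here is a loose sketch of that style of
rank-based nonconformity score, in which only the maximum softmax probability
is retained and other labels contribute through their rank. The exact
functional form, the weight `lam`, and the uniform randomization are
assumptions for illustration, not a verified transcription of the paper's
algorithm.

```python
# A SAPS-style score sketch (assumed form, not the paper's exact definition):
# the top-ranked label is scored with only the max softmax probability, and
# lower-ranked labels add a rank penalty weighted by lam.
import numpy as np

def saps_style_score(softmax, label, lam=0.1, u=None, rng=None):
    """Rank-based score using only the max softmax probability.

    softmax: 1-D array of class probabilities for one input.
    label:   candidate class whose score we compute.
    lam:     weight on the rank term (hypothetical default).
    u:       uniform randomization in [0, 1) for exact coverage.
    """
    if u is None:
        u = (rng or np.random.default_rng()).uniform()
    p_max = softmax.max()
    rank = int((softmax > softmax[label]).sum()) + 1  # 1 = top-ranked class
    if rank == 1:
        return u * p_max
    return p_max + lam * (rank - 2 + u)

# The prediction set at level alpha would keep every label whose score is at
# most the conformal quantile of calibration scores, as in the split
# conformal sketch after the abstract above.
probs = np.array([0.70, 0.20, 0.07, 0.03])
print([saps_style_score(probs, k, u=0.5) for k in range(4)])
```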