Robust Yet Efficient Conformal Prediction Sets
- URL: http://arxiv.org/abs/2407.09165v1
- Date: Fri, 12 Jul 2024 10:59:44 GMT
- Title: Robust Yet Efficient Conformal Prediction Sets
- Authors: Soroush H. Zargarbashi, Mohammad Sadegh Akhondzadeh, Aleksandar Bojchevski
- Abstract summary: Conformal prediction (CP) can convert any model's output into prediction sets guaranteed to include the true label.
We derive provably robust sets by bounding the worst-case change in conformity scores.
- Score: 53.78604391939934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conformal prediction (CP) can convert any model's output into prediction sets guaranteed to include the true label with any user-specified probability. However, like the model itself, CP is vulnerable to adversarial test examples (evasion) and perturbed calibration data (poisoning). We derive provably robust sets by bounding the worst-case change in conformity scores. Our tighter bounds lead to more efficient sets. We cover both continuous and discrete (sparse) data, and our guarantees hold for both evasion and poisoning attacks (on both features and labels).
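For context, the sketch below shows plain split conformal prediction, the baseline that robust constructions like the paper's build on: calibrate a score threshold on held-out data, then keep every label whose score clears it. It is a minimal illustration under assumed inputs, not the paper's robust method; all names are illustrative.

```python
import numpy as np

def split_conformal_sets(cal_scores, test_scores, alpha=0.1):
    """Build prediction sets from conformity scores (lower = more conforming).

    cal_scores:  (n,) score of the true label for each calibration example.
    test_scores: (m, K) score of every candidate label for each test example.
    Returns a boolean (m, K) mask; True marks labels kept in the set.
    """
    n = len(cal_scores)
    # Conformal quantile with the finite-sample (n + 1) correction.
    level = min(np.ceil((1 - alpha) * (n + 1)) / n, 1.0)
    qhat = np.quantile(cal_scores, level, method="higher")
    # Keep every label whose score does not exceed the calibrated threshold.
    return test_scores <= qhat

# Toy usage with random stand-in scores (e.g., 1 - softmax probability).
rng = np.random.default_rng(0)
sets = split_conformal_sets(rng.uniform(size=500), rng.uniform(size=(3, 10)))
print(sets.sum(axis=1))  # size of each prediction set
```

By exchangeability, each set contains the true label with probability at least 1 - alpha; the paper's contribution is preserving a guarantee of this kind when test inputs or calibration data are adversarially perturbed.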
Related papers
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification.
Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data.
We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
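As a toy illustration of the failure mode such methods address (not the actual RPS construction), the sketch below makes the calibration quantile conservative against up to k adversarially lowered calibration scores via a simple worst-case rank shift; the budget k and all names are assumptions.

```python
import numpy as np

def poisoning_conservative_quantile(cal_scores, alpha=0.1, k=10):
    """Conservative conformal threshold when at most k of the n calibration
    scores may have been adversarially lowered. A toy worst-case rank
    argument, NOT the RPS method: k corrupted scores can displace the true
    (1 - alpha) quantile by at most k rank positions, so shift the index
    up by k."""
    scores = np.sort(np.asarray(cal_scores))
    n = len(scores)
    idx = int(np.ceil((1 - alpha) * (n + 1))) - 1  # nominal 0-based index
    return scores[min(idx + k, n - 1)]             # shifted, clamped index
```

The price of robustness is a larger threshold and hence larger sets, which is exactly the efficiency trade-off these papers try to tighten.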
arXiv Detail & Related papers (2024-10-13T15:37:11Z)
- Conformal Inductive Graph Neural Networks [58.450154976190795]
Conformal prediction (CP) transforms any model's output into prediction sets guaranteed to include (cover) the true label.
CP requires exchangeability, a relaxation of the i.i.d. assumption, to obtain a valid distribution-free coverage guarantee.
Conventional CP cannot be applied in inductive settings due to the implicit shift in the (calibration) scores caused by message passing with the new nodes.
We prove that the guarantee holds independently of the prediction time, e.g. upon arrival of a new node/edge or at any subsequent moment.
arXiv Detail & Related papers (2024-07-12T11:12:49Z)
- Verifiably Robust Conformal Prediction [1.391198481393699]
This paper introduces VRCP (Verifiably Robust Conformal Prediction), a new framework that leverages neural network verification methods to recover coverage guarantees under adversarial attacks.
Our method is the first to support perturbations bounded by arbitrary norms including $\ell_1$, $\ell_2$, and $\ell_\infty$, as well as regression tasks.
In every case, VRCP achieves above-nominal coverage and yields significantly more efficient and informative prediction regions than the state of the art (SotA).
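A minimal sketch of the idea, under strong simplifying assumptions: use interval bound propagation through a single linear layer as a stand-in for the neural-network verifier, take the negative logit as the conformity score, and keep a label whenever its verified worst-case score over the $\ell_\infty$ ball could still fall below the clean threshold. This is illustrative only; VRCP itself handles full networks, arbitrary norms, and regression.

```python
import numpy as np

def ibp_linear(x, eps, W, b):
    """Interval bounds on W @ x' + b over the ball ||x' - x||_inf <= eps.
    A one-layer stand-in for the verifiers VRCP plugs in."""
    center = W @ x + b
    radius = eps * np.abs(W).sum(axis=1)
    return center - radius, center + radius

def verified_robust_set(x, eps, W, b, qhat):
    """With conformity score s(x, y) = -logit_y, keep label y if its
    verified lowest attainable score over the ball is <= qhat. The clean
    input lies inside the ball around the observed one, so the true label
    keeps its coverage guarantee."""
    _, logit_hi = ibp_linear(x, eps, W, b)
    return -logit_hi <= qhat  # boolean mask over the K labels

rng = np.random.default_rng(0)
W, b, x = rng.normal(size=(5, 8)), rng.normal(size=5), rng.normal(size=8)
print(verified_robust_set(x, eps=0.1, W=W, b=b, qhat=0.0))
```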
arXiv Detail & Related papers (2024-05-29T09:50:43Z)
- Provably Robust Conformal Prediction with Improved Efficiency [29.70455766394585]
Conformal prediction is a powerful tool to generate uncertainty sets with guaranteed coverage.
Adversarial examples can manipulate conformal methods into constructing prediction sets with invalid coverage rates.
We propose two novel methods, Post-Training Transformation (PTT) and Robust Conformal Training (RCT), to effectively reduce prediction set size with little overhead.
arXiv Detail & Related papers (2024-04-30T15:49:01Z)
- Predicting generalization performance with correctness discriminators [64.00420578048855]
We present a novel model that establishes upper and lower bounds on the accuracy, without requiring gold labels for the unseen data.
We show across a variety of tagging, parsing, and semantic parsing tasks that the gold accuracy is reliably between the predicted upper and lower bounds.
arXiv Detail & Related papers (2023-11-15T22:43:42Z)
- PAC Prediction Sets Under Label Shift [52.30074177997787]
Prediction sets capture uncertainty by predicting sets of labels rather than individual labels.
We propose a novel algorithm for constructing prediction sets with PAC guarantees in the label shift setting.
We evaluate our approach on five datasets.
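A simplified sketch of calibration under label shift, in the spirit of weighted conformal prediction rather than the paper's exact PAC algorithm: reweight each calibration point by the (assumed known) likelihood ratio of its label under the target versus source distribution, then take a weighted quantile. The weight vector w is an assumption here; in practice it must be estimated.

```python
import numpy as np

def label_shift_quantile(cal_scores, cal_labels, w, alpha=0.1):
    """Weighted (1 - alpha) calibration quantile under label shift.

    w[y] ~ q_target(y) / p_source(y): importance weight of label y,
    assumed known for this sketch. Simplification: the test point's own
    weight, used by full weighted conformal prediction, is omitted.
    """
    weights = np.asarray(w)[cal_labels]
    order = np.argsort(cal_scores)
    scores, weights = np.asarray(cal_scores)[order], weights[order]
    # First score whose normalized cumulative weight reaches 1 - alpha.
    cum = np.cumsum(weights) / weights.sum()
    idx = min(np.searchsorted(cum, 1 - alpha), len(scores) - 1)
    return scores[idx]
```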
arXiv Detail & Related papers (2023-10-19T17:57:57Z)
- Practical Adversarial Multivalid Conformal Prediction [27.179891682629183]
We give a generic conformal prediction method for sequential prediction.
It achieves target empirical coverage guarantees against adversarially chosen data.
It is computationally lightweight -- comparable to split conformal prediction.
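A simple relative of this setting is online threshold tracking in the spirit of adaptive conformal inference (Gibbs & Candès, 2021), sketched below; it is a stand-in for, not an implementation of, the paper's multivalid method. After each round the threshold moves up on a miss and down on a cover, so empirical coverage tracks 1 - alpha even on adversarial score sequences.

```python
import numpy as np

def online_thresholds(true_scores, alpha=0.1, eta=0.05, q0=0.5):
    """Track a score threshold online so that the fraction of rounds with
    true_score <= threshold approaches 1 - alpha (pinball-style update)."""
    q, qs = q0, []
    for s in true_scores:          # conformity score of the true label at round t
        qs.append(q)
        err = float(s > q)         # 1 if the set at threshold q missed
        q += eta * (err - alpha)   # raise after a miss, lower after a cover
    return np.array(qs)

scores = np.random.default_rng(1).uniform(size=2000)
qs = online_thresholds(scores)
print(np.mean(scores <= qs))  # empirical coverage, close to 0.9
```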
arXiv Detail & Related papers (2022-06-02T14:33:00Z)
- Conformal Prediction Sets with Limited False Positives [43.596058175459746]
We develop a new approach to multi-label conformal prediction in which we aim to output a precise set of promising prediction candidates with a bounded number of incorrect answers.
We demonstrate the effectiveness of this approach across a number of classification tasks in natural language processing, computer vision, and computational chemistry.
arXiv Detail & Related papers (2022-02-15T18:52:33Z)
- Distribution-free uncertainty quantification for classification under label shift [105.27463615756733]
We focus on uncertainty quantification (UQ) for classification problems via two avenues.
We first argue that label shift hurts UQ by showing degradation in coverage and calibration.
We examine these techniques theoretically in a distribution-free framework and demonstrate their excellent practical performance.
arXiv Detail & Related papers (2021-03-04T20:51:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.