Conformalized Credal Set Predictors
- URL: http://arxiv.org/abs/2402.10723v1
- Date: Fri, 16 Feb 2024 14:30:12 GMT
- Title: Conformalized Credal Set Predictors
- Authors: Alireza Javanmardi, David Stutz, Eyke Hüllermeier
- Abstract summary: Credal sets are sets of probability distributions that are considered as candidates for an imprecisely known ground-truth distribution.
We make use of conformal prediction for learning credal set predictors.
We demonstrate the applicability of our method to natural language inference.
- Score: 12.549746646074071
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Credal sets are sets of probability distributions that are considered as
candidates for an imprecisely known ground-truth distribution. In machine
learning, they have recently attracted attention as an appealing formalism for
uncertainty representation, in particular due to their ability to represent
both the aleatoric and epistemic uncertainty in a prediction. However, the
design of methods for learning credal set predictors remains a challenging
problem. In this paper, we make use of conformal prediction for this purpose.
More specifically, we propose a method for predicting credal sets in the
classification task, given training data labeled by probability distributions.
Since our method inherits the coverage guarantees of conformal prediction, our
conformal credal sets are guaranteed to be valid with high probability (without
any assumptions on model or distribution). We demonstrate the applicability of
our method to natural language inference, a highly ambiguous natural language
task where it is common to obtain multiple annotations per example.
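The coverage guarantee referenced above is inherited from standard split conformal prediction. As context, here is a minimal sketch of split conformal prediction for ordinary label sets, not the paper's credal-set construction itself; the `predict_proba` interface and all names are assumptions in the style of scikit-learn classifiers.

```python
import numpy as np

def conformal_prediction_sets(model, X_cal, y_cal, X_test, alpha=0.1):
    """Prediction sets that contain the true label with probability
    >= 1 - alpha, marginally, under exchangeability (no model or
    distributional assumptions beyond that)."""
    # Nonconformity score on calibration data: one minus the predicted
    # probability of the true class.
    cal_probs = model.predict_proba(X_cal)
    scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

    # Conformal quantile with the finite-sample correction.
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, level, method="higher")

    # Each test set includes every class whose score stays below the threshold.
    test_probs = model.predict_proba(X_test)
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]
```

The paper lifts this mechanism from sets of labels to sets of probability distributions (credal sets), calibrating against the annotator-derived distributions the abstract describes.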
Related papers
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification.
Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data.
We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z)
- Trustworthy Classification through Rank-Based Conformal Prediction Sets [9.559062601251464]
We propose a novel conformal prediction method that employs a rank-based score function suitable for classification models.
Our approach constructs prediction sets that achieve the desired coverage rate while managing their size.
Our contributions include a novel conformal prediction method, theoretical analysis, and empirical evaluation.
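The summary does not spell out the score function, but a natural rank-based nonconformity score is the rank of the true class under the model's predicted probabilities. The sketch below follows that idea and is an illustrative assumption, not necessarily the authors' exact construction.

```python
import numpy as np

def rank_scores(probs, labels):
    """Rank of the true class when classes are sorted by predicted
    probability (0 = top-ranked class)."""
    order = np.argsort(-probs, axis=1)   # class indices, most probable first
    ranks = np.argsort(order, axis=1)    # rank of every class, per row
    return ranks[np.arange(len(labels)), labels]

def rank_based_sets(probs_cal, y_cal, probs_test, alpha=0.1):
    # Calibrate a rank threshold with the usual conformal quantile.
    scores = rank_scores(probs_cal, y_cal)
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    # Prediction set: the top-(q + 1) ranked classes for each test input.
    test_ranks = np.argsort(np.argsort(-probs_test, axis=1), axis=1)
    return [np.where(r <= q)[0] for r in test_ranks]
```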
arXiv Detail & Related papers (2024-07-05T10:43:41Z)
- Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
arXiv Detail & Related papers (2024-07-01T20:44:48Z)
- Robust Conformal Prediction Using Privileged Information [17.886554223172517]
We develop a method to generate prediction sets with a guaranteed coverage rate that is robust to corruptions in the training data.
Our approach builds on conformal prediction, a powerful framework to construct prediction sets that are valid under the i.i.d. assumption.
arXiv Detail & Related papers (2024-06-08T08:56:47Z)
- Quantifying Aleatoric and Epistemic Uncertainty with Proper Scoring Rules [19.221081896134567]
Uncertainty representation and quantification are paramount in machine learning.
We propose measures for the quantification of aleatoric and epistemic uncertainty based on proper scoring rules.
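A common instantiation of such measures uses the log score, a proper scoring rule whose expectation is Shannon entropy: over an ensemble of predictions, total uncertainty is the entropy of the averaged prediction, aleatoric uncertainty the average entropy, and epistemic uncertainty the gap between them. The sketch below shows this standard entropy-based decomposition; it is illustrative, not necessarily the exact measures the paper proposes.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy; the expected log score of a distribution."""
    return -np.sum(p * np.log(p + eps), axis=-1)

def uncertainty_decomposition(ensemble_probs):
    """ensemble_probs: (n_members, n_classes) predictions for one input,
    e.g. from an ensemble or MC-dropout samples (an assumption here)."""
    total = entropy(ensemble_probs.mean(axis=0))  # entropy of mean prediction
    aleatoric = entropy(ensemble_probs).mean()    # mean per-member entropy
    epistemic = total - aleatoric                 # the disagreement gap
    return total, aleatoric, epistemic
```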
arXiv Detail & Related papers (2024-04-18T14:20:19Z)
- Predicting generalization performance with correctness discriminators [64.00420578048855]
We present a novel model that establishes upper and lower bounds on the accuracy, without requiring gold labels for the unseen data.
We show across a variety of tagging, parsing, and semantic parsing tasks that the gold accuracy is reliably between the predicted upper and lower bounds.
arXiv Detail & Related papers (2023-11-15T22:43:42Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice.
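Confidence-based deferral is simple to state concretely: run the cheap model first and hand off only the low-confidence examples. A minimal two-stage sketch, assuming scikit-learn-style `predict_proba`/`predict` interfaces and integer class labels (all names here are assumptions):

```python
import numpy as np

def cascade_predict(small_model, large_model, X, threshold=0.8):
    """Answer with the small model where its max softmax confidence
    clears the threshold; defer the remaining examples to the large model."""
    probs = small_model.predict_proba(X)
    confident = probs.max(axis=1) >= threshold
    preds = probs.argmax(axis=1)
    # Invoke the expensive model only on the deferred (low-confidence) rows.
    if (~confident).any():
        preds[~confident] = large_model.predict(X[~confident])
    return preds, confident  # `confident` marks the non-deferred examples
```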
arXiv Detail & Related papers (2023-07-06T04:13:57Z)
- Test-time Recalibration of Conformal Predictors Under Distribution Shift Based on Unlabeled Examples [30.61588337557343]
Conformal predictors provide uncertainty estimates by computing a set of classes that covers the true class with a user-specified probability.
We propose a method that provides excellent uncertainty estimates under natural distribution shifts.
arXiv Detail & Related papers (2022-10-09T04:46:00Z)
- Conformal Prediction Sets with Limited False Positives [43.596058175459746]
We develop a new approach to multi-label conformal prediction in which we aim to output a precise set of promising prediction candidates with a bounded number of incorrect answers.
We demonstrate the effectiveness of this approach across a number of classification tasks in natural language processing, computer vision, and computational chemistry.
arXiv Detail & Related papers (2022-02-15T18:52:33Z)
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.