Skeptical inferences in multi-label ranking with sets of probabilities
- URL: http://arxiv.org/abs/2210.08576v1
- Date: Sun, 16 Oct 2022 16:17:56 GMT
- Title: Skeptical inferences in multi-label ranking with sets of probabilities
- Authors: Yonatan Carlos Carranza Alarcón, Vu-Linh Nguyen
- Abstract summary: We consider the problem of making skeptical inferences for the multi-label ranking problem.
We seek skeptical inferences in terms of set-valued predictions consisting of completed rankings.
- Score: 3.883460584034766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we consider the problem of making skeptical inferences for the
multi-label ranking problem. We assume that our uncertainty is described by a
convex set of probabilities (i.e. a credal set), defined over the set of
labels. Instead of learning a singleton prediction (i.e., a single completed
ranking over the labels), we seek skeptical inferences in the form of
set-valued predictions consisting of completed rankings.
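To make the idea concrete, here is a minimal sketch of one natural way to turn a credal set into a set-valued ranking prediction: a pairwise preference "label a above label b" is kept only if its probability exceeds 1/2 under every distribution in the set. The finite list of candidate distributions over rankings, the 1/2 threshold, and the function name are illustrative assumptions, not the paper's exact decision rule.

```python
def skeptical_pairwise_preferences(credal_set, n_labels):
    """Pairwise preferences 'a ranked above b' that hold under *every*
    distribution in the credal set (illustrative decision rule).

    credal_set : list of dicts mapping a ranking (tuple of label indices,
                 best first) to its probability -- a finite set of candidate
                 distributions standing in for the convex credal set.
    """
    kept = set()
    for a in range(n_labels):
        for b in range(n_labels):
            if a == b:
                continue
            # Lower probability of the event "a is ranked above b".
            lower = min(
                sum(p for ranking, p in dist.items()
                    if ranking.index(a) < ranking.index(b))
                for dist in credal_set
            )
            if lower > 0.5:          # preference is robust across the set
                kept.add((a, b))
    return kept                      # a partial order: the skeptical prediction


# Toy example: 3 labels, two candidate distributions over rankings.
d1 = {(0, 1, 2): 0.7, (1, 0, 2): 0.3}
d2 = {(0, 2, 1): 0.6, (0, 1, 2): 0.4}
print(skeptical_pairwise_preferences([d1, d2], n_labels=3))
# {(0, 1), (0, 2)}: label 0 is robustly preferred; the pair (1, 2) stays undecided.
```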
Related papers
- A Conformal Prediction Score that is Robust to Label Noise [13.22445242068721]
We introduce a conformal score that is robust to label noise.
The noise-free conformal score is estimated using the noisy labeled data and the noise level.
We show that our method outperforms current methods by a large margin, in terms of the average size of the prediction set.
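For context, the sketch below shows a minimal split-conformal baseline without any noise handling; the paper's contribution is a noise-robust score, estimated from the noisy labels and the noise level, which would replace the plain score used here. Function and variable names are illustrative assumptions.

```python
import numpy as np

def split_conformal_sets(probs_cal, labels_cal, probs_test, alpha=0.1):
    """Plain split-conformal prediction sets (no label-noise correction).

    probs_cal : (n, K) predicted class probabilities on a calibration set
    labels_cal: (n,)   observed calibration labels
    probs_test: (m, K) predicted class probabilities on test points
    Returns a boolean (m, K) matrix: True where a class enters the set.
    """
    n = len(labels_cal)
    # Nonconformity score: 1 - probability assigned to the observed label.
    scores = 1.0 - probs_cal[np.arange(n), labels_cal]
    # Finite-sample-corrected conformal quantile.
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    return (1.0 - probs_test) <= q

# Toy usage with synthetic probabilities.
rng = np.random.default_rng(0)
probs_cal = rng.dirichlet(np.ones(4), size=200)
labels_cal = rng.integers(0, 4, size=200)
probs_test = rng.dirichlet(np.ones(4), size=3)
print(split_conformal_sets(probs_cal, labels_cal, probs_test))
```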
arXiv Detail & Related papers (2024-05-04T12:22:02Z)
- Bounding Consideration Probabilities in Consider-Then-Choose Ranking Models [4.968566004977497]
We show that we can learn useful information about consideration probabilities despite not being able to identify them precisely.
We demonstrate our methods on a ranking dataset from a psychology experiment with two different ranking tasks.
arXiv Detail & Related papers (2024-01-19T20:27:29Z)
- PAC Prediction Sets Under Label Shift [52.30074177997787]
Prediction sets capture uncertainty by predicting sets of labels rather than individual labels.
We propose a novel algorithm for constructing prediction sets with PAC guarantees in the label shift setting.
We evaluate our approach on five datasets.
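A rough sketch of the general recipe, in the spirit of importance-weighted calibration, is given below: reweight calibration examples by the label-shift ratio w(y) = target_prior[y] / source_prior[y] and take a weighted quantile of the scores. The names are hypothetical, and this heuristic does not reproduce the paper's PAC construction or its weight estimation.

```python
import numpy as np

def weighted_quantile(scores, weights, q):
    """q-th quantile of `scores` under the (normalized) `weights`."""
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    cdf = np.cumsum(w) / np.sum(w)
    return s[min(np.searchsorted(cdf, q), len(s) - 1)]

def label_shift_prediction_sets(probs_cal, labels_cal, probs_test,
                                source_prior, target_prior, alpha=0.1):
    """Prediction sets calibrated with label-shift importance weights
    w(y) = target_prior[y] / source_prior[y] (heuristic sketch only)."""
    n = len(labels_cal)
    scores = 1.0 - probs_cal[np.arange(n), labels_cal]
    w = (np.asarray(target_prior) / np.asarray(source_prior))[labels_cal]
    tau = weighted_quantile(scores, w, 1.0 - alpha)
    return (1.0 - probs_test) <= tau
```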
arXiv Detail & Related papers (2023-10-19T17:57:57Z)
- Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning [59.44422468242455]
We propose a novel method dubbed ShrinkMatch to learn from uncertain samples.
For each uncertain sample, it adaptively seeks a shrunk class space, which merely contains the original top-1 class.
We then impose a consistency regularization between a pair of strongly and weakly augmented samples in the shrunk space to strive for discriminative representations.
arXiv Detail & Related papers (2023-08-13T14:05:24Z)
- Skeptical binary inferences in multi-label problems with sets of probabilities [0.0]
We consider the problem of making distributionally robust, skeptical inferences for the multi-label problem.
By skeptical we mean that we consider as valid only those inferences that are true for every distribution within the given set of probabilities.
We study in particular the Hamming loss case, a common loss function in multi-label problems, showing how skeptical inferences can be made in this setting.
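Under Hamming loss, skeptical inference can be reduced to label-wise comparisons of lower and upper probabilities with 1/2, abstaining when the interval straddles the threshold; the sketch below illustrates that rule (the exact reduction used in the paper may differ in details), with hypothetical inputs `p_lower`, `p_upper` obtained from a credal set.

```python
def hamming_skeptical_prediction(p_lower, p_upper):
    """Label-wise skeptical prediction from probability intervals.

    p_lower[j], p_upper[j] : lower/upper probability that label j is relevant,
    obtained from the credal set.  A label is predicted relevant only if even
    its lower probability exceeds 1/2, irrelevant only if even its upper
    probability is below 1/2, and abstained (None) otherwise.
    """
    prediction = []
    for lo, hi in zip(p_lower, p_upper):
        if lo > 0.5:
            prediction.append(1)       # relevant under every distribution
        elif hi < 0.5:
            prediction.append(0)       # irrelevant under every distribution
        else:
            prediction.append(None)    # the credal set is indecisive: abstain
    return prediction


print(hamming_skeptical_prediction([0.7, 0.2, 0.4], [0.9, 0.3, 0.6]))
# [1, 0, None]
```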
arXiv Detail & Related papers (2022-05-02T05:37:53Z)
- Resolving label uncertainty with implicit posterior models [71.62113762278963]
We propose a method for jointly inferring labels across a collection of data samples.
By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs.
arXiv Detail & Related papers (2022-02-28T18:09:44Z)
- Multi-label Chaining with Imprecise Probabilities [0.0]
We present two different strategies to extend the classical multi-label chaining approach to handle imprecise probability estimates.
The main reasons for using such estimates are (1) to make cautious predictions when high uncertainty is detected in the chaining and (2) to make better precise predictions by avoiding biases introduced by early decisions in the chain.
Our experiments on missing labels, which investigate how reliable these predictions are for both strategies, indicate that our approaches produce relevant cautiousness on hard-to-predict instances where precise models fail.
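One way to picture the cautious variant is a chain that branches whenever a link's probability interval straddles 1/2, so each label ends up with the value set {0}, {1}, or {0, 1}. The sketch below assumes a hypothetical interval-valued model `interval_clf(x, prefix)`; it illustrates the branching idea, not the paper's exact strategies.

```python
def imprecise_chain_predict(interval_clf, x, n_labels):
    """Cautious multi-label chaining with interval probability estimates.

    interval_clf(x, prefix) -> (lo, hi): probability interval for "next label
    in the chain = 1", given the instance and the partial assignment `prefix`.
    """
    prefixes = [[]]
    for _ in range(n_labels):
        extended = []
        for prefix in prefixes:
            lo, hi = interval_clf(x, prefix)
            if lo > 0.5:
                extended.append(prefix + [1])
            elif hi < 0.5:
                extended.append(prefix + [0])
            else:                              # too uncertain: keep both branches
                extended.append(prefix + [0])
                extended.append(prefix + [1])
        prefixes = extended
    # For each label, the set of values surviving in at least one branch.
    return [{p[j] for p in prefixes} for j in range(n_labels)]


# Toy interval model: fixed intervals per chain position, ignoring x and prefix.
toy = lambda x, prefix: [(0.6, 0.8), (0.4, 0.6), (0.1, 0.3)][len(prefix)]
print(imprecise_chain_predict(toy, x=None, n_labels=3))
# [{1}, {0, 1}, {0}]
```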
arXiv Detail & Related papers (2021-07-15T16:43:31Z)
- Distribution-free uncertainty quantification for classification under label shift [105.27463615756733]
We focus on uncertainty quantification (UQ) for classification problems via two avenues.
We first argue that label shift hurts UQ, by showing degradation in coverage and calibration.
We examine these techniques theoretically in a distribution-free framework and demonstrate their excellent practical performance.
arXiv Detail & Related papers (2021-03-04T20:51:03Z)
- Cautious Active Clustering [79.23797234241471]
We consider the problem of classification of points sampled from an unknown probability measure on a Euclidean space.
Our approach is to consider the unknown probability measure as a convex combination of the conditional probabilities for each class.
arXiv Detail & Related papers (2020-08-03T23:47:31Z)
- Distribution-free binary classification: prediction sets, confidence intervals and calibration [106.50279469344937]
We study three notions of uncertainty quantification -- calibration, confidence intervals and prediction sets -- for binary classification in the distribution-free setting.
We derive confidence intervals for binned probabilities for both fixed-width and uniform-mass binning.
As a consequence of our 'tripod' theorems, these confidence intervals for binned probabilities lead to distribution-free calibration.
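As an illustration of the binned-probability idea (not the paper's sharper bounds), the sketch below groups predictions into fixed-width bins and attaches a per-bin Hoeffding confidence interval to the empirical frequency of positives; the function name and interval choice are assumptions for the example.

```python
import numpy as np

def binned_calibration_intervals(probs, labels, n_bins=10, delta=0.05):
    """Fixed-width binning with a per-bin Hoeffding confidence interval
    for the frequency of positives among predictions falling in the bin."""
    bin_idx = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    report = []
    for b in range(n_bins):
        mask = bin_idx == b
        n_b = int(mask.sum())
        if n_b == 0:
            report.append((b, 0, None, None, None))
            continue
        freq = labels[mask].mean()                       # empirical positive rate
        half = np.sqrt(np.log(2.0 / delta) / (2 * n_b))  # Hoeffding half-width
        report.append((b, n_b, freq, max(0.0, freq - half), min(1.0, freq + half)))
    return report


# Toy data: uniform scores, labels sampled so the scores are calibrated.
rng = np.random.default_rng(1)
p = rng.uniform(size=2000)
y = (rng.uniform(size=2000) < p).astype(int)
for row in binned_calibration_intervals(p, y):
    print(row)
```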
arXiv Detail & Related papers (2020-06-18T14:17:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.