Conformal Prediction is Robust to Dispersive Label Noise
- URL: http://arxiv.org/abs/2209.14295v2
- Date: Tue, 19 Sep 2023 18:50:28 GMT
- Title: Conformal Prediction is Robust to Dispersive Label Noise
- Authors: Shai Feldman, Bat-Sheva Einbinder, Stephen Bates, Anastasios N. Angelopoulos, Asaf Gendler, Yaniv Romano
- Abstract summary: We study the robustness of conformal prediction, a powerful tool for uncertainty quantification, to label noise.
Our theory and experiments suggest that conformal prediction and risk-controlling techniques with noisy labels attain conservative risk.
- Score: 26.380955990028294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the robustness of conformal prediction, a powerful tool for
uncertainty quantification, to label noise. Our analysis tackles both
regression and classification problems, characterizing when and how it is
possible to construct uncertainty sets that correctly cover the unobserved
noiseless ground truth labels. We further extend our theory and formulate the
requirements for correctly controlling a general loss function, such as the
false negative proportion, with noisy labels. Our theory and experiments
suggest that conformal prediction and risk-controlling techniques with noisy
labels attain conservative risk over the clean ground truth labels except in
adversarial cases. In such cases, we can also correct for noise of bounded size
within the conformal prediction algorithm so that the correct risk over the
ground truth labels is achieved, without requiring score or data regularity.
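As a rough illustration of this claim, the following minimal sketch calibrates split conformal regression on labels corrupted by dispersive (independent, additive) noise and checks coverage against the clean labels. The linear model, noise level, and synthetic data are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: split conformal regression calibrated on noisy labels.
# Illustrative assumptions (not the paper's setup): a linear model, additive
# Gaussian "dispersive" label noise of scale sigma_noise, 90% target coverage.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_cal, n_test, alpha = 2000, 1000, 1000, 0.1

def sample(n):
    x = rng.uniform(-1, 1, size=(n, 1))
    y = 3 * x[:, 0] + rng.normal(0, 0.5, size=n)
    return x, y

x_tr, y_tr = sample(n_train)
x_cal, y_cal = sample(n_cal)
x_te, y_te = sample(n_test)

# Dispersive label noise: independent additive corruption that spreads the
# calibration labels away from the regression function.
sigma_noise = 1.0
y_cal_noisy = y_cal + rng.normal(0, sigma_noise, size=n_cal)

# Fit ordinary least squares on the training split.
beta = np.linalg.lstsq(np.c_[np.ones(n_train), x_tr], y_tr, rcond=None)[0]
predict = lambda x: beta[0] + beta[1] * x[:, 0]

# Split conformal: absolute-residual scores on the *noisy* calibration
# labels, with the usual finite-sample quantile correction.
scores = np.abs(y_cal_noisy - predict(x_cal))
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(scores, q_level, method="higher")

# Evaluate coverage against the *clean* test labels: the noisy residuals are
# stochastically larger, so qhat is inflated and coverage is conservative.
covered = np.abs(y_te - predict(x_te)) <= qhat
print(f"clean-label coverage: {covered.mean():.3f} (target {1 - alpha:.2f})")
```

Because the noise spreads calibration labels away from the regression function, the calibrated threshold is inflated and clean-label coverage typically lands above the nominal level, matching the conservative behavior described in the abstract.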
Related papers
- Robust Yet Efficient Conformal Prediction Sets [53.78604391939934]
Conformal prediction (CP) can convert any model's output into prediction sets guaranteed to include the true label.
We derive provably robust sets by bounding the worst-case change in conformity scores.
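A hedged sketch of the threshold-inflation idea behind such robust sets (not the paper's exact construction): if a perturbation can change any conformity score by at most eps, admitting every label whose score is within eps of the calibrated threshold keeps the worst-case set valid.

```python
# Hedged sketch of threshold inflation (not the paper's exact construction):
# if a perturbation can shift any conformity score by at most eps, admitting
# every label with score - eps <= qhat keeps the worst-case set valid.
import numpy as np

def robust_prediction_set(label_scores, qhat, eps):
    """Labels whose score could fall below the calibrated threshold qhat
    under a worst-case score perturbation of magnitude eps."""
    return [y for y, s in enumerate(label_scores) if s - eps <= qhat]

# Per-label conformity scores for one test point (lower = more conforming).
scores = np.array([0.05, 0.40, 0.90, 1.30])
print(robust_prediction_set(scores, qhat=0.5, eps=0.2))  # -> [0, 1]
```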
arXiv Detail & Related papers (2024-07-12T10:59:44Z)
- Efficient Online Set-valued Classification with Bandit Feedback [10.882001129426726]
We propose Bandit Class-specific Conformal Prediction (BCCP), offering coverage guarantees on a class-specific granularity.
BCCP overcomes the challenges of sparsely labeled data in each iteration and generalizes the reliability and applicability of conformal prediction to online decision-making environments.
arXiv Detail & Related papers (2024-05-07T15:14:51Z)
- A Conformal Prediction Score that is Robust to Label Noise [13.22445242068721]
We introduce a conformal score that is robust to label noise.
The noise-free conformal score is estimated using the noisy labeled data and the noise level.
We show that our method outperforms current methods by a large margin, in terms of the average size of the prediction set.
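A hedged sketch of the general idea (the estimator and names are illustrative, not the paper's exact method): given a known symmetric flip rate, replace the conformity score at the noisy label with its posterior expectation over the clean label.

```python
# Hedged sketch (illustrative, not the paper's exact estimator): under
# symmetric label noise with known flip rate eps over K classes, replace the
# conformity score at the noisy label with its posterior expectation over
# the clean label, using the model's class probabilities as a prior.
import numpy as np

def denoised_score(probs, noisy_label, eps):
    """probs: model class probabilities for one point, shape (K,);
    score is s(x, y) = 1 - probs[y]; eps is the symmetric flip rate."""
    K = len(probs)
    # Likelihood P(noisy_label | clean = y): 1 - eps on the diagonal,
    # eps / (K - 1) for every other clean class.
    lik = np.full(K, eps / (K - 1))
    lik[noisy_label] = 1 - eps
    post = lik * probs
    post /= post.sum()                # P(clean = y | noisy_label, x)
    return float(post @ (1 - probs))  # E[s(x, clean) | noisy_label]

probs = np.array([0.7, 0.2, 0.1])
print(denoised_score(probs, noisy_label=1, eps=0.2))
```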
arXiv Detail & Related papers (2024-05-04T12:22:02Z)
- Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data [70.25049762295193]
We introduce a novel conditional image generation framework that accepts noisy-labeled and uncurated data during training.
We propose soft curriculum learning, which assigns instance-wise weights for adversarial training while assigning new labels for unlabeled data.
Our experiments show that our approach outperforms existing semi-supervised and label-noise robust methods in terms of both quantitative and qualitative performance.
arXiv Detail & Related papers (2023-07-17T08:31:59Z)
- When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice.
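A minimal sketch of confidence-based deferral in a two-model cascade, with an illustrative threshold and stand-in models; note the rule inspects only the first model's confidence, never the downstream model, matching the "oblivious" behavior noted above.

```python
# Minimal sketch of confidence-based deferral in a two-model cascade; the
# threshold tau and the stand-in models are illustrative assumptions.
import numpy as np

def cascade_predict(x, cheap_probs, expensive_probs, tau=0.8):
    p = cheap_probs(x)
    if p.max() >= tau:            # confident enough: answer cheaply
        return int(p.argmax()), "cheap"
    p = expensive_probs(x)        # otherwise defer to the next model
    return int(p.argmax()), "expensive"

cheap = lambda x: np.array([0.55, 0.45])     # low confidence -> defer
expensive = lambda x: np.array([0.10, 0.90])
print(cascade_predict(None, cheap, expensive))  # -> (1, 'expensive')
```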
arXiv Detail & Related papers (2023-07-06T04:13:57Z)
- A law of adversarial risk, interpolation, and label noise [6.980076213134384]
In supervised learning, it has been shown that label noise in the data can be interpolated without penalties on test accuracy under many circumstances.
We show that interpolating label noise induces adversarial vulnerability, and prove the first theorem showing the dependence of label noise and adversarial risk in terms of the data distribution.
arXiv Detail & Related papers (2022-07-08T14:34:43Z)
- Two Wrongs Don't Make a Right: Combating Confirmation Bias in Learning with Label Noise [6.303101074386922]
Robust Label Refurbishment (Robust LR) is a new hybrid method that integrates pseudo-labeling and confidence estimation techniques to refurbish noisy labels.
We show that our method successfully alleviates the damage of both label noise and confirmation bias.
For example, Robust LR achieves up to 4.5% absolute top-1 accuracy improvement over the previous best on the real-world noisy dataset WebVision.
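A hedged sketch of the refurbishment idea (the threshold and rule are illustrative, not Robust LR's exact procedure): when the model is confident and disagrees with the given label, swap in its own prediction.

```python
# Hedged sketch of label refurbishment (threshold and rule are illustrative,
# not Robust LR's exact procedure): when the model is confident and disagrees
# with the given label, replace the label with the model's prediction.
import numpy as np

def refurbish(labels, probs, tau=0.9):
    """labels: (n,) noisy labels; probs: (n, K) model class probabilities."""
    pred = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    out = labels.copy()
    swap = (conf >= tau) & (pred != labels)
    out[swap] = pred[swap]
    return out

labels = np.array([0, 1, 2])
probs = np.array([[0.95, 0.03, 0.02],   # confident and agrees -> keep
                  [0.97, 0.02, 0.01],   # confident but disagrees -> refurbish
                  [0.40, 0.35, 0.25]])  # unconfident -> keep
print(refurbish(labels, probs))  # -> [0 0 2]
```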
arXiv Detail & Related papers (2021-12-06T12:10:17Z)
- Robustness and reliability when training with noisy labels [12.688634089849023]
Labelling of data for supervised learning can be costly and time-consuming.
Deep neural networks have proved capable of fitting random labels; regularisation and the use of robust loss functions are studied as ways to mitigate overfitting to label noise.
arXiv Detail & Related papers (2021-10-07T10:30:20Z)
- RATT: Leveraging Unlabeled Data to Guarantee Generalization [96.08979093738024]
We introduce a method that leverages unlabeled data to produce generalization bounds.
We prove that our bound is valid for 0-1 empirical risk minimization.
This work provides practitioners with an option for certifying the generalization of deep nets even when unseen labeled data is unavailable.
arXiv Detail & Related papers (2021-05-01T17:05:29Z)
- Distribution-free uncertainty quantification for classification under label shift [105.27463615756733]
We focus on uncertainty quantification (UQ) for classification problems via two avenues.
We first argue that label shift hurts UQ, by showing degradation in coverage and calibration.
We examine these techniques theoretically in a distribution-free framework and demonstrate their excellent practical performance.
arXiv Detail & Related papers (2021-03-04T20:51:03Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
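A toy sketch of what instance-dependent noise means (the flip-probability function is purely illustrative): the chance of mislabeling depends on the instance itself, not just on its class.

```python
# Toy sketch of instance-dependent label noise (the flip-probability function
# is purely illustrative): the chance of mislabeling depends on the instance
# x itself rather than only on its class.
import numpy as np

rng = np.random.default_rng(0)

def flip_prob(x):
    # "Harder" instances (larger feature value) are mislabeled more often.
    return 1 / (1 + np.exp(-(x - 1.5)))

def corrupt(y, x, n_classes=3):
    noisy = y.copy()
    flips = rng.random(len(y)) < flip_prob(x)
    # Flip to a uniformly random *other* class.
    noisy[flips] = (y[flips] + rng.integers(1, n_classes, flips.sum())) % n_classes
    return noisy

x = rng.normal(0.0, 1.0, 1000)
y = rng.integers(0, 3, 1000)
print("average flip rate:", (corrupt(y, x) != y).mean())
```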
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.