Label Noise Robustness of Conformal Prediction
- URL: http://arxiv.org/abs/2209.14295v3
- Date: Tue, 26 Nov 2024 07:18:47 GMT
- Title: Label Noise Robustness of Conformal Prediction
- Authors: Bat-Sheva Einbinder, Shai Feldman, Stephen Bates, Anastasios N. Angelopoulos, Asaf Gendler, Yaniv Romano
- Abstract summary: We study the robustness of conformal prediction, a powerful tool for uncertainty quantification, to label noise.
Our analysis tackles both regression and classification problems.
We extend our theory and formulate the requirements for correctly controlling a general loss function.
- Score: 24.896717715256358
- License:
- Abstract: We study the robustness of conformal prediction, a powerful tool for uncertainty quantification, to label noise. Our analysis tackles both regression and classification problems, characterizing when and how it is possible to construct uncertainty sets that correctly cover the unobserved noiseless ground truth labels. We further extend our theory and formulate the requirements for correctly controlling a general loss function, such as the false negative proportion, with noisy labels. Our theory and experiments suggest that conformal prediction and risk-controlling techniques with noisy labels attain conservative risk over the clean ground truth labels whenever the noise is dispersive and increases variability. In other adversarial cases, we can also correct for noise of bounded size in the conformal prediction algorithm in order to ensure achieving the correct risk of the ground truth labels without score or data regularity.
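The split conformal recipe that the abstract builds on can be sketched in a few lines (a minimal illustration with toy data and our own variable names, not the paper's noise-corrected algorithm):

```python
import numpy as np

def split_conformal_sets(cal_scores, test_scores, alpha=0.1):
    """Vanilla split conformal classification: calibrate a score threshold
    on held-out (possibly noisy) labeled data, then include every candidate
    label whose conformity score falls below it. Under exchangeability the
    resulting sets cover the calibration-label distribution with
    probability >= 1 - alpha.

    cal_scores:  (n,) scores s(x_i, y_i) on calibration pairs
                 (larger = worse fit, e.g. 1 - softmax probability).
    test_scores: (m, K) scores s(x, y) for each of K candidate labels.
    """
    n = len(cal_scores)
    # Finite-sample corrected quantile level.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(cal_scores, level, method="higher")
    return [np.where(row <= q_hat)[0] for row in test_scores]

rng = np.random.default_rng(0)
cal = rng.uniform(size=1000)       # toy conformity scores (noisy labels)
test = rng.uniform(size=(5, 10))   # 5 test points, 10 candidate labels
sets = split_conformal_sets(cal, test, alpha=0.1)
```

When the threshold is calibrated on noisy labels, the abstract's claim is that coverage of the clean labels stays conservative whenever the noise is dispersive, i.e. it inflates the calibration score distribution.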
Related papers
- Robust Online Conformal Prediction under Uniform Label Noise [10.059818934854038]
We investigate the robustness of online conformal prediction under uniform label noise with a known noise rate.
We propose Noise Robust Online Conformal Prediction (dubbed NR-OCP) by updating the threshold with a novel robust pinball loss.
Our theoretical analysis shows that NR-OCP eliminates the coverage gap in both constant and dynamic learning rate schedules.
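The online update that a pinball-loss scheme of this kind rests on can be sketched generically (this is the standard subgradient update on the quantile loss, not NR-OCP's robust variant; the constants and toy data are ours):

```python
import numpy as np

def online_conformal_thresholds(scores, alpha=0.1, eta=0.05, q0=0.5):
    """Online conformal prediction via subgradient descent on the
    level-(1 - alpha) pinball loss: the threshold q_t tracks the
    (1 - alpha)-quantile of the observed conformity scores. NR-OCP
    replaces this loss with a noise-robust variant; shown here is
    the standard baseline update only.
    """
    q, history = q0, []
    for s in scores:
        history.append(q)
        # Raise q when the score exceeds it, nudge it down otherwise.
        q += eta * ((1.0 if s > q else 0.0) - alpha)
    return np.array(history)

rng = np.random.default_rng(1)
qs = online_conformal_thresholds(rng.uniform(size=2000))
```

With i.i.d. Uniform(0, 1) scores the threshold oscillates around the 0.9 quantile, which is the behaviour the update is designed to produce.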
arXiv Detail & Related papers (2025-01-30T14:08:26Z)
- Noise-Adaptive Conformal Classification with Marginal Coverage [53.74125453366155]
We introduce an adaptive conformal inference method capable of efficiently handling deviations from exchangeability caused by random label noise.
We validate our method through extensive numerical experiments demonstrating its effectiveness on synthetic and real data sets.
arXiv Detail & Related papers (2025-01-29T23:55:23Z)
- Estimating the Conformal Prediction Threshold from Noisy Labels [22.841631892273547]
We show how we can estimate the noise-free conformal threshold based on the noisy labeled data.
We dub our approach Noise-Aware Conformal Prediction (NACP) and evaluate it on several natural and medical image classification datasets.
arXiv Detail & Related papers (2025-01-22T09:35:58Z)
- Uncertainty Quantification in Stereo Matching [61.73532883992135]
We propose a new framework for stereo matching and its uncertainty quantification.
We adopt Bayes risk as a measure of uncertainty and estimate data and model uncertainty separately.
We apply our uncertainty method to improve prediction accuracy by selecting data points with small uncertainties.
arXiv Detail & Related papers (2024-12-24T23:28:20Z)
- Robust Yet Efficient Conformal Prediction Sets [53.78604391939934]
Conformal prediction (CP) can convert any model's output into prediction sets guaranteed to include the true label.
We derive provably robust sets by bounding the worst-case change in conformity scores.
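A simple sufficient version of such a robust construction can be sketched as follows (a generic eps-inflation of the calibrated threshold, assuming only test-time score perturbations of size at most eps; it is looser than the paper's bound, and the names and data are ours):

```python
import numpy as np

def robust_conformal_sets(cal_scores, test_scores, alpha=0.1, eps=0.02):
    """If an adversary can shift any test-time conformity score by at
    most eps, including every label whose *observed* score is below
    q_hat + eps guarantees that the clean-score label (score <= q_hat
    with probability >= 1 - alpha) cannot be pushed out of the set.
    """
    n = len(cal_scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(cal_scores, level, method="higher")
    # Widen the inclusion rule by the worst-case perturbation.
    return [np.where(row <= q_hat + eps)[0] for row in test_scores]

rng = np.random.default_rng(2)
cal = rng.uniform(size=500)
test = rng.uniform(size=(4, 6))
robust = robust_conformal_sets(cal, test, eps=0.05)
plain = robust_conformal_sets(cal, test, eps=0.0)
```

By construction each robust set is a superset of the corresponding plain set; the price of robustness is a larger average set size.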
arXiv Detail & Related papers (2024-07-12T10:59:44Z)
- A Conformal Prediction Score that is Robust to Label Noise [13.22445242068721]
We introduce a conformal score that is robust to label noise.
The noise-free conformal score is estimated using the noisy labeled data and the noise level.
We show that our method outperforms current methods by a large margin, in terms of the average size of the prediction set.
arXiv Detail & Related papers (2024-05-04T12:22:02Z)
- A law of adversarial risk, interpolation, and label noise [6.980076213134384]
In supervised learning, it has been shown that label noise in the data can be interpolated without penalties on test accuracy under many circumstances.
We show that interpolating label noise induces adversarial vulnerability, and prove the first theorem showing the dependence of label noise and adversarial risk in terms of the data distribution.
arXiv Detail & Related papers (2022-07-08T14:34:43Z)
- Robustness and reliability when training with noisy labels [12.688634089849023]
Labelling of data for supervised learning can be costly and time-consuming.
Deep neural networks have proved capable of fitting random labels, motivating regularisation and the use of robust loss functions.
arXiv Detail & Related papers (2021-10-07T10:30:20Z)
- RATT: Leveraging Unlabeled Data to Guarantee Generalization [96.08979093738024]
We introduce a method that leverages unlabeled data to produce generalization bounds.
We prove that our bound is valid for 0-1 empirical risk minimization.
This work provides practitioners with an option for certifying the generalization of deep nets even when unseen labeled data is unavailable.
arXiv Detail & Related papers (2021-05-01T17:05:29Z)
- Distribution-free uncertainty quantification for classification under label shift [105.27463615756733]
We focus on uncertainty quantification (UQ) for classification problems via two avenues.
We first argue that label shift hurts UQ, by showing degradation in coverage and calibration.
We examine these techniques theoretically in a distribution-free framework and demonstrate their excellent practical performance.
arXiv Detail & Related papers (2021-03-04T20:51:03Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.