Adaptive conformal classification with noisy labels
- URL: http://arxiv.org/abs/2309.05092v2
- Date: Thu, 22 Feb 2024 01:33:11 GMT
- Title: Adaptive conformal classification with noisy labels
- Authors: Matteo Sesia, Y. X. Rachel Wang, Xin Tong
- Abstract summary: The paper develops novel conformal prediction methods for classification tasks that can automatically adapt to random label contamination in the calibration sample.
This is made possible by a precise characterization of the effective coverage inflation suffered by standard conformal inferences in the presence of label contamination.
The advantages of the proposed methods are demonstrated through extensive simulations and an application to object classification with the CIFAR-10H image data set.
- Score: 22.33857704379073
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper develops novel conformal prediction methods for classification
tasks that can automatically adapt to random label contamination in the
calibration sample, leading to more informative prediction sets with stronger
coverage guarantees compared to state-of-the-art approaches. This is made
possible by a precise characterization of the effective coverage inflation (or
deflation) suffered by standard conformal inferences in the presence of label
contamination, which is then made actionable through new calibration
algorithms. Our solution is flexible and can leverage different modeling
assumptions about the label contamination process, while requiring no knowledge
of the underlying data distribution or of the inner workings of the
machine-learning classifier. The advantages of the proposed methods are
demonstrated through extensive simulations and an application to object
classification with the CIFAR-10H image data set.
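For concreteness, below is a minimal sketch of standard split-conformal classification, the baseline whose coverage the paper shows is inflated (or deflated) when the calibration labels are contaminated. The function name and arguments are illustrative assumptions, not the paper's API; the paper's contribution is, roughly, a principled replacement for the naive calibration step under an assumed label-contamination model.

```python
import numpy as np

def split_conformal_sets(probs_cal, y_cal, probs_test, alpha=0.1):
    """Split-conformal prediction sets for classification (illustrative sketch).

    probs_cal  : (n, K) predicted class probabilities on the calibration sample
    y_cal      : (n,)   calibration labels, possibly contaminated by noise
    probs_test : (m, K) predicted class probabilities on test points
    alpha      : nominal miscoverage level (target coverage is 1 - alpha)
    """
    n = len(y_cal)
    # Non-conformity score: one minus the predicted probability of the label.
    scores = 1.0 - probs_cal[np.arange(n), y_cal]
    # Finite-sample-corrected empirical quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1.0 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, level, method="higher")
    # A label enters the prediction set if its score is below the threshold.
    return [np.flatnonzero(1.0 - p <= q_hat) for p in probs_test]
```

When `y_cal` is contaminated, the threshold `q_hat` is calibrated against the wrong label distribution, so the realized coverage of the clean test labels drifts away from 1 - alpha; the paper quantifies this drift and replaces the naive calibration step with a contamination-aware rule.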
Related papers
- Sparse Activations as Conformal Predictors [19.298282860984116]
We find a novel connection between conformal prediction and sparse softmax-like transformations.
We introduce new non-conformity scores for classification that make the calibration process correspond to the widely used temperature scaling method.
We show that the proposed method achieves competitive results in terms of coverage, efficiency, and adaptiveness.
arXiv Detail & Related papers (2025-02-20T17:53:41Z)
- Noise-Adaptive Conformal Classification with Marginal Coverage [53.74125453366155]
We introduce an adaptive conformal inference method capable of efficiently handling deviations from exchangeability caused by random label noise.
We validate our method through extensive numerical experiments demonstrating its effectiveness on synthetic and real data sets.
arXiv Detail & Related papers (2025-01-29T23:55:23Z)
- Estimating the Conformal Prediction Threshold from Noisy Labels [22.841631892273547]
We show how to estimate the noise-free conformal threshold from noisily labeled data.
We dub our approach Noise-Aware Conformal Prediction (NACP) and demonstrate it on several natural and medical image classification datasets.
arXiv Detail & Related papers (2025-01-22T09:35:58Z)
- Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z)
- Approximate Conditional Coverage via Neural Model Approximations [0.030458514384586396]
We analyze a data-driven procedure for obtaining empirically reliable approximate conditional coverage.
We demonstrate the potential for substantial (and otherwise unknowable) under-coverage of split-conformal alternatives that provide only marginal coverage guarantees.
arXiv Detail & Related papers (2022-05-28T02:59:05Z)
- Self-Certifying Classification by Linearized Deep Assignment [65.0100925582087]
We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
arXiv Detail & Related papers (2022-01-26T19:59:14Z)
- When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
arXiv Detail & Related papers (2021-09-28T02:55:42Z)
- Distribution-free uncertainty quantification for classification under label shift [105.27463615756733]
We focus on uncertainty quantification (UQ) for classification problems via two avenues.
We first argue that label shift hurts UQ by showing degradation in coverage and calibration.
We examine these techniques theoretically in a distribution-free framework and demonstrate their excellent practical performance.
arXiv Detail & Related papers (2021-03-04T20:51:03Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.