RR-CP: Reliable-Region-Based Conformal Prediction for Trustworthy
Medical Image Classification
- URL: http://arxiv.org/abs/2309.04760v1
- Date: Sat, 9 Sep 2023 11:14:04 GMT
- Title: RR-CP: Reliable-Region-Based Conformal Prediction for Trustworthy
Medical Image Classification
- Authors: Yizhe Zhang, Shuo Wang, Yejia Zhang, Danny Z. Chen
- Abstract summary: Conformal prediction (CP) generates a set of predictions for a given test sample.
The size of the set indicates how certain the predictions are.
We propose a new method called Reliable-Region-Based Conformal Prediction (RR-CP).
- Score: 24.52922162675259
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conformal prediction (CP) generates a set of predictions for a given test
sample such that the prediction set almost always contains the true label
(e.g., 99.5% of the time). CP provides comprehensive predictions on possible
labels of a given test sample, and the size of the set indicates how certain
the predictions are (e.g., a set larger than one is 'uncertain'). Such distinct
properties of CP enable effective collaboration between human experts and
medical AI models, allowing efficient intervention and quality checks in
clinical decision-making. In this paper, we propose a new method called
Reliable-Region-Based Conformal Prediction (RR-CP), which aims to impose a
stronger statistical guarantee so that the user-specified error rate (e.g.,
0.5%) can be achieved at test time, and, under this constraint, the size of
the prediction set is optimized (to be small). We consider a small prediction
set size an important measure only when the user-specified error rate is
achieved. Experiments on five public datasets show that our RR-CP performs
well: with a reasonably small-sized prediction set, it achieves the
user-specified error rate (e.g., 0.5%) significantly more frequently than
existing CP methods.
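As background for the abstract above, the following is a minimal sketch of standard split conformal prediction with a user-specified error rate, i.e., the baseline construction that RR-CP aims to strengthen; the reliable-region mechanism itself is not detailed in the abstract and is not shown here. All function names and the toy data are illustrative, not from the paper.

```python
# Minimal sketch of split conformal prediction (the baseline RR-CP builds on).
# Assumed/illustrative: function names, toy Dirichlet "softmax" outputs.
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.005):
    """Calibrate a score threshold on held-out data for error rate alpha."""
    n = len(cal_labels)
    # Nonconformity score: 1 - softmax probability assigned to the true label.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level for coverage 1 - alpha.
    q_level = min(np.ceil((n + 1) * (1.0 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def prediction_set(test_probs, threshold):
    """All labels whose score is within the threshold; size > 1 = 'uncertain'."""
    return np.where(1.0 - test_probs <= threshold)[0]

# Toy usage: calibrate at a 0.5% error rate, then form a prediction set.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=1000)   # fake softmax outputs
cal_labels = rng.integers(0, 5, size=1000)
tau = calibrate_threshold(cal_probs, cal_labels, alpha=0.005)
print(prediction_set(rng.dirichlet(np.ones(5)), tau))
```

This standard construction guarantees coverage only on average over calibration draws; per the abstract, RR-CP targets the stronger requirement that the user-specified error rate is actually achieved at test time, with the set size minimized under that constraint.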
Related papers
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification.
Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data.
We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z) - On Temperature Scaling and Conformal Prediction of Deep Classifiers [9.975341265604577]
Two popular approaches for that aim are: 1): modifies the classifier's softmax values such that the maximal value better estimates the correctness probability; and 2) Conformal Prediction (CP): produces a prediction set of candidate labels that contains the true label with a user-specified probability.
In practice, both types of indications are desirable, yet, so far the interplay between them has not been investigated.
arXiv Detail & Related papers (2024-02-08T16:45:12Z) - PAC Prediction Sets Under Label Shift [52.30074177997787]
Prediction sets capture uncertainty by predicting sets of labels rather than individual labels.
We propose a novel algorithm for constructing prediction sets with PAC guarantees in the label shift setting.
We evaluate our approach on five datasets.
arXiv Detail & Related papers (2023-10-19T17:57:57Z) - Conformal Prediction for Deep Classifier via Label Ranking [29.784336674173616]
Conformal prediction is a statistical framework that generates prediction sets with a desired coverage guarantee.
We propose a novel algorithm named Sorted Adaptive Prediction Sets (SAPS).
SAPS discards all the probability values except for the maximum softmax probability.
arXiv Detail & Related papers (2023-10-10T08:54:14Z) - Probabilistically robust conformal prediction [9.401004747930974]
Conformal prediction (CP) is a framework to quantify uncertainty of machine learning classifiers including deep neural networks.
Almost all existing work on CP assumes clean test data, and little is known about the robustness of CP algorithms.
This paper studies the problem of probabilistically robust conformal prediction (PRCP) which ensures robustness to most perturbations.
arXiv Detail & Related papers (2023-07-31T01:32:06Z) - Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
arXiv Detail & Related papers (2023-06-08T07:05:36Z) - A Confidence Machine for Sparse High-Order Interaction Model [16.780058676633914]
Conformal prediction (CP) is a promising approach for obtaining the confidence of prediction results with fewer theoretical assumptions.
We develop a full-CP method for the sparse high-order interaction model (SHIM), which is sufficiently flexible as it can take into account high-order interactions among variables.
arXiv Detail & Related papers (2022-05-28T03:23:56Z) - Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold (see the sketch after this list).
arXiv Detail & Related papers (2022-01-11T23:01:12Z) - Learning Optimal Conformal Classifiers [32.68483191509137]
Conformal prediction (CP) is used to predict confidence sets containing the true class with a user-specified probability.
This paper explores strategies to differentiate through CP during training, with the goal of training the model end-to-end with the conformal wrapper.
We show that conformal training (ConfTr) outperforms state-of-the-art CP methods for classification by reducing the average confidence set size.
arXiv Detail & Related papers (2021-10-18T11:25:33Z) - Distribution-Free, Risk-Controlling Prediction Sets [112.9186453405701]
We show how to generate set-valued predictions from a black-box predictor that control the expected loss on future test points at a user-specified level.
Our approach provides explicit finite-sample guarantees for any dataset by using a holdout set to calibrate the size of the prediction sets.
arXiv Detail & Related papers (2021-01-07T18:59:33Z) - AutoCP: Automated Pipelines for Accurate Prediction Intervals [84.16181066107984]
This paper proposes an AutoML framework called Automatic Machine Learning for Conformal Prediction (AutoCP)
Unlike the familiar AutoML frameworks that attempt to select the best prediction model, AutoCP constructs prediction intervals that achieve the user-specified target coverage rate.
We tested AutoCP on a variety of datasets and found that it significantly outperforms benchmark algorithms.
arXiv Detail & Related papers (2020-06-24T23:13:11Z)