Empirical Validation of Conformal Prediction for Trustworthy Skin Lesions Classification
- URL: http://arxiv.org/abs/2312.07460v2
- Date: Sat, 16 Mar 2024 00:51:16 GMT
- Title: Empirical Validation of Conformal Prediction for Trustworthy Skin Lesions Classification
- Authors: Jamil Fayyad, Shadi Alijani, Homayoun Najjaran
- Abstract summary: We develop Conformal Prediction, Monte Carlo Dropout, and Evidential Deep Learning approaches to assess uncertainty quantification in deep neural networks.
Results: The experimental results demonstrate a significant enhancement in uncertainty quantification with the utilization of the Conformal Prediction method.
Our conclusion highlights a robust and consistent performance of conformal prediction across diverse testing conditions.
- Score: 3.7305040207339286
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background and objective: Uncertainty quantification is a pivotal field that contributes to realizing reliable and robust systems. It becomes instrumental in fortifying safe decisions by providing complementary information, particularly within high-risk applications. Existing studies have explored various methods that often operate under specific assumptions or necessitate substantial modifications to the network architecture to effectively account for uncertainties. The objective of this paper is to study Conformal Prediction, an emerging distribution-free uncertainty quantification technique, and to provide a comprehensive understanding of the advantages and limitations inherent in various methods within the medical imaging field.
Methods: In this study, we developed Conformal Prediction, Monte Carlo Dropout, and Evidential Deep Learning approaches to assess uncertainty quantification in deep neural networks. The effectiveness of these methods is evaluated on three public medical imaging datasets focused on detecting pigmented skin lesions and blood cell types.
Results: The experimental results demonstrate a significant enhancement in uncertainty quantification with the Conformal Prediction method, surpassing the performance of the other two methods. Furthermore, the results offer insights into the effectiveness of each uncertainty method in handling out-of-distribution samples from domain-shifted datasets. Our code is available at:
Conclusions: Our conclusion highlights the robust and consistent performance of conformal prediction across diverse testing conditions. This positions it as the preferred choice for decision-making in safety-critical applications.
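To make the conformal prediction procedure concrete, below is a minimal sketch of the standard split (inductive) conformal classifier, which builds prediction sets with a marginal coverage guarantee of at least 1 - alpha. This is an illustrative reconstruction of the generic method under the assumption of softmax scores from a trained classifier; the function and variable names are ours, not the authors' released code.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification.

    cal_probs:  (n, K) softmax scores on a held-out calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (m, K) softmax scores on test inputs
    Returns a list of prediction sets (arrays of class indices); each set
    contains the true label with probability >= 1 - alpha (marginally).
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - softmax probability of the true class.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores
    # (clipped to 1.0 so small calibration sets do not overflow).
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(cal_scores, q_level, method="higher")
    # Include every class whose nonconformity score is within the threshold.
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]
```

Larger prediction sets then signal higher predictive uncertainty, which is the behavior the paper evaluates on the skin-lesion and blood-cell datasets.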
Related papers
- Decision-Focused Uncertainty Quantification [32.93992587758183]
We develop a framework based on conformal prediction to produce prediction sets that account for a downstream decision loss function.
We present a real-world use case in healthcare diagnosis, where our method effectively incorporates the hierarchical structure of dermatological diseases.
arXiv Detail & Related papers (2024-10-02T17:22:09Z)
- Predictive uncertainty estimation in deep learning for lung carcinoma classification in digital pathology under real dataset shifts [2.309018557701645]
This paper evaluates whether predictive uncertainty estimation adds robustness to deep learning-based diagnostic decision-making systems.
We first investigate three popular methods for improving predictive uncertainty: Monte Carlo dropout, deep ensembles, and few-shot learning, applied to lung adenocarcinoma classification as a primary disease in whole slide images (a minimal Monte Carlo dropout sketch follows this list).
arXiv Detail & Related papers (2024-08-15T21:49:43Z)
- SepsisLab: Early Sepsis Prediction with Uncertainty Quantification and Active Sensing [67.8991481023825]
Sepsis is the leading cause of in-hospital mortality in the USA.
Existing predictive models are usually trained on high-quality data with little missing information.
For potential high-risk patients whose predictions carry low confidence due to limited observations, we propose a robust active sensing algorithm.
arXiv Detail & Related papers (2024-07-24T04:47:36Z)
- Assessing Uncertainty Estimation Methods for 3D Image Segmentation under Distribution Shifts [0.36832029288386137]
This paper explores the feasibility of using cutting-edge Bayesian and non-Bayesian methods to detect distributionally shifted samples.
We compare three distinct uncertainty estimation methods, each designed to capture either unimodal or multimodal aspects in the posterior distribution.
Our findings demonstrate that methods capable of addressing multimodal characteristics in the posterior distribution offer more dependable uncertainty estimates.
arXiv Detail & Related papers (2024-02-10T12:23:08Z)
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Benchmarking common uncertainty estimation methods with histopathological images under domain shift and label noise [62.997667081978825]
In high-risk environments, deep learning models need to be able to judge their uncertainty and reject inputs when there is a significant chance of misclassification.
We conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole Slide Images.
We observe that ensembles of methods generally lead to better uncertainty estimates as well as an increased robustness towards domain shifts and label noise.
arXiv Detail & Related papers (2023-01-03T11:34:36Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- Improving Trustworthiness of AI Disease Severity Rating in Medical Imaging with Ordinal Conformal Prediction Sets [0.7734726150561088]
A lack of statistically rigorous uncertainty quantification is a significant factor undermining trust in AI results.
Recent developments in distribution-free uncertainty quantification present practical solutions for these issues.
We demonstrate a technique for forming ordinal prediction sets that are guaranteed to contain the correct stenosis severity.
arXiv Detail & Related papers (2022-07-05T18:01:20Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- Conformal Inference of Counterfactuals and Individual Treatment Effects [6.810856082577402]
We propose a conformal inference-based approach that can produce reliable interval estimates for counterfactuals and individual treatment effects.
Existing methods suffer from a significant coverage deficit even in simple models.
arXiv Detail & Related papers (2020-06-11T01:03:32Z)
- Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions [48.91284724066349]
Off-policy evaluation in reinforcement learning offers the chance of using observational data to improve future outcomes in domains such as healthcare and education.
Traditional measures such as confidence intervals may be insufficient due to noise, limited data and confounding.
We develop a method that could serve as a hybrid human-AI system, to enable human experts to analyze the validity of policy evaluation estimates.
arXiv Detail & Related papers (2020-02-10T00:26:43Z)
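As referenced in the list above, the following is a minimal sketch of Monte Carlo dropout, one of the baselines these papers compare against conformal prediction. It keeps dropout stochastic at inference and averages softmax outputs over repeated forward passes; the model handling and all names here are illustrative assumptions, not code from any of the papers.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Monte Carlo dropout: keep dropout active at test time and average
    softmax outputs over n_samples stochastic forward passes.

    Returns the mean predictive distribution and the per-class standard
    deviation, a simple proxy for predictive uncertainty.
    """
    model.eval()  # freeze batch-norm statistics and other eval-mode behavior
    # Re-enable stochasticity for dropout layers only.
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)
```

A high per-class standard deviation (or high entropy of the mean distribution) flags inputs, such as out-of-distribution samples from domain-shifted datasets, on which the model's predictions are unstable.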