Confidence Aware Neural Networks for Skin Cancer Detection
- URL: http://arxiv.org/abs/2107.09118v1
- Date: Mon, 19 Jul 2021 19:21:57 GMT
- Title: Confidence Aware Neural Networks for Skin Cancer Detection
- Authors: Donya Khaledyan, AmirReza Tajally, Reza Sarkhosh, Afshar Shamsi,
Hamzeh Asgharnezhad, Abbas Khosravi, Saeid Nahavandi
- Abstract summary: We present three different methods for quantifying uncertainties for skin cancer detection from images.
The obtained results reveal that the predictive uncertainty estimation methods are capable of flagging risky and erroneous predictions.
We also demonstrate that ensemble approaches are more reliable in capturing uncertainties during inference.
- Score: 12.300911283520719
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) models have received particular attention in medical
imaging due to their promising pattern recognition capabilities. However, Deep
Neural Networks (DNNs) require a huge amount of data, and because sufficient
data are often lacking in this field, transfer learning can be a great solution.
DNNs used for disease diagnosis typically concentrate on improving prediction
accuracy without reporting how confident those predictions are. Knowing how
confident a DNN is within a computer-aided diagnosis system is necessary for
gaining clinicians' trust in DL-based solutions. To address this issue, this
work presents three different methods for quantifying uncertainties for skin
cancer detection from images. It also comprehensively evaluates and compares
the performance of these DNNs using novel uncertainty-related metrics. The
obtained results reveal that the predictive uncertainty estimation methods are
capable of flagging risky and erroneous predictions with a high uncertainty
estimate. We also demonstrate that ensemble approaches are more reliable in
capturing uncertainties during inference.
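The abstract does not name the three uncertainty quantification methods, but Monte Carlo (MC) dropout is a common member of this family and shows how predictive uncertainty can be read off an image classifier. The sketch below is illustrative only; `LesionClassifier` is a hypothetical toy network, not the paper's architecture.

```python
# Minimal MC-dropout sketch (assumed technique; the paper's exact methods
# are not named in this abstract). Dropout stays active at test time, and
# the spread of the sampled softmax outputs serves as an uncertainty score.
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):  # hypothetical toy architecture
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Dropout(p=0.5), nn.Linear(16, num_classes))

    def forward(self, x):
        return self.head(self.features(x))

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    model.train()  # keeps dropout active; safe here since there is no batch norm
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )                                             # (n_samples, batch, classes)
    mean = probs.mean(dim=0)                      # predictive distribution
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)  # total uncertainty
    return mean, entropy

model = LesionClassifier()
images = torch.randn(4, 3, 224, 224)  # stand-in for dermoscopic images
mean_probs, uncertainty = mc_dropout_predict(model, images)
```

High-entropy cases are exactly the "risky and erroneous" predictions the abstract describes flagging for review.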
Related papers
- Trust-informed Decision-Making Through An Uncertainty-Aware Stacked Neural Networks Framework: Case Study in COVID-19 Classification [10.265080819932614]
This study presents an uncertainty-aware stacked neural network model for the reliable classification of COVID-19 from radiological images.
The model addresses the critical gap in uncertainty-aware modeling by focusing on accurately identifying confidently correct predictions.
The architecture integrates uncertainty quantification methods, including Monte Carlo dropout and ensemble techniques, to enhance predictive reliability.
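As a generic illustration of the ensemble half of such an architecture (this summary does not show the authors' model), a deep ensemble averages the softmax outputs of independently trained members and treats their disagreement as an uncertainty signal; the names and toy members below are assumptions for demonstration.

```python
# Hedged deep-ensemble sketch: average member probabilities and use their
# variance as a per-sample uncertainty score. Not the authors' code.
import torch
import torch.nn as nn

def ensemble_predict(members, x):
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in members])
    mean = probs.mean(dim=0)                 # ensemble predictive distribution
    disagreement = probs.var(dim=0).sum(-1)  # higher = members disagree more
    return mean, disagreement

# Toy, untrained members purely to make the sketch runnable.
members = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2)) for _ in range(5)]
x = torch.randn(4, 3, 32, 32)
mean_probs, disagreement = ensemble_predict(members, x)
```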
arXiv Detail & Related papers (2024-09-19T04:20:12Z)
- SepsisLab: Early Sepsis Prediction with Uncertainty Quantification and Active Sensing [67.8991481023825]
Sepsis is the leading cause of in-hospital mortality in the USA.
Existing predictive models are usually trained on high-quality data with little missing information.
For potential high-risk patients whose predictions have low confidence due to limited observations, we propose a robust active sensing algorithm.
arXiv Detail & Related papers (2024-07-24T04:47:36Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which uses an uncertainty-calibrated error metric to filter reliable data.
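For readers unfamiliar with subjective logic, the generic evidential construction DEviS builds on fits in a few lines: a network outputs non-negative evidence per class, and the induced Dirichlet gives belief masses plus an explicit uncertainty mass. This is the standard formulation, not the authors' implementation.

```python
# Standard subjective-logic quantities from per-class evidence (a sketch,
# not DEviS itself): alpha_k = e_k + 1, S = sum(alpha), b_k = e_k / S, u = K / S.
import torch

def subjective_logic(evidence):
    """evidence: non-negative tensor of shape (batch, num_classes)."""
    alpha = evidence + 1.0                    # Dirichlet concentration
    strength = alpha.sum(-1, keepdim=True)    # S
    belief = evidence / strength              # belief mass per class
    uncertainty = evidence.shape[-1] / strength.squeeze(-1)  # u = K / S
    prob = alpha / strength                   # expected class probability
    return belief, uncertainty, prob

evidence = torch.relu(torch.randn(4, 3))      # one common evidence activation
belief, u, prob = subjective_logic(evidence)  # belief.sum(-1) + u == 1
```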
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- Improving Trustworthiness of AI Disease Severity Rating in Medical Imaging with Ordinal Conformal Prediction Sets [0.7734726150561088]
A lack of statistically rigorous uncertainty quantification is a significant factor undermining trust in AI results.
Recent developments in distribution-free uncertainty quantification present practical solutions for these issues.
We demonstrate a technique for forming ordinal prediction sets that are guaranteed to contain the correct stenosis severity.
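A minimal split-conformal sketch of one way such ordinal sets can be built from softmax severity probabilities; the greedy contiguous-growth construction below is an illustration under those assumptions, not necessarily the paper's exact procedure.

```python
# Split-conformal ordinal prediction sets (a sketch): grow a contiguous set
# of grades around the argmax; calibrate the mass threshold on held-out data.
import numpy as np

def expansions(p):
    """Yield (lo, hi, mass) as a contiguous grade set grows greedily."""
    lo = hi = int(np.argmax(p))
    mass = p[lo]
    yield lo, hi, mass
    while lo > 0 or hi < len(p) - 1:
        left = p[lo - 1] if lo > 0 else -1.0
        right = p[hi + 1] if hi < len(p) - 1 else -1.0
        if left >= right:
            lo -= 1
            mass += p[lo]
        else:
            hi += 1
            mass += p[hi]
        yield lo, hi, mass

def coverage_score(p, y):
    """Mass the set holds when it first covers the true grade y."""
    for lo, hi, mass in expansions(p):
        if lo <= y <= hi:
            return mass

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Conformal quantile of calibration scores for ~(1 - alpha) coverage."""
    scores = [coverage_score(p, y) for p, y in zip(cal_probs, cal_labels)]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(scores, level, method="higher"))

def prediction_set(p, q):
    """Smallest greedily grown contiguous set whose mass reaches q."""
    lo = hi = int(np.argmax(p))
    for lo, hi, mass in expansions(p):
        if mass >= q:
            break
    return lo, hi                                     # inclusive grade range

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)       # 5 hypothetical grades
cal_labels = rng.integers(0, 5, size=200)
q = conformal_threshold(cal_probs, cal_labels)
print(prediction_set(cal_probs[0], q))
```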
arXiv Detail & Related papers (2022-07-05T18:01:20Z)
- Uncertainty-Informed Deep Learning Models Enable High-Confidence Predictions for Digital Histopathology [40.96261204117952]
We train models to identify lung adenocarcinoma vs. squamous cell carcinoma and show that high-confidence predictions outperform predictions without UQ.
We show that UQ thresholding remains reliable in the setting of domain shift, with accurate high-confidence predictions of adenocarcinoma vs. squamous cell carcinoma for out-of-distribution, non-lung cancer cohorts.
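The thresholding recipe itself is simple; below is a hedged sketch (with random arrays standing in for model outputs) of how coverage and high-confidence accuracy trade off once an uncertainty cutoff is chosen on validation data.

```python
# Generic UQ-thresholding sketch, not the authors' code: abstain whenever
# estimated uncertainty exceeds tau; report accuracy on the retained cases.
import numpy as np

def threshold_predictions(pred, uncertainty, labels, tau):
    keep = uncertainty <= tau
    coverage = keep.mean()                      # fraction of cases answered
    accuracy = (pred[keep] == labels[keep]).mean() if keep.any() else float("nan")
    return coverage, accuracy

rng = np.random.default_rng(0)
pred = rng.integers(0, 2, 1000)                 # stand-in predictions
labels = rng.integers(0, 2, 1000)
uncertainty = rng.random(1000)
print(threshold_predictions(pred, uncertainty, labels, tau=0.3))
```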
arXiv Detail & Related papers (2022-04-09T17:35:37Z)
- Detecting OODs as datapoints with High Uncertainty [12.040347694782007]
Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution inputs (OODs).
This limitation is one of the key challenges in the adoption of DNNs in high-assurance systems such as autonomous driving, air traffic management, and medical diagnosis.
Several techniques have been developed to detect inputs where the model's prediction cannot be trusted.
We demonstrate the differences in the detection ability of these techniques and propose an ensemble approach for detecting OODs as datapoints with high uncertainty (epistemic or aleatoric).
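One standard way to separate the two kinds of uncertainty from stochastic forward passes is the entropy decomposition sketched below; this is the textbook mutual-information split, offered as an illustration rather than the authors' exact detector.

```python
# Total predictive entropy = aleatoric (expected entropy) + epistemic
# (mutual information), computed from ensemble or MC-dropout samples.
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    return -(p * np.log(np.clip(p, eps, 1.0))).sum(axis=axis)

def decompose(probs):
    """probs: (n_samples, batch, classes) softmax outputs."""
    total = entropy(probs.mean(axis=0))      # H[E[p]]
    aleatoric = entropy(probs).mean(axis=0)  # E[H[p]]
    epistemic = total - aleatoric            # mutual information, >= 0
    return total, aleatoric, epistemic

probs = np.random.default_rng(0).dirichlet(np.ones(5), size=(10, 4))
total, aleatoric, epistemic = decompose(probs)
# Inputs with unusually high epistemic (or total) uncertainty are OOD candidates.
```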
arXiv Detail & Related papers (2021-08-13T20:07:42Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
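A hedged sketch of that idea: on points drawn from the suspect regions, penalize the KL divergence from the label prior to the model's softmax, which pushes predictive entropy up toward the prior's. The objective below is a generic stand-in, not the paper's exact loss.

```python
# KL(prior || softmax(logits)) as an entropy-raising penalty on augmented
# points; add lam * penalty to the usual task loss (illustrative only).
import torch
import torch.nn.functional as F

def prior_entropy_penalty(logits, prior):
    """prior: (num_classes,) label marginals, e.g. uniform."""
    log_q = F.log_softmax(logits, dim=-1)
    cross_entropy = -(prior * log_q).sum(-1).mean()   # E_prior[-log q]
    prior_entropy = -(prior * prior.log()).sum()      # H(prior)
    return cross_entropy - prior_entropy              # KL(prior || q) >= 0

num_classes = 3
prior = torch.full((num_classes,), 1.0 / num_classes)
aug_logits = torch.randn(8, num_classes, requires_grad=True)  # suspect-region samples
penalty = prior_entropy_penalty(aug_logits, prior)
penalty.backward()
```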
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Objective Evaluation of Deep Uncertainty Predictions for COVID-19 Detection [15.036447340859546]
Deep neural networks (DNNs) have been widely applied for detecting COVID-19 in medical images.
Here we apply and evaluate three uncertainty quantification techniques for COVID-19 detection using chest X-Ray (CXR) images.
arXiv Detail & Related papers (2020-12-22T05:43:42Z)
- UNITE: Uncertainty-based Health Risk Prediction Leveraging Multi-sourced Data [81.00385374948125]
We present the UNcertaInTy-based hEalth risk prediction (UNITE) model.
UNITE provides accurate disease risk prediction and uncertainty estimation leveraging multi-sourced health data.
We evaluate UNITE on real-world disease risk prediction tasks: nonalcoholic steatohepatitis (NASH) and Alzheimer's disease (AD).
UNITE achieves up to 0.841 in F1 score for AD detection, up to 0.609 in PR-AUC for NASH detection, and outperforms various state-of-the-art baselines by up to 19%.
arXiv Detail & Related papers (2020-10-22T02:28:11Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
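Calibration claims of this kind are usually checked with the expected calibration error (ECE); the binned estimator below is the standard formulation, included only as a generic evaluation sketch, not the paper's protocol.

```python
# Standard binned ECE: weighted average gap between confidence and accuracy.
import numpy as np

def ece(confidences, correct, n_bins=10):
    """confidences: max softmax per sample; correct: boolean array."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    total = len(confidences)
    err = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            err += mask.sum() / total * gap
    return err

rng = np.random.default_rng(1)
conf = rng.uniform(0.5, 1.0, 2000)
correct = rng.random(2000) < conf    # synthetic, roughly calibrated data
print(round(ece(conf, correct), 3))  # small value indicates good calibration
```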
arXiv Detail & Related papers (2020-06-26T13:50:19Z)