Joint Dermatological Lesion Classification and Confidence Modeling with
Uncertainty Estimation
- URL: http://arxiv.org/abs/2107.08770v1
- Date: Mon, 19 Jul 2021 11:54:37 GMT
- Title: Joint Dermatological Lesion Classification and Confidence Modeling with
Uncertainty Estimation
- Authors: Gun-Hee Lee, Han-Bin Ko, Seong-Whan Lee
- Abstract summary: We propose a framework that jointly performs dermatological lesion classification and uncertainty estimation.
A confidence network estimates the confidence of each feature, and these estimates are pooled to suppress uncertain features and undesirable shifts.
We demonstrate the potential of the proposed approach on two widely used dermoscopic benchmark datasets.
- Score: 23.817227116949958
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has played a major role in the interpretation of dermoscopic
images for detecting skin defects and abnormalities. However, current deep
learning solutions for dermatological lesion analysis are typically limited in
providing probabilistic predictions, which highlights the importance of
accounting for uncertainty. The concept of uncertainty provides a confidence
level for each feature, which prevents overconfident predictions with poor
generalization on unseen data. In this paper, we propose an overall framework
that jointly performs dermatological classification and uncertainty estimation.
The confidence of each feature in the latent space is estimated by a confidence
network and pooled, so that uncertain features and the undesirable shifts caused
by environmental differences in the input image are suppressed. Our qualitative
results show that modeling uncertainty not only quantifies model confidence for
each prediction but also helps the classification layers focus on confident
features, thereby improving the accuracy of dermatological lesion
classification. We demonstrate the potential of the proposed approach on two
widely used dermoscopic benchmark datasets (ISIC 2018 and ISIC 2019).
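The paper's exact architecture is not given in this listing; the following is a minimal NumPy sketch of the general idea it describes, where a confidence head scores each latent feature and the classifier pools features weighted by those scores. All function and parameter names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def confidence_weighted_pooling(features, w_conf, b_conf):
    """Pool a set of latent features by per-feature confidence scores.

    features : (n_features, d) latent feature vectors
    w_conf, b_conf : parameters of a hypothetical linear confidence head
    Returns the confidence-weighted pooled feature and the raw scores.
    """
    # Confidence head: one scalar score per feature vector, squashed to (0, 1).
    logits = features @ w_conf + b_conf            # (n_features,)
    conf = 1.0 / (1.0 + np.exp(-logits))           # sigmoid
    # Normalise the scores into pooling weights, then pool: low-confidence
    # (uncertain or shifted) features contribute less to the pooled vector.
    weights = conf / conf.sum()
    pooled = weights @ features                    # (d,)
    return pooled, conf

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))   # 8 latent features of dimension 16
w = rng.normal(size=16)
pooled, conf = confidence_weighted_pooling(feats, w, 0.0)
```

In a trained model the confidence head would be learned jointly with the classifier; here the weights are random purely to show the shapes involved.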
Related papers
- Interpretability of Uncertainty: Exploring Cortical Lesion Segmentation in Multiple Sclerosis [33.91263917157504]
Uncertainty quantification (UQ) has become critical for evaluating the reliability of artificial intelligence systems.
This study addresses the interpretability of instance-wise uncertainty values in deep learning models for focal lesion segmentation in magnetic resonance imaging.
arXiv Detail & Related papers (2024-07-08T09:13:30Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- Beyond Voxel Prediction Uncertainty: Identifying brain lesions you can trust [1.1199585259018459]
Deep neural networks have become the gold-standard approach for the automated segmentation of 3D medical images.
In this work, we propose to go beyond voxel-wise assessment using an innovative Graph Neural Network approach.
This network allows the fusion of three estimators of voxel uncertainty: entropy, variance, and model's confidence.
arXiv Detail & Related papers (2022-09-22T09:20:05Z)
- Improving Trustworthiness of AI Disease Severity Rating in Medical Imaging with Ordinal Conformal Prediction Sets [0.7734726150561088]
A lack of statistically rigorous uncertainty quantification is a significant factor undermining trust in AI results.
Recent developments in distribution-free uncertainty quantification present practical solutions for these issues.
We demonstrate a technique for forming ordinal prediction sets that are guaranteed to contain the correct stenosis severity.
arXiv Detail & Related papers (2022-07-05T18:01:20Z)
- Uncertainty-Informed Deep Learning Models Enable High-Confidence Predictions for Digital Histopathology [40.96261204117952]
We train models to identify lung adenocarcinoma vs. squamous cell carcinoma and show that high-confidence predictions outperform predictions without UQ.
We show that UQ thresholding remains reliable in the setting of domain shift, with accurate high-confidence predictions of adenocarcinoma vs. squamous cell carcinoma for out-of-distribution, non-lung cancer cohorts.
arXiv Detail & Related papers (2022-04-09T17:35:37Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- Approaching Neural Network Uncertainty Realism [53.308409014122816]
Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
arXiv Detail & Related papers (2021-01-08T11:56:12Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- Integrating uncertainty in deep neural networks for MRI based stroke analysis [0.0]
We present a Bayesian Convolutional Neural Network (CNN) yielding a probability for a stroke lesion on 2D Magnetic Resonance (MR) images.
In a cohort of 511 patients, our CNN achieved an accuracy of 95.33% at the image-level representing a significant improvement of 2% over a non-Bayesian counterpart.
arXiv Detail & Related papers (2020-08-13T09:50:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.