Validating uncertainty in medical image translation
- URL: http://arxiv.org/abs/2002.04639v1
- Date: Tue, 11 Feb 2020 19:06:54 GMT
- Title: Validating uncertainty in medical image translation
- Authors: Jacob C. Reinhold, Yufan He, Shizhong Han, Yunqiang Chen, Dashan Gao,
Junghoon Lee, Jerry L. Prince, Aaron Carass
- Abstract summary: We investigate using dropout to estimate uncertainty in a CT-to-MR image translation task.
We show that both types of uncertainty are captured, as defined, providing confidence in the output uncertainty estimates.
- Score: 7.565565370757736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical images are increasingly used as input to deep neural networks to
produce quantitative values that aid researchers and clinicians. However,
standard deep neural networks do not provide a reliable measure of uncertainty
in those quantitative values. Recent work has shown that using dropout during
training and testing can provide estimates of uncertainty. In this work, we
investigate using dropout to estimate epistemic and aleatoric uncertainty in a
CT-to-MR image translation task. We show that both types of uncertainty are
captured, as defined, providing confidence in the output uncertainty estimates.
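
The recipe described in the abstract is the standard Monte Carlo dropout setup: keep dropout active at test time, run repeated stochastic forward passes, take the variance of the predicted means as epistemic uncertainty, and let a second output head predict the aleatoric (data) variance. The PyTorch sketch below illustrates the idea; the architecture, layer widths, dropout rate, and sample count are illustrative placeholders, not the authors' actual model.

```python
import torch
import torch.nn as nn

class TranslationNet(nn.Module):
    """Toy CT-to-MR translation net; the dropout layers provide the MC samples."""
    def __init__(self, channels=32, p_drop=0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),
        )
        self.mean_head = nn.Conv2d(channels, 1, 3, padding=1)    # predicted MR intensity
        self.logvar_head = nn.Conv2d(channels, 1, 3, padding=1)  # predicted log-variance

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

@torch.no_grad()
def mc_dropout_predict(model, ct, n_samples=20):
    """Stochastic forward passes with dropout kept active at test time."""
    model.train()  # keeps dropout sampling; safe here since there is no batch norm
    samples = [model(ct) for _ in range(n_samples)]
    means = torch.stack([m for m, _ in samples])
    logvars = torch.stack([lv for _, lv in samples])
    epistemic = means.var(dim=0)           # spread of predictions across passes
    aleatoric = logvars.exp().mean(dim=0)  # average predicted noise variance
    return means.mean(dim=0), epistemic, aleatoric

ct_slice = torch.randn(1, 1, 64, 64)  # synthetic stand-in for a CT slice
mr_pred, epi, alea = mc_dropout_predict(TranslationNet(), ct_slice)
```

During training, the two heads would typically be fit with the heteroscedastic Gaussian negative log-likelihood, 0.5 * exp(-logvar) * (y - mean)^2 + 0.5 * logvar, so the network learns to report its own noise level.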
Related papers
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Uncertainty in Natural Language Processing: Sources, Quantification, and Applications [56.130945359053776]
We provide a comprehensive review of uncertainty-relevant works in the NLP field.
We first categorize the sources of uncertainty in natural language into three types: input, system, and output.
We discuss the challenges of uncertainty estimation in NLP and outline potential future directions.
arXiv Detail & Related papers (2023-06-05T06:46:53Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which uses the uncertainty-calibrated error metric to select reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
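
The subjective-logic modeling that the DEviS summary mentions can be illustrated with the generic Dirichlet-based evidential formulation: non-negative per-class evidence defines a Dirichlet distribution whose total strength yields a closed-form uncertainty mass. The sketch below follows that standard recipe and is not necessarily the paper's exact parameterization; function and variable names are invented for illustration.

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits):
    """Generic subjective-logic uncertainty from per-voxel class logits.

    logits: (n_classes, *spatial). Standard Dirichlet-based evidential
    formulation, not necessarily DEviS's exact parameterization.
    """
    evidence = F.softplus(logits)             # non-negative evidence per class
    alpha = evidence + 1.0                    # Dirichlet concentration parameters
    strength = alpha.sum(dim=0)               # total evidence S
    prob = alpha / strength                   # expected class probabilities
    uncertainty = logits.shape[0] / strength  # u = K / S: high when evidence is scarce
    return prob, uncertainty

logits = torch.randn(4, 64, 64)  # 4 tissue classes on a 64x64 slice
prob, u = evidential_uncertainty(logits)
```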
- Disentangled Uncertainty and Out of Distribution Detection in Medical Generative Models [7.6146285961466]
We study disentangled uncertainties in image-to-image translation tasks in the medical domain.
We use CycleGAN to convert T1-weighted brain MRI scans to T2-weighted brain MRI scans.
arXiv Detail & Related papers (2022-11-11T14:45:16Z)
- Beyond Voxel Prediction Uncertainty: Identifying brain lesions you can trust [1.1199585259018459]
Deep neural networks have become the gold-standard approach for the automated segmentation of 3D medical images.
In this work, we propose to go beyond voxel-wise assessment using an innovative Graph Neural Network approach.
This network allows the fusion of three estimators of voxel uncertainty: entropy, variance, and model's confidence.
arXiv Detail & Related papers (2022-09-22T09:20:05Z)
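
The three voxel-level estimators fused by the graph network (entropy, variance, and the model's confidence) can all be computed from a set of stochastic softmax outputs, such as MC dropout samples. A small sketch under that assumption, with the fusion GNN itself omitted:

```python
import torch

def voxel_uncertainties(probs, eps=1e-8):
    """probs: (n_samples, n_classes, *spatial) stochastic softmax outputs,
    e.g. from MC dropout. Returns the three per-voxel estimators the paper fuses."""
    mean_p = probs.mean(dim=0)
    entropy = -(mean_p * (mean_p + eps).log()).sum(dim=0)  # predictive entropy
    variance = probs.var(dim=0).mean(dim=0)                # sample variance over passes
    confidence = mean_p.max(dim=0).values                  # model's confidence
    return entropy, variance, confidence

probs = torch.softmax(torch.randn(10, 2, 32, 32, 32), dim=1)  # 10 samples, binary, 3D
H, V, C = voxel_uncertainties(probs)
```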
- Bayesian autoencoders with uncertainty quantification: Towards trustworthy anomaly detection [78.24964622317634]
In this work, the formulation of Bayesian autoencoders (BAEs) is adopted to quantify the total anomaly uncertainty.
To evaluate the quality of uncertainty, we consider the task of classifying anomalies with the additional option of rejecting predictions of high uncertainty.
Our experiments demonstrate the effectiveness of the BAE and total anomaly uncertainty on a set of benchmark datasets and two real datasets for manufacturing.
arXiv Detail & Related papers (2022-02-25T12:20:04Z)
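
The evaluation protocol described here, classifying anomalies while rejecting high-uncertainty predictions, can be expressed generically: abstain on the most-uncertain fraction of cases and score the rest. The helper below is a hypothetical illustration, not code from the paper.

```python
import numpy as np

def accuracy_with_rejection(y_true, y_pred, uncertainty, reject_frac=0.1):
    """Score predictions after abstaining on the most-uncertain fraction.
    Hypothetical helper for illustration; not the paper's implementation."""
    n_keep = int(len(y_true) * (1.0 - reject_frac))
    keep = np.argsort(uncertainty)[:n_keep]  # lowest-uncertainty cases first
    return float(np.mean(y_true[keep] == y_pred[keep]))

# Sweeping reject_frac from 0 to 1 traces an accuracy-rejection curve; good
# uncertainty estimates make accuracy rise as more cases are rejected.
```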
- Confidence Aware Neural Networks for Skin Cancer Detection [12.300911283520719]
We present three different methods for quantifying uncertainties for skin cancer detection from images.
The obtained results reveal that the predictive uncertainty estimation methods are capable of flagging risky and erroneous predictions.
We also demonstrate that ensemble approaches are more reliable in capturing uncertainties through inference.
arXiv Detail & Related papers (2021-07-19T19:21:57Z)
- Understanding Softmax Confidence and Uncertainty [95.71801498763216]
It is often remarked that neural networks fail to increase their uncertainty when predicting on data far from the training distribution.
Yet naively using softmax confidence as a proxy for uncertainty achieves modest success in tasks exclusively testing for this.
This paper investigates this contradiction, identifying two implicit biases that do encourage softmax confidence to correlate with uncertainty.
arXiv Detail & Related papers (2021-06-09T10:37:29Z)
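
The "naive" proxy under discussion is simply the maximum softmax probability (or, closely related, the predictive entropy); a minimal sketch:

```python
import torch
import torch.nn.functional as F

def softmax_confidence(logits, eps=1e-8):
    """Max-softmax confidence and predictive entropy as naive uncertainty proxies."""
    p = F.softmax(logits, dim=-1)
    confidence = p.max(dim=-1).values             # high value = 'certain'
    entropy = -(p * (p + eps).log()).sum(dim=-1)  # high value = 'uncertain'
    return confidence, entropy
```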
- Objective Evaluation of Deep Uncertainty Predictions for COVID-19 Detection [15.036447340859546]
Deep neural networks (DNNs) have been widely applied for detecting COVID-19 in medical images.
Here we apply and evaluate three uncertainty quantification techniques for COVID-19 detection using chest X-Ray (CXR) images.
arXiv Detail & Related papers (2020-12-22T05:43:42Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
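
A common way to operationalize "the distribution of a network's latent representations" is to fit a simple density, such as a Gaussian, to in-distribution features and score test inputs by Mahalanobis distance. The sketch below makes that assumption; it is one standard instantiation, not necessarily the estimator studied in the paper.

```python
import numpy as np

def fit_feature_gaussian(feats):
    """feats: (n_train, d) penultimate-layer features from in-distribution data."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])  # regularized
    return mu, np.linalg.inv(cov)

def mahalanobis_score(x, mu, precision):
    """Higher score = farther from the training feature distribution."""
    d = x - mu
    return float(d @ precision @ d)
```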
- Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior [2.0303656145222857]
Uncertainty in inverse medical imaging tasks with deep learning has received little attention.
Deep models trained on large data sets tend to hallucinate, creating artifacts in the reconstructed output that are not present in the underlying anatomy.
We use a randomly initialized convolutional network as the parameterization of the reconstructed image and perform gradient descent to match the observation, a technique known as deep image prior.
arXiv Detail & Related papers (2020-08-20T08:34:51Z)
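
The deep image prior procedure this summary describes is compact enough to sketch: a randomly initialized convolutional network maps a fixed noise input to an image, and gradient descent fits its output to the noisy observation, with early stopping acting as the implicit regularizer. A minimal PyTorch sketch, where the architecture, step count, and learning rate are illustrative:

```python
import torch
import torch.nn as nn

def deep_image_prior(noisy, steps=1500, lr=1e-3):
    """Fit a randomly initialized conv net to a noisy observation (deep image prior).
    noisy: (1, 1, H, W). Early stopping (few steps) is the implicit regularizer."""
    net = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    z = torch.randn_like(noisy)  # fixed random input code
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(z) - noisy) ** 2).mean()  # match the observation
        loss.backward()
        opt.step()
    return net(z).detach()  # denoised estimate
```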
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.