Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior
- URL: http://arxiv.org/abs/2008.08837v1
- Date: Thu, 20 Aug 2020 08:34:51 GMT
- Title: Uncertainty Estimation in Medical Image Denoising with Bayesian Deep Image Prior
- Authors: Max-Heinrich Laves and Malte Tölle and Tobias Ortmaier
- Abstract summary: Uncertainty quantification in inverse medical imaging tasks with deep learning has received little attention.
Deep models trained on large data sets tend to hallucinate and create artifacts in the reconstructed output that are not anatomically present.
We use a randomly initialized convolutional network as parameterization of the reconstructed image and perform gradient descent to match the observation, which is known as deep image prior.
- Score: 2.0303656145222857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty quantification in inverse medical imaging tasks with deep
learning has received little attention. However, deep models trained on large
data sets tend to hallucinate and create artifacts in the reconstructed output
that are not anatomically present. We use a randomly initialized convolutional
network as parameterization of the reconstructed image and perform gradient
descent to match the observation, which is known as deep image prior. In this
case, the reconstruction does not suffer from hallucinations as no prior
training is performed. We extend this to a Bayesian approach with Monte Carlo
dropout to quantify both aleatoric and epistemic uncertainty. The presented
method is evaluated on the task of denoising different medical imaging
modalities. The experimental results show that our approach yields
well-calibrated uncertainty. That is, the predictive uncertainty correlates
with the predictive error. This allows for reliable uncertainty estimates and
can tackle the problem of hallucinations and artifacts in inverse medical
imaging tasks.
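The abstract's Monte Carlo dropout scheme keeps dropout active at inference and decomposes predictive variance into an epistemic part (variance of the stochastic means) and an aleatoric part (mean of the predicted per-pixel variances). The function below is a minimal NumPy sketch of that decomposition; the array shapes and names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mc_dropout_uncertainty(mu_samples, log_var_samples):
    """Decompose predictive uncertainty from T stochastic forward passes.

    mu_samples:      (T, H, W) denoised images, one per dropout-active pass
    log_var_samples: (T, H, W) per-pixel log-variance from an aleatoric head
    """
    mean = mu_samples.mean(axis=0)                    # final denoised estimate
    epistemic = mu_samples.var(axis=0)                # spread of the means (model uncertainty)
    aleatoric = np.exp(log_var_samples).mean(axis=0)  # average predicted noise variance
    return mean, epistemic, aleatoric, epistemic + aleatoric

# Hypothetical usage: T = 32 stochastic passes over a 64x64 image.
rng = np.random.default_rng(0)
mu = rng.normal(size=(32, 64, 64))
log_var = rng.normal(size=(32, 64, 64))
mean, epi, ale, total = mc_dropout_uncertainty(mu, log_var)
```

Total predictive variance is simply the sum of the two terms, which is what makes the per-pixel uncertainty map comparable to the per-pixel reconstruction error.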
Related papers
- Propagation and Attribution of Uncertainty in Medical Imaging Pipelines [11.65442828043714]
Uncertainty estimation provides a means of building explainable neural networks for medical imaging applications.
We propose a method to propagate uncertainty through cascades of deep learning models in medical imaging pipelines.
arXiv Detail & Related papers (2023-09-28T20:23:25Z)
- Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging [88.20869695803631]
We show how to derive uncertainty intervals around each pixel that are guaranteed to contain the true value.
We evaluate our procedure on three image-to-image regression tasks.
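Distribution-free per-pixel guarantees of this kind typically rest on split-conformal calibration: a held-out set fixes a residual quantile that is then added around each prediction. The sketch below uses a worst-case-pixel residual score as an illustrative assumption; it is not the paper's exact procedure.

```python
import numpy as np

def conformal_half_width(cal_preds, cal_targets, alpha=0.1):
    """Half-width q such that [pred - q, pred + q] covers every pixel of a
    new image with probability >= 1 - alpha (exchangeability assumed)."""
    # Score each calibration image by its worst-case pixel residual.
    scores = np.abs(cal_preds - cal_targets).reshape(len(cal_preds), -1).max(axis=1)
    n = len(scores)
    # Conservative (n+1)-adjusted empirical quantile.
    k = min(int(np.ceil((n + 1) * (1 - alpha))) - 1, n - 1)
    return float(np.sort(scores)[k])

# Hypothetical calibration set with bounded prediction error.
rng = np.random.default_rng(1)
targets = rng.normal(size=(200, 16, 16))
preds = targets + rng.uniform(-0.5, 0.5, size=targets.shape)
q = conformal_half_width(preds, targets, alpha=0.1)
```

Because the quantile is taken over held-out data only, the coverage guarantee needs no assumptions about the model itself.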
arXiv Detail & Related papers (2022-02-10T18:59:56Z)
- Scene Uncertainty and the Wellington Posterior of Deterministic Image Classifiers [68.9065881270224]
We introduce the Wellington Posterior, which is the distribution of outcomes that would have been obtained in response to data that could have been generated by the same scene.
We explore the use of data augmentation, dropout, ensembling, single-view reconstruction, and model linearization to compute a Wellington Posterior.
Additional methods include the use of conditional generative models such as generative adversarial networks, neural radiance fields, and conditional prior networks.
arXiv Detail & Related papers (2021-06-25T20:10:00Z)
- Bayesian Uncertainty Estimation of Learned Variational MRI Reconstruction [63.202627467245584]
We introduce a Bayesian variational framework to quantify the model-immanent (epistemic) uncertainty.
We demonstrate that our approach yields competitive results for undersampled MRI reconstruction.
arXiv Detail & Related papers (2021-02-12T18:08:14Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- Quantifying Sources of Uncertainty in Deep Learning-Based Image Reconstruction [5.129343375966527]
We propose a scalable and efficient framework to simultaneously quantify aleatoric and epistemic uncertainties in learned iterative image reconstruction.
We show that our method exhibits competitive performance against conventional benchmarks for computed tomography with both sparse view and limited angle data.
arXiv Detail & Related papers (2020-11-17T04:12:52Z)
- Deep Probabilistic Imaging: Uncertainty Quantification and Multi-modal Solution Characterization for Computational Imaging [11.677576854233394]
We propose a variational deep probabilistic imaging approach to quantify reconstruction uncertainty.
Deep Probabilistic Imaging employs an untrained deep generative model to estimate a posterior distribution of an unobserved image.
arXiv Detail & Related papers (2020-10-27T17:23:09Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
- Quantifying and Leveraging Predictive Uncertainty for Medical Image Assessment [13.330243305948278]
We propose a system that learns not only the probabilistic estimate for classification, but also an explicit uncertainty measure.
We argue that this approach is essential to account for the inherent ambiguity characteristic of medical images from different radiologic exams.
In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks.
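Uncertainty-based sample rejection can be sketched generically: rank predictions by their uncertainty, discard the least certain fraction, and score the rest. The toy below (hypothetical names, plain accuracy instead of ROC-AUC for brevity) shows why the retained-set metric improves when uncertainty tracks error.

```python
import numpy as np

def accuracy_after_rejection(probs, uncerts, labels, reject_frac=0.2):
    """Accuracy on the most-confident (1 - reject_frac) fraction of samples."""
    keep = np.argsort(uncerts)[: int(round(len(uncerts) * (1 - reject_frac)))]
    preds = (probs[keep] >= 0.5).astype(int)
    return float((preds == labels[keep]).mean())

# Toy data: 80 easy samples the model gets right, 20 ambiguous ones it gets
# wrong, with uncertainty perfectly tracking the error.
probs = np.concatenate([np.full(80, 0.9), np.full(20, 0.6)])
labels = np.concatenate([np.ones(80, dtype=int), np.zeros(20, dtype=int)])
uncerts = np.concatenate([np.full(80, 0.1), np.full(20, 0.9)])
acc_all = accuracy_after_rejection(probs, uncerts, labels, reject_frac=0.0)
acc_rej = accuracy_after_rejection(probs, uncerts, labels, reject_frac=0.2)
```

Here rejecting the 20 most uncertain samples removes exactly the mislabeled predictions, lifting accuracy on the retained set; in practice the gain depends on how well uncertainty correlates with error.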
arXiv Detail & Related papers (2020-07-08T16:47:55Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
- A deep-learning based Bayesian approach to seismic imaging and uncertainty quantification [0.4588028371034407]
Uncertainty is essential when dealing with ill-conditioned inverse problems.
It is often not possible to formulate a prior distribution that precisely encodes our prior knowledge about the unknown.
We propose to use the functional form of a randomly initialized convolutional neural network as an implicit structured prior.
arXiv Detail & Related papers (2020-01-13T23:46:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.