Improving accuracy and uncertainty quantification of deep learning based quantitative MRI using Monte Carlo dropout
- URL: http://arxiv.org/abs/2112.01587v2
- Date: Sun, 5 Nov 2023 11:21:25 GMT
- Title: Improving accuracy and uncertainty quantification of deep learning based quantitative MRI using Monte Carlo dropout
- Authors: Mehmet Yigit Avci, Ziyu Li, Qiuyun Fan, Susie Huang, Berkin Bilgic, Qiyuan Tian
- Abstract summary: Dropout is conventionally used during the training phase as a regularization method and for quantifying uncertainty in deep learning.
We propose to use dropout during both the training and inference steps, and to average multiple predictions to improve accuracy while reducing and quantifying uncertainty.
- Score: 2.290218701603077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dropout is conventionally used during the training phase as a regularization method and for quantifying uncertainty in deep learning. We propose to use dropout during both the training and inference steps, and to average multiple predictions to improve accuracy while reducing and quantifying uncertainty. The results are evaluated for fractional anisotropy (FA) and mean diffusivity (MD) maps obtained from scans with only 3 diffusion directions. With our method, accuracy can be improved significantly compared to network outputs without dropout, especially when the training dataset is small. Moreover, confidence maps are generated, which may aid in the diagnosis of unseen pathology or artifacts.
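As a concrete illustration of the procedure described in the abstract, the sketch below keeps dropout active at inference, averages multiple stochastic forward passes to form the prediction, and uses the per-voxel standard deviation as a confidence map. The toy network, dropout rate, number of passes, and input dimensions are illustrative assumptions, not the authors' actual architecture or settings.

```python
# Minimal Monte Carlo dropout inference sketch (PyTorch); the network,
# dropout rate of 0.1, and 50 passes are assumptions for illustration only.
import torch
import torch.nn as nn


class SimpleQMRINet(nn.Module):
    """Toy per-voxel regressor mapping diffusion signals to FA and MD."""

    def __init__(self, n_inputs=4, n_outputs=2, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, n_outputs),
        )

    def forward(self, x):
        return self.net(x)


@torch.no_grad()
def mc_dropout_predict(model, x, n_passes=50):
    """Run n_passes stochastic forward passes with dropout enabled.

    Returns the mean prediction (improved estimate) and the standard
    deviation (per-voxel uncertainty / confidence map).
    """
    model.train()  # keep dropout layers stochastic at test time
    preds = torch.stack([model(x) for _ in range(n_passes)], dim=0)
    return preds.mean(dim=0), preds.std(dim=0)


# Usage on dummy data: e.g. 3 diffusion-weighted signals + 1 b=0 signal per voxel.
model = SimpleQMRINet()
signals = torch.rand(1024, 4)              # 1024 voxels, 4 measurements each
fa_md_mean, fa_md_std = mc_dropout_predict(model, signals)
print(fa_md_mean.shape, fa_md_std.shape)   # torch.Size([1024, 2]) for both
```

Averaging the stochastic passes plays the role of an approximate Bayesian model average, which is why it tends to reduce error relative to a single deterministic forward pass, while the spread across passes provides the voxel-wise confidence map.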
Related papers
- Enhancing Uncertainty Estimation in Semantic Segmentation via Monte-Carlo Frequency Dropout [2.542402342792592]
Monte-Carlo (MC) Dropout provides a practical solution for estimating predictive distributions in deterministic neural networks.
Traditional dropout, applied within the signal space, may fail to account for frequency-related noise common in medical imaging.
A novel approach extends Dropout to the frequency domain, allowing attenuation of signal variations during inference.
arXiv Detail & Related papers (2025-01-20T03:54:30Z)
- Rate-In: Information-Driven Adaptive Dropout Rates for Improved Inference-Time Uncertainty Estimation [22.00767497425173]
We propose Rate-In, an algorithm that dynamically adjusts dropout rates during inference by quantifying the information loss induced by dropout in each layer's feature maps.
By quantifying the functional information loss in feature maps, we adaptively tune dropout rates to maintain perceptual quality across diverse medical imaging tasks and architectural configurations.
arXiv Detail & Related papers (2024-12-10T04:03:46Z)
- Neural parameter calibration and uncertainty quantification for epidemic forecasting [0.0]
We apply a novel and powerful computational method to the problem of learning probability densities on contagion parameters.
Using a neural network, we calibrate an ODE model to data of the spread of COVID-19 in Berlin in 2020.
We show convergence of our method to the true posterior on a simplified SIR model of epidemics, and also demonstrate our method's learning capabilities on a reduced dataset.
arXiv Detail & Related papers (2023-12-05T21:34:59Z)
- An Uncertainty Aided Framework for Learning based Liver $T_1\rho$ Mapping and Analysis [0.7087237546722617]
We propose a learning-based quantitative MRI system for trustworthy mapping of the liver.
The framework was tested on a dataset of 51 patients with different liver fibrosis stages.
arXiv Detail & Related papers (2023-07-06T02:44:32Z)
- Improving Trustworthiness of AI Disease Severity Rating in Medical Imaging with Ordinal Conformal Prediction Sets [0.7734726150561088]
A lack of statistically rigorous uncertainty quantification is a significant factor undermining trust in AI results.
Recent developments in distribution-free uncertainty quantification present practical solutions for these issues.
We demonstrate a technique for forming ordinal prediction sets that are guaranteed to contain the correct stenosis severity.
arXiv Detail & Related papers (2022-07-05T18:01:20Z)
- On Calibrated Model Uncertainty in Deep Learning [0.0]
We extend the approximate inference for the loss-calibrated Bayesian framework to dropweights based Bayesian neural networks.
We show that decisions informed by loss-calibrated uncertainty can improve diagnostic performance to a greater extent than straightforward alternatives.
arXiv Detail & Related papers (2022-06-15T20:16:32Z)
- Diffusion Tensor Estimation with Uncertainty Calibration [6.5085381751712506]
We propose a deep learning method to estimate the diffusion tensor and compute the estimation uncertainty.
Data-dependent uncertainty is computed directly by the network and learned via loss attenuation.
We show that the estimation uncertainties computed by the new method can highlight the model's biases, detect domain shift, and reflect the strength of noise in the measurements.
arXiv Detail & Related papers (2021-11-21T15:58:01Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Bayesian Uncertainty Estimation of Learned Variational MRI Reconstruction [63.202627467245584]
We introduce a Bayesian variational framework to quantify the model-immanent (epistemic) uncertainty.
We demonstrate that our approach yields competitive results for undersampled MRI reconstruction.
arXiv Detail & Related papers (2021-02-12T18:08:14Z)
- DAQ: Distribution-Aware Quantization for Deep Image Super-Resolution Networks [49.191062785007006]
Quantizing deep convolutional neural networks for image super-resolution substantially reduces their computational costs.
Existing works either suffer from a severe performance drop in ultra-low precision of 4 or lower bit-widths, or require a heavy fine-tuning process to recover the performance.
We propose a novel distribution-aware quantization scheme (DAQ) which facilitates accurate training-free quantization in ultra-low precision.
arXiv Detail & Related papers (2020-12-21T10:19:42Z)
- Probabilistic 3D surface reconstruction from sparse MRI information [58.14653650521129]
We present a novel probabilistic deep learning approach for concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric uncertainty prediction.
Our method is capable of reconstructing large surface meshes from three quasi-orthogonal MR imaging slices from limited training sets.
arXiv Detail & Related papers (2020-10-05T14:18:52Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training of these models with a novel loss function and centroid updating scheme and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)