Rician likelihood loss for quantitative MRI using self-supervised deep
learning
- URL: http://arxiv.org/abs/2307.07072v1
- Date: Thu, 13 Jul 2023 21:42:26 GMT
- Title: Rician likelihood loss for quantitative MRI using self-supervised deep
learning
- Authors: Christopher S. Parker, Anna Schroder, Sean C. Epstein, James Cole,
Daniel C. Alexander, Hui Zhang
- Abstract summary: Previous quantitative MR imaging studies using self-supervised deep learning have reported biased parameter estimates at low SNR.
We introduce the negative log Rician likelihood (NLR) loss, which is numerically stable and accurate across the full range of tested SNRs.
We expect the development to benefit quantitative MR imaging techniques broadly, enabling more accurate estimation from noisy data.
- Score: 4.937920705275674
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Purpose: Previous quantitative MR imaging studies using self-supervised deep
learning have reported biased parameter estimates at low SNR. Such systematic
errors arise from the choice of Mean Squared Error (MSE) loss function for
network training, which is incompatible with Rician-distributed MR magnitude
signals. To address this issue, we introduce the negative log Rician likelihood
(NLR) loss. Methods: A numerically stable and accurate implementation of the
NLR loss was developed to estimate quantitative parameters of the apparent
diffusion coefficient (ADC) model and intra-voxel incoherent motion (IVIM)
model. Parameter estimation accuracy, precision and overall error were
evaluated in terms of bias, variance and root mean squared error and compared
against the MSE loss over a range of SNRs (5 - 30). Results: Networks trained
with NLR loss show higher estimation accuracy than MSE for the ADC and IVIM
diffusion coefficients as SNR decreases, with minimal loss of precision or
total error. At high effective SNR (high SNR and small diffusion coefficients),
both losses show comparable accuracy and precision for all parameters of both
models. Conclusion: The proposed NLR loss is numerically stable and accurate
across the full range of tested SNRs and improves parameter estimation accuracy
of diffusion coefficients using self-supervised deep learning. We expect the
development to benefit quantitative MR imaging techniques broadly, enabling
more accurate parameter estimation from noisy data.
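A minimal sketch (not the authors' released code) of how such an NLR loss can be implemented and used for self-supervised ADC fitting, assuming PyTorch; the toy network, b-values and noise level sigma below are illustrative assumptions, and the numerical-stability trick is to evaluate log I0 via the exponentially scaled Bessel function torch.special.i0e, so that log I0(x) = log(i0e(x)) + x stays finite at high SNR.

import torch

def neg_log_rician_likelihood(meas, pred, sigma):
    """Mean negative log-likelihood of Rician-distributed magnitudes `meas`
    given noise-free signal predictions `pred` and noise level `sigma`."""
    sigma2 = sigma ** 2
    x = meas * pred / sigma2                       # Bessel argument, >= 0
    log_i0 = torch.log(torch.special.i0e(x)) + x   # numerically stable log I0(x)
    nll = (meas ** 2 + pred ** 2) / (2 * sigma2) - torch.log(meas / sigma2) - log_i0
    return nll.mean()

def adc_forward(s0, adc, b_values):
    """Mono-exponential decay S(b) = S0 * exp(-b * ADC); the IVIM model would
    add a second, pseudo-diffusion compartment here."""
    return s0.unsqueeze(-1) * torch.exp(-b_values * adc.unsqueeze(-1))

# Self-supervised training step: a network maps noisy magnitude signals to
# model parameters, the forward model re-synthesises the signal, and the NLR
# loss compares the synthesis with the measured magnitudes (no ground truth).
b_values = torch.tensor([0.0, 0.2, 0.5, 1.0])        # illustrative b-values (ms/um^2)
net = torch.nn.Sequential(
    torch.nn.Linear(len(b_values), 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 2), torch.nn.Softplus(),     # outputs [S0, ADC] > 0
)
signals = torch.rand(16, len(b_values)) + 0.5        # stand-in noisy magnitudes
params = net(signals)
pred = adc_forward(params[:, 0], params[:, 1], b_values)
loss = neg_log_rician_likelihood(signals, pred, sigma=0.1)
loss.backward()

Swapping neg_log_rician_likelihood for a plain mean-squared-error loss in the last two lines gives the MSE baseline that the paper compares against.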
Related papers
- Bias-Reduced Neural Networks for Parameter Estimation in Quantitative MRI [0.13654846342364307]
We develop neural network (NN)-based quantitative MRI parameter estimators with minimal bias and a variance close to the Cramér-Rao bound.
arXiv Detail & Related papers (2023-11-13T20:41:48Z)
- Preserved Edge Convolutional Neural Network for Sensitivity Enhancement of Deuterium Metabolic Imaging (DMI) [10.884358837187243]
This work presents a deep learning method for sensitivity enhancement of Deuterium Metabolic Imaging (DMI).
A convolutional neural network (CNN) was designed to estimate the 2H-labeled metabolite concentrations from low-SNR data.
The estimation precision was further improved by fine-tuning the CNN with MRI-based edge-preserving regularization for each DMI dataset.
arXiv Detail & Related papers (2023-09-08T03:41:54Z)
- UncertaINR: Uncertainty Quantification of End-to-End Implicit Neural Representations for Computed Tomography [35.84136481440458]
Implicit neural representations (INRs) have achieved impressive results for scene reconstruction and computer graphics.
We study a Bayesian reformulation of INRs, UncertaINR, in the context of computed tomography.
We find that UncertaINR achieves well-calibrated uncertainty, while retaining accuracy competitive with other classical, INR-based, and CNN-based reconstruction techniques.
arXiv Detail & Related papers (2022-02-22T12:19:03Z)
- Improving evidential deep learning via multi-task learning [1.8275108630751844]
The objective is to improve the prediction accuracy of the evidential neural network (ENet) while maintaining its efficient uncertainty estimation.
A multi-task learning framework, referred to as MT-ENet, is proposed to accomplish this aim.
The MT-ENet enhances the predictive accuracy of the ENet without losing uncertainty estimation capability on the synthetic dataset and real-world benchmarks.
arXiv Detail & Related papers (2021-12-17T07:56:20Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Cramér-Rao bound-informed training of neural networks for quantitative MRI [11.964144201247198]
Neural networks are increasingly used to estimate parameters in quantitative MRI, in particular in magnetic resonance fingerprinting.
Their advantages are their superior speed and their dominance over non-efficient unbiased estimators.
We find, however, that heterogeneous parameters are hard to estimate.
We propose a well-founded Cramér-Rao loss function, which normalizes the squared error by the respective Cramér-Rao bound (CRB); a minimal sketch of such a loss follows the list below.
arXiv Detail & Related papers (2021-09-22T06:38:03Z)
- Bayesian Uncertainty Estimation of Learned Variational MRI Reconstruction [63.202627467245584]
We introduce a Bayesian variational framework to quantify the model-immanent (epistemic) uncertainty.
We demonstrate that our approach yields competitive results for undersampled MRI reconstruction.
arXiv Detail & Related papers (2021-02-12T18:08:14Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
- Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances.
We propose localized debiased machine learning (LDML), which avoids this burdensome step.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)
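Regarding the Cramér-Rao bound-informed training entry above, a minimal sketch (not that paper's implementation) of a CRB-normalized squared-error loss is given below, assuming the per-parameter bounds `crb` are precomputed for the ground-truth parameters used in supervised training; the function and variable names are illustrative.

import torch

def crb_weighted_mse(theta_hat, theta, crb):
    """Squared error normalized by the Cramér-Rao bound, so each parameter and
    noise regime contributes on a comparable scale; an efficient unbiased
    estimator would average to roughly 1 per parameter."""
    return ((theta_hat - theta) ** 2 / crb).mean()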
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.