Cramér-Rao bound-informed training of neural networks for quantitative MRI
- URL: http://arxiv.org/abs/2109.10535v1
- Date: Wed, 22 Sep 2021 06:38:03 GMT
- Title: Cramér-Rao bound-informed training of neural networks for quantitative MRI
- Authors: Xiaoxia Zhang, Quentin Duchemin, Kangning Liu, Sebastian Flassbeck, Cem Gultekin, Carlos Fernandez-Granda, Jakob Assländer
- Abstract summary: Neural networks are increasingly used to estimate parameters in quantitative MRI, in particular in magnetic resonance fingerprinting.
Their advantages are their superior speed and their immunity to the non-convexity of many fitting problems.
We find, however, that parameters in heterogeneous parameter spaces are hard to estimate.
We propose a theoretically well-founded Cramér-Rao loss function, which normalizes the squared error with the respective CRB.
- Score: 11.964144201247198
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are increasingly used to estimate parameters in quantitative
MRI, in particular in magnetic resonance fingerprinting. Their advantages over
the gold standard non-linear least square fitting are their superior speed and
their immunity to the non-convexity of many fitting problems. We find, however,
that in heterogeneous parameter spaces, i.e., in spaces in which the variance of
the estimated parameters varies considerably, good performance is hard to
achieve and requires arduous tweaking of the loss function, hyperparameters,
and the distribution of the training data in parameter space. Here, we address
these issues with a theoretically well-founded loss function: the Cramér-Rao
bound (CRB) provides a theoretical lower bound for the variance of an unbiased
estimator, and we propose to normalize the squared error with the respective CRB.
With this normalization, we balance the contributions of hard-to-estimate and
not-so-hard-to-estimate parameters and areas in parameter space, and avoid a
dominance of the former in the overall training loss. Further, the CRB-based
loss function equals one for a maximally-efficient unbiased estimator, which we
consider the ideal estimator. Hence, the proposed CRB-based loss function
provides an absolute evaluation metric. We compare a network trained with the
CRB-based loss with a network trained with the commonly used mean squared
error loss and demonstrate the advantages of the former in numerical, phantom,
and in vivo experiments.
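In code, the proposed normalization is straightforward. Below is a minimal PyTorch sketch, assuming the CRB is precomputed from the Fisher information of the signal model under Gaussian noise; all names and the exact batching are illustrative, not the paper's implementation:

```python
import torch

def gaussian_crb(jacobian, sigma):
    """CRB diagonal for a signal model with i.i.d. Gaussian noise.

    jacobian : (n_timepoints, n_params) derivatives of the signal with
               respect to the parameters, evaluated at the true values
    sigma    : noise standard deviation
    """
    fim = jacobian.T @ jacobian / sigma**2        # Fisher information matrix
    return torch.diagonal(torch.linalg.inv(fim))  # CRB = diag(FIM^-1)

def crb_loss(theta_hat, theta_true, crb):
    """Squared error normalized by the respective CRB.

    theta_hat, theta_true, crb : (batch, n_params) tensors, where crb
    holds the Cramer-Rao bound of each parameter at theta_true.
    """
    return torch.mean((theta_hat - theta_true) ** 2 / crb)
```

Dividing each squared error by its CRB balances hard-to-estimate and not-so-hard-to-estimate parameters, and the loss of a maximally-efficient unbiased estimator approaches 1, which is what makes it usable as an absolute evaluation metric.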
Related papers
- Neural Network Approximation for Pessimistic Offline Reinforcement
Learning [17.756108291816908]
We present a non-asymptotic estimation error bound for pessimistic offline RL using general neural network approximation.
Our result shows that the estimation error consists of two parts: the first converges to zero at a desired rate in the sample size with partially controllable concentrability, and the second becomes negligible if the residual constraint is tight.
arXiv Detail & Related papers (2023-12-19T05:17:27Z)
- Bias-Reduced Neural Networks for Parameter Estimation in Quantitative MRI [0.13654846342364307]
We develop neural network (NN)-based quantitative MRI parameter estimators with minimal bias and a variance close to the Cramér-Rao bound.
arXiv Detail & Related papers (2023-11-13T20:41:48Z)
- Rician likelihood loss for quantitative MRI using self-supervised deep learning [4.937920705275674]
Previous quantitative MR imaging studies using self-supervised deep learning have reported biased parameter estimates at low SNR.
We introduce the negative log Rician likelihood (NLR) loss, which is numerically stable and accurate across the full range of tested SNRs.
We expect the development to benefit quantitative MR imaging techniques broadly, enabling more accurate estimation from noisy data.
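As an illustration of such a loss, here is a minimal PyTorch sketch of a numerically stable Rician negative log-likelihood; it relies on the identity log I0(z) = z + log i0e(z) with the exponentially scaled Bessel function torch.special.i0e, and the paper's exact parameterization may differ:

```python
import torch

def rician_nll(m, nu, sigma):
    """Per-voxel negative log Rician likelihood (up to constants).

    m     : observed magnitude signal, m >= 0
    nu    : predicted noise-free signal, nu >= 0
    sigma : standard deviation of the complex Gaussian noise channels
    """
    z = m * nu / sigma**2
    # (m^2 + nu^2)/(2 sigma^2) - z simplifies to (m - nu)^2/(2 sigma^2),
    # and log I0(z) = z + log i0e(z) keeps the Bessel term from overflowing.
    return ((m - nu) ** 2 / (2 * sigma**2)
            - torch.log(torch.special.i0e(z))
            - torch.log(m / sigma**2))
```

At high SNR this reduces to an ordinary Gaussian squared-error term, while at low SNR the Bessel term corrects the bias that a plain MSE fit to magnitude data would incur.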
arXiv Detail & Related papers (2023-07-13T21:42:26Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z)
- Gaining Outlier Resistance with Progressive Quantiles: Fast Algorithms and Theoretical Studies [1.6457778420360534]
A framework of outlier-resistant estimation is introduced to robustify an arbitrary loss function.
A new technique is proposed to alleviate the requirement on the starting point, so that on regular datasets the number of data re-estimations can be substantially reduced.
The obtained estimators, though not necessarily globally or even locally optimal, enjoy minimax optimality in both low and high dimensions.
arXiv Detail & Related papers (2021-12-15T20:35:21Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to non-linear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
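One plausible way to realize such a bias constraint in training is sketched below, assuming each ground-truth parameter is presented with several noise realizations so the empirical bias can be estimated per batch; the penalty weight lam and this batching scheme are assumptions, not necessarily the paper's exact BCE formulation:

```python
import torch

def bias_constrained_loss(theta_hat, theta_true, lam=1.0):
    """MSE plus a squared empirical-bias penalty (illustrative sketch).

    theta_hat  : (n_noise, batch, n_params) estimates across several
                 noise realizations of the same underlying parameters
    theta_true : (batch, n_params) ground-truth parameters
    lam        : hypothetical penalty weight on the squared bias
    """
    err = theta_hat - theta_true        # broadcasts over realizations
    mse = (err ** 2).mean()
    bias = err.mean(dim=0)              # empirical bias per parameter
    return mse + lam * (bias ** 2).mean()
```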
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
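To make "algorithm unfolding" concrete, below is a generic LISTA-style sketch: ISTA iterations for sparse recovery, unrolled into a network with learned step sizes and thresholds. REST unrolls a robust variant of the recovery problem, which this plain sketch does not capture:

```python
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """Unrolled ISTA for y = A x + noise with sparse x (illustrative)."""

    def __init__(self, A, n_iters=10):
        super().__init__()
        self.register_buffer("A", A)                      # forward model
        self.step = nn.Parameter(torch.full((n_iters,), 0.1))
        self.thresh = nn.Parameter(torch.full((n_iters,), 0.01))

    def forward(self, y):
        x = torch.zeros(self.A.shape[1], device=y.device)
        for t in range(len(self.step)):
            grad = self.A.T @ (self.A @ x - y)            # data-fidelity gradient
            x = x - self.step[t] * grad                   # gradient step
            x = torch.sign(x) * torch.clamp(x.abs() - self.thresh[t], min=0)  # soft threshold
        return x
```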
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Efficient Micro-Structured Weight Unification and Pruning for Neural
Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially on resource-limited devices.
Previous unstructured or structured weight pruning methods can hardly deliver true inference acceleration.
We propose a generalized weight unification framework at a hardware-compatible micro-structured level to achieve a high degree of compression and acceleration.
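A rough sketch of what pruning at a micro-structured level can look like: block-wise magnitude pruning that zeroes entire small blocks, so the sparsity pattern remains hardware friendly. Block shape, keep ratio, and the selection rule below are illustrative assumptions, not the paper's unification framework:

```python
import torch

def block_prune(weight, block=(4, 4), keep_ratio=0.5):
    """Keep the keep_ratio fraction of (bh x bw) blocks with the largest
    L2 norm and zero the rest; assumes the dimensions divide evenly."""
    h, w = weight.shape
    bh, bw = block
    blocks = weight.reshape(h // bh, bh, w // bw, bw)
    norms = blocks.pow(2).sum(dim=(1, 3)).sqrt()     # per-block L2 norm
    k = max(1, int(norms.numel() * keep_ratio))      # number of blocks to keep
    # the k-th largest norm is the (numel - k + 1)-th smallest
    thresh = norms.flatten().kthvalue(norms.numel() - k + 1).values
    mask = (norms >= thresh).to(weight.dtype)[:, None, :, None]
    return (blocks * mask).reshape(h, w)
```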
arXiv Detail & Related papers (2021-06-15T17:22:59Z) - Bayesian Uncertainty Estimation of Learned Variational MRI
Reconstruction [63.202627467245584]
We introduce a Bayesian variational framework to quantify the model-immanent (epistemic) uncertainty.
We demonstrate that our approach yields competitive results for undersampled MRI reconstruction.
arXiv Detail & Related papers (2021-02-12T18:08:14Z) - Nonconvex sparse regularization for deep neural networks and its
optimality [1.9798034349981162]
Deep neural network (DNN) estimators can attain optimal convergence rates for regression and classification problems.
We propose a novel penalized estimation method for sparse DNNs.
We prove that the sparse-penalized estimator can adaptively attain minimax convergence rates for various nonparametric regression problems.
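As a purely illustrative stand-in for such a penalty, the clipped-L1 sketch below pushes small weights toward exact zero while leaving large weights unshrunk; lam and tau are hypothetical hyperparameters, and the paper's actual penalty and theory are more specific:

```python
import torch

def clipped_l1_penalty(model, lam=1e-4, tau=0.1):
    """Nonconvex sparsity penalty: linear in |w| below tau, flat above."""
    pen = sum(torch.clamp(p.abs(), max=tau).sum() for p in model.parameters())
    return lam * pen

# usage sketch: total = task_loss + clipped_l1_penalty(net)
```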
arXiv Detail & Related papers (2020-03-26T07:15:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.