Bias-Reduced Neural Networks for Parameter Estimation in Quantitative MRI
- URL: http://arxiv.org/abs/2312.11468v3
- Date: Wed, 10 Apr 2024 15:58:09 GMT
- Title: Bias-Reduced Neural Networks for Parameter Estimation in Quantitative MRI
- Authors: Andrew Mao, Sebastian Flassbeck, Jakob Assländer
- Abstract summary: We develop neural network (NN)-based quantitative MRI parameter estimators with minimal bias and a variance close to the Cramér-Rao bound.
- Score: 0.13654846342364307
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purpose: To develop neural network (NN)-based quantitative MRI parameter estimators with minimal bias and a variance close to the Cramér-Rao bound. Theory and Methods: We generalize the mean squared error loss to control the bias and variance of the NN's estimates, which involves averaging over multiple noise realizations of the same measurements during training. Bias and variance properties of the resulting NNs are studied for two neuroimaging applications. Results: In simulations, the proposed strategy reduces the estimates' bias throughout parameter space and achieves a variance close to the Cramér-Rao bound. In vivo, we observe good concordance between parameter maps estimated with the proposed NNs and traditional estimators, such as non-linear least-squares fitting, while state-of-the-art NNs show larger deviations. Conclusion: The proposed NNs have greatly reduced bias compared to those trained using the mean squared error and offer significantly improved computational efficiency over traditional estimators with comparable or better accuracy.
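A minimal sketch of the abstract's training idea, in PyTorch: average the NN's estimates over several noise realizations of the same simulated measurement, so that the squared bias of the mean estimate and the variance across realizations can be penalized separately. The network `net`, the Gaussian noise model, and the weight `lam` are illustrative assumptions, not the paper's exact choices.

```python
import torch

def bias_variance_loss(net, clean_signals, true_params,
                       n_noise=16, sigma=0.01, lam=1.0):
    """clean_signals: (B, M) noiseless simulated measurements;
       true_params:   (B, P) ground-truth parameters."""
    B, M = clean_signals.shape
    # Replicate each measurement with independent noise realizations.
    y = clean_signals.unsqueeze(0) + sigma * torch.randn(n_noise, B, M)
    est = net(y.reshape(n_noise * B, M)).reshape(n_noise, B, -1)
    mean_est = est.mean(dim=0)                        # (B, P)
    sq_bias = ((mean_est - true_params) ** 2).mean()  # bias of the mean estimate
    variance = est.var(dim=0, unbiased=False).mean()  # spread across realizations
    return sq_bias + lam * variance
```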
Related papers
- Error estimation for physics-informed neural networks with implicit Runge-Kutta methods [0.0]
In this work, we propose to use the NN's predictions in a high-order implicit Runge-Kutta (IRK) method.
The residuals in the implicit system of equations can be related to the NN's prediction error; hence, we can provide an error estimate at several points along a trajectory.
We find that this error estimate highly correlates with the NN's prediction error and that increasing the order of the IRK method improves this estimate.
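A toy illustration of this idea with the simplest IRK scheme (implicit midpoint): plugging a surrogate solution into the implicit update equation yields a residual that can serve as an error indicator. The ODE, the stand-in "NN prediction," and the step size are all assumptions for illustration.

```python
import numpy as np

f = lambda y: -y                       # ODE y' = -y, exact solution exp(-t)
t = np.linspace(0.0, 2.0, 41)
h = t[1] - t[0]
exact = np.exp(-t)
pred = exact + 1e-3 * t                # imperfect "NN solution" with drifting error

# Implicit-midpoint residual between consecutive predictions:
# r_n = u_{n+1} - u_n - h * f((u_n + u_{n+1}) / 2)
res = pred[1:] - pred[:-1] - h * f(0.5 * (pred[:-1] + pred[1:]))
err = np.abs(pred - exact)[1:]         # true pointwise prediction error

# The residual magnitude should track the error along the trajectory.
print("corr(|residual|, error) =", np.corrcoef(np.abs(res), err)[0, 1])
```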
arXiv Detail & Related papers (2024-01-10T15:18:56Z)
- Rician likelihood loss for quantitative MRI using self-supervised deep learning [4.937920705275674]
Previous quantitative MR imaging studies using self-supervised deep learning have reported biased parameter estimates at low SNR.
We introduce the negative log Rician likelihood (NLR) loss, which is numerically stable and accurate across the full range of tested SNRs.
We expect the development to benefit quantitative MR imaging techniques broadly, enabling more accurate estimation from noisy data.
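A hedged sketch of a numerically stable negative log Rician likelihood in PyTorch; stability comes from `torch.special.i0e` (the exponentially scaled Bessel function I0), which avoids overflow of I0 at high SNR. The reduction and parameterization are assumptions, not necessarily the paper's exact formulation.

```python
import torch

def nlr_loss(nu, m, sigma):
    """nu: noise-free model prediction; m: measured magnitude (> 0);
       sigma: noise std of the underlying complex Gaussian channels."""
    sigma = torch.as_tensor(sigma)
    z = m * nu / sigma**2
    # log I0(z) = log(i0e(z)) + z, since i0e(z) = exp(-z) * I0(z)
    log_i0 = torch.log(torch.special.i0e(z)) + z
    log_lik = (torch.log(m) - 2 * torch.log(sigma)
               - (m**2 + nu**2) / (2 * sigma**2) + log_i0)
    return -log_lik.mean()
```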
arXiv Detail & Related papers (2023-07-13T21:42:26Z)
- Learning Discretized Neural Networks under Ricci Flow [51.36292559262042]
We study Discretized Neural Networks (DNNs) composed of low-precision weights and activations.
During training, DNNs suffer from either infinite or zero gradients caused by the non-differentiable discretization function.
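To make the gradient problem concrete: `sign()` has zero derivative almost everywhere, so a plain backward pass learns nothing. The sketch below shows the straight-through estimator (STE), the standard workaround; it is a common baseline, not the Ricci-flow treatment proposed by this paper.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)          # non-differentiable discretization

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass the gradient straight through (clipped to |x| <= 1)
        # instead of using the true, almost-everywhere-zero derivative.
        return grad_out * (x.abs() <= 1).float()

x = torch.randn(4, requires_grad=True)
BinarizeSTE.apply(x).sum().backward()
print(x.grad)                         # nonzero despite the discrete forward pass
```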
arXiv Detail & Related papers (2023-02-07T10:51:53Z)
- Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z)
- Structure and Distribution Metric for Quantifying the Quality of Uncertainty: Assessing Gaussian Processes, Deep Neural Nets, and Deep Neural Operators for Regression [0.0]
We propose two comparison metrics that may be implemented to arbitrary dimensions in regression tasks.
The structure metric assesses the similarity in shape and location of the uncertainty relative to the true error, while the distribution metric quantifies how well the magnitudes of the two agree.
We apply these metrics to Gaussian Processes (GPs), Ensemble Deep Neural Nets (DNNs), and Ensemble Deep Neural Operators (DNOs) on high-dimensional and nonlinear test cases.
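The summary does not give the metrics' exact definitions, so the following is only a loose stand-in: correlation between the predicted uncertainty map and the absolute error map for "structure," and a 1-D Wasserstein distance between their values for "distribution."

```python
import numpy as np
from scipy.stats import wasserstein_distance

def structure_metric(pred_std, abs_err):
    # Similarity in shape/location, insensitive to overall scale.
    return np.corrcoef(pred_std.ravel(), abs_err.ravel())[0, 1]

def distribution_metric(pred_std, abs_err):
    # Agreement between the supported magnitudes of the two fields.
    return wasserstein_distance(pred_std.ravel(), abs_err.ravel())

rng = np.random.default_rng(0)
err = np.abs(rng.normal(size=(32, 32)))
std = 0.9 * err + 0.1 * rng.random((32, 32))   # a well-calibrated toy model
print(structure_metric(std, err), distribution_metric(std, err))
```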
arXiv Detail & Related papers (2022-03-09T04:16:31Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least-squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to non-linear settings via deep learning with bias constraints.
A second motivation for the resulting bias-constrained estimator (BCE) arises in applications where multiple estimates of the same unknown are averaged for improved performance.
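One minimal reading of "deep learning with bias constraints," sketched in PyTorch: augment the MSE loss with a penalty on the squared batch-mean error, pushing the estimator toward unbiasedness. The penalty weight `lam` and the batch-level bias proxy are illustrative assumptions.

```python
import torch

def bias_constrained_loss(est, truth, lam=10.0):
    """est, truth: (batch, n_params) tensors."""
    err = est - truth
    mse = (err ** 2).mean()
    sq_bias = (err.mean(dim=0) ** 2).mean()  # squared per-parameter mean error
    return mse + lam * sq_bias               # MSE plus a soft bias constraint
```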
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Neural Estimation of Statistical Divergences [24.78742908726579]
A modern method for estimating statistical divergences relies on parametrizing an empirical variational form by a neural network (NN).
In particular, there is a fundamental tradeoff between the two sources of error involved: approximation and empirical estimation.
We show that neural estimators with a slightly different NN growth-rate are near minimax rate-optimal, achieving the parametric convergence rate up to logarithmic factors.
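A canonical instance of such a variational form, sketched in PyTorch: the Donsker-Varadhan lower bound KL(P||Q) >= E_P[T] - log E_Q[exp(T)], maximized over a neural critic T. Architecture and optimizer settings are assumptions for illustration.

```python
import math
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

for _ in range(2000):
    p = torch.randn(512, 1) + 1.0    # samples from P = N(1, 1)
    q = torch.randn(512, 1)          # samples from Q = N(0, 1); true KL is 0.5
    # Donsker-Varadhan bound: E_P[T] - log E_Q[exp(T)]
    log_mean_exp = torch.logsumexp(critic(q), dim=0) - math.log(512)
    dv_bound = critic(p).mean() - log_mean_exp.squeeze()
    opt.zero_grad()
    (-dv_bound).backward()           # gradient ascent on the lower bound
    opt.step()

print(f"estimated KL(P||Q) ~ {dv_bound.item():.3f} (true value: 0.5)")
```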
arXiv Detail & Related papers (2021-10-07T17:42:44Z)
- Cramér-Rao bound-informed training of neural networks for quantitative MRI [11.964144201247198]
Neural networks are increasingly used to estimate parameters in quantitative MRI, in particular in magnetic resonance fingerprinting.
Their advantages are their superior speed and their dominance over non-efficient unbiased estimators.
We find, however, that heterogeneous parameters are hard to estimate.
We propose a well-founded Cramér-Rao bound (CRB) loss function, which normalizes the squared error with the respective CRB.
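A sketch of the CRB-normalized loss as the summary describes it: each parameter's squared error is divided by its Cramér-Rao bound, so all parameters, across all of parameter space, contribute on a comparable scale. Here the CRB values are assumed to be precomputed per training sample.

```python
import torch

def crb_loss(est, truth, crb):
    """est, truth, crb: (batch, n_params) tensors; crb holds each
       parameter's Cramér-Rao bound for the sample's noise level."""
    return (((est - truth) ** 2) / crb).mean()
```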
arXiv Detail & Related papers (2021-09-22T06:38:03Z)
- Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly-available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox: "scale" metrics perform well overall but perform poorly on subpartitions of the data.
We present two novel shape metrics, one data-independent and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
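A toy instance of the min-max formulation, under strong simplifying assumptions (the data-generating process, architectures, and alternating-update schedule are all illustrative): estimate g in the conditional-moment restriction E[Y - g(X) | Z] = 0 by playing min over g against max over a test-function network f, with objective E[f(Z)(Y - g(X))] - E[f(Z)^2].

```python
import torch
import torch.nn as nn

def mlp():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

g, f = mlp(), mlp()                        # estimator and adversary
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)

for step in range(4000):
    z = torch.randn(256, 1)
    x = z                                  # toy case: instrument equals regressor
    y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)
    game = (f(z) * (y - g(x))).mean() - (f(z) ** 2).mean()
    if step % 2 == 0:                      # adversary ascends the objective...
        opt_f.zero_grad(); (-game).backward(); opt_f.step()
    else:                                  # ...estimator descends it
        opt_g.zero_grad(); game.backward(); opt_g.step()

with torch.no_grad():                      # learned g vs the true function
    x_test = torch.linspace(-2, 2, 5).unsqueeze(1)
    print(torch.cat([g(x_test), torch.sin(3 * x_test)], dim=1))
```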
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.