Learning Bloch Simulations for MR Fingerprinting by Invertible Neural
Networks
- URL: http://arxiv.org/abs/2008.04139v2
- Date: Wed, 10 Mar 2021 12:32:49 GMT
- Title: Learning Bloch Simulations for MR Fingerprinting by Invertible Neural
Networks
- Authors: Fabian Balsiger, Alain Jungo, Olivier Scheidegger, Benjamin Marty,
Mauricio Reyes
- Abstract summary: Invertible neural networks (INNs) might be a feasible alternative to the current solely backward-based NNs for MRF reconstruction.
- Score: 0.8399688944263843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Magnetic resonance fingerprinting (MRF) enables fast and multiparametric MR
imaging. Despite fast acquisition, the state-of-the-art reconstruction of MRF
based on dictionary matching is slow and lacks scalability. To overcome these
limitations, neural network (NN) approaches estimating MR parameters from
fingerprints have been proposed recently. Here, we revisit NN-based MRF
reconstruction to jointly learn the forward process from MR parameters to
fingerprints and the backward process from fingerprints to MR parameters by
leveraging invertible neural networks (INNs). As a proof-of-concept, we perform
various experiments showing the benefit of learning the forward process, i.e.,
the Bloch simulations, for improved MR parameter estimation. The benefit
especially accentuates when MR parameter estimation is difficult due to MR
physical restrictions. Therefore, INNs might be a feasible alternative to the
current solely backward-based NNs for MRF reconstruction.
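As a rough illustration of the invertibility the abstract relies on, here is a minimal NumPy sketch of a single affine coupling block, the standard building block of INNs. The layer sizes, random weights, and the padded parameter vectors are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class AffineCoupling:
    """Splits the input in half and transforms one half conditioned on the
    other. The same weights define both the forward map and its exact inverse,
    so forward (parameters -> fingerprint) and backward (fingerprint ->
    parameters) processes share parameters, as in an INN."""

    def __init__(self, dim, hidden=32):
        self.half = dim // 2
        self.W1 = rng.normal(0, 0.1, (self.half, hidden))
        self.W2 = rng.normal(0, 0.1, (hidden, 2 * self.half))

    def _scale_shift(self, x1):
        h = np.tanh(x1 @ self.W1)            # small conditioning network
        s, t = np.split(h @ self.W2, 2, axis=-1)
        return np.tanh(s), t                 # bounded log-scale for stability

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self._scale_shift(x1)
        return np.concatenate([x1, x2 * np.exp(s) + t], axis=-1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self._scale_shift(y1)         # y1 == x1, so s, t are recoverable
        return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=-1)

# Map (padded) MR parameters to a fingerprint-like vector, then invert exactly.
layer = AffineCoupling(dim=8)
params = rng.normal(size=(4, 8))             # e.g. T1/T2 plus padding dims
fingerprint = layer.forward(params)
recovered = layer.inverse(fingerprint)
print(np.allclose(params, recovered))        # True: inversion is exact
```

Stacking several such blocks (alternating which half is transformed) yields an expressive map that remains invertible by construction.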
Related papers
- Hardware acceleration for ultra-fast Neural Network training on FPGA for MRF map reconstruction [67.75494660740776]
We propose an FPGA-based NN for real-time brain parameter reconstruction from MRF data.
This method could enable real-time brain analysis on mobile devices, revolutionizing clinical decision-making and telemedicine.
arXiv Detail & Related papers (2025-06-27T12:09:35Z)
- Fast Training of Recurrent Neural Networks with Stationary State Feedbacks [48.22082789438538]
Recurrent neural networks (RNNs) have recently demonstrated strong performance and faster inference than Transformers.
We propose a novel method that replaces BPTT with a fixed gradient feedback mechanism.
arXiv Detail & Related papers (2025-03-29T14:45:52Z)
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- MARVEL: MR Fingerprinting with Additional micRoVascular Estimates using bidirectional LSTMs [0.8901227918730564]
We propose an efficient way to simulate the MR signal coming from numerical voxels containing realistic microvascular networks.
Our results on 3 human volunteers suggest that our approach can quickly produce high-quality quantitative maps of microvascular parameters.
arXiv Detail & Related papers (2024-07-15T08:09:54Z)
- Understanding the Convergence in Balanced Resonate-and-Fire Neurons [1.4186974630564675]
Resonate-and-Fire (RF) neurons are an interesting complementary model for integrator neurons in spiking neural networks (SNNs).
The recently proposed balanced resonate-and-fire (BRF) neuron marked a significant methodological advance in terms of task performance, spiking and parameter efficiency.
This paper aims at providing further intuitions about how and why these convergence advantages emerge.
arXiv Detail & Related papers (2024-06-01T10:04:55Z)
- Joint MR sequence optimization beats pure neural network approaches for spin-echo MRI super-resolution [44.52688267348063]
Current MRI super-resolution (SR) methods only use existing contrasts acquired from typical clinical sequences as input for the neural network (NN).
We propose a known-operator learning approach to perform an end-to-end optimization of MR sequence and neural network parameters for SR-TSE.
arXiv Detail & Related papers (2023-05-12T14:40:25Z)
- A Long Short-term Memory Based Recurrent Neural Network for Interventional MRI Reconstruction [50.1787181309337]
We propose a convolutional long short-term memory (Conv-LSTM) based recurrent neural network (RNN), or ConvLR, to reconstruct interventional images with golden-angle radial sampling.
The proposed algorithm has the potential to achieve real-time i-MRI for DBS and can be used for general purpose MR-guided intervention.
arXiv Detail & Related papers (2022-03-28T14:03:45Z)
- Towards performant and reliable undersampled MR reconstruction via diffusion model sampling [67.73698021297022]
DiffuseRecon is a novel diffusion model-based MR reconstruction method.
It guides the generation process based on the observed signals.
It does not require additional training on specific acceleration factors.
arXiv Detail & Related papers (2022-03-08T02:25:38Z)
- Low-bit Quantization of Recurrent Neural Network Language Models Using Alternating Direction Methods of Multipliers [67.688697838109]
This paper presents a novel method to train quantized RNNLMs from scratch using alternating direction methods of multipliers (ADMM).
Experiments on two tasks suggest the proposed ADMM quantization achieved a model size compression factor of up to 31 times over the full precision baseline RNNLMs.
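The entry above trains quantized RNNLMs with ADMM. A toy version of the same alternating scheme can be sketched on a least-squares problem instead of an RNNLM; the data, the 0.5 quantization grid, and the penalty weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))                    # toy design matrix
w_true = np.array([0.5, -1.0, 0.0, 1.5, -0.5])   # already lies on the grid
y = X @ w_true

GRID = 0.5                                       # weights constrained to multiples of 0.5

def quantize(v):
    """Projection onto the quantized grid (the constraint set)."""
    return np.round(v / GRID) * GRID

rho = 1.0                                        # ADMM penalty weight
w = np.zeros(5); q = np.zeros(5); u = np.zeros(5)
XtX, Xty = X.T @ X, X.T @ y
for _ in range(50):
    # w-step: least squares with a penalty pulling w toward the quantized copy
    w = np.linalg.solve(XtX + rho * np.eye(5), Xty + rho * (q - u))
    q = quantize(w + u)                          # projection (quantization) step
    u = u + w - q                                # dual update
print(q)                                         # recovers w_true on the grid
```

The three-step loop (continuous update, projection, dual update) is the structure ADMM-based quantized training shares, with the least-squares step replaced by gradient updates of the RNNLM loss.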
arXiv Detail & Related papers (2021-11-29T09:30:06Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Learned Proximal Networks for Quantitative Susceptibility Mapping [9.061630971752464]
We present a Learned Proximal Convolutional Neural Network (LP-CNN) for solving the ill-posed QSM dipole inversion problem.
This framework is believed to be the first deep learning QSM approach that can naturally handle an arbitrary number of phase input measurements.
arXiv Detail & Related papers (2020-08-11T22:35:24Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Compressive MR Fingerprinting reconstruction with Neural Proximal Gradient iterations [27.259916894535404]
ProxNet is a learned proximal gradient descent framework that incorporates the forward acquisition and Bloch dynamic models within a recurrent learning mechanism.
Our numerical experiments show that the ProxNet can achieve a superior quantitative inference accuracy, much smaller storage requirement, and a comparable runtime to the recent deep learning MRF baselines.
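ProxNet's learned recurrent mechanism is not reproduced here; the following is a generic, hand-tuned proximal gradient iteration of the kind such networks unroll, with soft-thresholding standing in for the learned proximal operator. The toy forward model, sparsity pattern, and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 50)) / np.sqrt(20)      # toy compressive forward model
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -2.0, 1.5]           # sparse ground truth
y = A @ x_true                                   # undersampled measurements

def soft_threshold(v, lam):
    """Proximal operator of the l1 norm; ProxNet learns this step instead."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2           # safe step (inverse Lipschitz const.)
x = np.zeros(50)
for _ in range(300):
    x = x - step * A.T @ (A @ x - y)             # data-consistency gradient step
    x = soft_threshold(x, step * 0.05)           # proximal (regularization) step

print(np.linalg.norm(A @ x - y) < np.linalg.norm(y))  # True: data consistency improved
```

Unrolled methods like ProxNet keep this two-step structure but train the proximal step (and often the step sizes) end-to-end over a fixed number of iterations.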
arXiv Detail & Related papers (2020-06-27T03:52:22Z)
- Modal Regression based Structured Low-rank Matrix Recovery for Multi-view Learning [70.57193072829288]
Low-rank Multi-view Subspace Learning has shown great potential in cross-view classification in recent years.
Existing LMvSL based methods are incapable of well handling view discrepancy and discriminancy simultaneously.
We propose Structured Low-rank Matrix Recovery (SLMR), a unique method of effectively removing view discrepancy and improving discriminancy.
arXiv Detail & Related papers (2020-03-22T03:57:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.