Learned Proximal Networks for Quantitative Susceptibility Mapping
- URL: http://arxiv.org/abs/2008.05024v1
- Date: Tue, 11 Aug 2020 22:35:24 GMT
- Title: Learned Proximal Networks for Quantitative Susceptibility Mapping
- Authors: Kuo-Wei Lai, Manisha Aggarwal, Peter van Zijl, Xu Li, Jeremias Sulam
- Abstract summary: We present a Learned Proximal Convolutional Neural Network (LP-CNN) for solving the ill-posed QSM dipole inversion problem.
This framework is believed to be the first deep learning QSM approach that can naturally handle an arbitrary number of phase input measurements.
- Score: 9.061630971752464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantitative Susceptibility Mapping (QSM) estimates tissue magnetic
susceptibility distributions from Magnetic Resonance (MR) phase measurements by
solving an ill-posed dipole inversion problem. Conventional single orientation
QSM methods usually employ regularization strategies to stabilize such
inversion, but may suffer from streaking artifacts or over-smoothing. Multiple
orientation QSM such as calculation of susceptibility through multiple
orientation sampling (COSMOS) can give well-conditioned inversion and an
artifact-free solution but incurs high acquisition costs. On the other hand,
Convolutional Neural Networks (CNN) show great potential for medical image
reconstruction, albeit often with limited interpretability. Here, we present a
Learned Proximal Convolutional Neural Network (LP-CNN) for solving the
ill-posed QSM dipole inversion problem in an iterative proximal gradient
descent fashion. This approach combines the strengths of data-driven
restoration priors and the clear interpretability of iterative solvers that can
take into account the physical model of dipole convolution. During training,
our LP-CNN learns an implicit regularizer via its proximal operator, enabling the
decoupling between the forward operator and the data-driven parameters in the
reconstruction algorithm. More importantly, this framework is believed to be
the first deep learning QSM approach that can naturally handle an arbitrary
number of phase input measurements without the need for any ad-hoc rotation or
re-training. We demonstrate that the LP-CNN provides state-of-the-art
reconstruction results compared to both traditional and deep learning methods
while allowing for more flexibility in the reconstruction process.
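To make the reconstruction scheme concrete, below is a minimal sketch of proximal gradient descent with a learned proximal operator for multi-orientation dipole inversion. It is an illustration under stated assumptions, not the authors' implementation: the ProxNet architecture, step size, iteration count, and the use of normalized local field maps are placeholders, and only the overall structure (a data-fidelity gradient step through the known dipole kernels followed by a learned proximal step) reflects the approach described in the abstract.

import torch
import torch.nn as nn
import torch.fft as fft

def dipole_kernel(shape, b0_dir=(0.0, 0.0, 1.0)):
    # Unit dipole kernel in k-space: D(k) = 1/3 - (k . b0)^2 / |k|^2
    kz, ky, kx = torch.meshgrid(*[fft.fftfreq(n) for n in shape], indexing="ij")
    k = torch.stack([kz, ky, kx])                       # (3, Z, Y, X)
    b0 = torch.tensor(b0_dir).reshape(3, 1, 1, 1)
    k2 = (k ** 2).sum(dim=0)
    d = 1.0 / 3.0 - (k * b0).sum(dim=0) ** 2 / k2.clamp_min(1e-12)
    d[0, 0, 0] = 0.0                                    # zero out the k = 0 singularity
    return d

class ProxNet(nn.Module):
    # Learned proximal operator; a small residual 3D CNN stands in for the paper's network.
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1))
    def forward(self, x):
        return x + self.net(x)

def lp_cnn_recon(phases, dipoles, prox, n_iter=8, step=1.0):
    # Iterative proximal gradient descent: any number of phase (local field)
    # measurements, each with its own dipole kernel, enters only through the
    # data-fidelity gradient, so the learned prox is decoupled from the forward operator.
    chi = torch.zeros_like(phases[0])
    for _ in range(n_iter):
        grad = torch.zeros_like(chi)
        for y, d in zip(phases, dipoles):               # sum_i D_i^T (D_i chi - y_i)
            res = fft.ifftn(d * fft.fftn(chi)).real - y
            grad = grad + fft.ifftn(d * fft.fftn(res)).real
        chi = chi - (step / len(phases)) * grad         # gradient step on the data term
        chi = prox(chi[None, None])[0, 0]               # learned proximal step
    return chi

# Example (random data, purely illustrative):
# shape = (32, 32, 32)
# dirs = [(0.0, 0.0, 1.0), (0.0, 0.5**0.5, 0.5**0.5)]
# dipoles = [dipole_kernel(shape, b) for b in dirs]
# phases = [torch.randn(shape) * 0.01 for _ in dirs]
# chi_hat = lp_cnn_recon(phases, dipoles, ProxNet())

In training, the proximal network's weights would be learned end-to-end by unrolling a fixed number of these iterations and supervising the output against reference susceptibility maps (e.g., COSMOS); because each measurement enters only through its own dipole kernel in the gradient step, the same trained network can accept an arbitrary number of input orientations.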
Related papers
- Affine Transformation Edited and Refined Deep Neural Network for Quantitative Susceptibility Mapping [10.772763441035945]
We propose an end-to-end AFfine Transformation Edited and Refined (AFTER) deep neural network for Quantitative Susceptibility Mapping (QSM).
It is robust to arbitrary acquisition orientations and to spatial resolutions as fine as 0.6 mm isotropic.
arXiv Detail & Related papers (2022-11-25T07:54:26Z)
- Stable Deep MRI Reconstruction using Generative Priors [13.400444194036101]
We propose a novel deep neural network based regularizer which is trained in a generative setting on reference magnitude images only.
The results demonstrate competitive performance, on par with state-of-the-art end-to-end deep learning methods.
arXiv Detail & Related papers (2022-10-25T08:34:29Z)
- DeepSTI: Towards Tensor Reconstruction using Fewer Orientations in Susceptibility Tensor Imaging [9.79660375437555]
Susceptibility tensor imaging (STI) is an emerging magnetic resonance imaging technique that characterizes the anisotropic tissue magnetic susceptibility with a second-order tensor model.
STI has the potential to provide information for the reconstruction of white matter fiber pathways and detection of myelin changes in the brain at mm resolution or less.
However, the application of STI in vivo has been hindered by its cumbersome and time-consuming acquisition requirement of measuring susceptibility induced MR phase changes.
arXiv Detail & Related papers (2022-09-09T20:03:53Z)
- MA-RECON: Mask-aware deep-neural-network for robust fast MRI k-space interpolation [3.0821115746307672]
High-quality reconstruction of MRI images from under-sampled k-space data is crucial for shortening MRI acquisition times and ensuring superior temporal resolution.
This paper introduces MA-RECON, an innovative mask-aware deep neural network (DNN) architecture and associated training method.
It implements a tailored training approach that leverages data generated with a variety of under-sampling masks to encourage the model to generalize across the under-sampled MRI reconstruction problem.
arXiv Detail & Related papers (2022-08-31T15:57:38Z)
- Towards performant and reliable undersampled MR reconstruction via diffusion model sampling [67.73698021297022]
DiffuseRecon is a novel diffusion model-based MR reconstruction method.
It guides the generation process based on the observed signals.
It does not require additional training on specific acceleration factors.
arXiv Detail & Related papers (2022-03-08T02:25:38Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z)
- CycleQSM: Unsupervised QSM Deep Learning using Physics-Informed CycleGAN [23.80331349122883]
We propose a novel unsupervised QSM deep learning method using physics-informed cycleGAN.
In contrast to the conventional cycleGAN, our novel cycleGAN has only one generator and one discriminator thanks to the known dipole kernel.
Experimental results confirm that the proposed method provides more accurate QSM maps compared to the existing deep learning approaches.
arXiv Detail & Related papers (2020-12-07T16:46:15Z)
- Short-Term Memory Optimization in Recurrent Neural Networks by Autoencoder-based Initialization [79.42778415729475]
We explore an alternative solution based on explicit memorization using linear autoencoders for sequences.
We show how such pretraining can better support solving hard classification tasks with long sequences.
We show that the proposed approach achieves a much lower reconstruction error for long sequences and a better gradient propagation during the finetuning phase.
arXiv Detail & Related papers (2020-11-05T14:57:16Z)
- Investigating the Scalability and Biological Plausibility of the Activation Relaxation Algorithm [62.997667081978825]
Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation of error algorithm.
We show that the algorithm can be further simplified and made more biologically plausible by introducing a learnable set of backwards weights.
We also investigate whether another biologically implausible assumption of the original AR algorithm -- the frozen feedforward pass -- can be relaxed without damaging performance.
arXiv Detail & Related papers (2020-10-13T08:02:38Z)
- Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
arXiv Detail & Related papers (2020-04-20T18:12:56Z)