Deep Manifold Learning for Dynamic MR Imaging
- URL: http://arxiv.org/abs/2104.01102v1
- Date: Tue, 9 Mar 2021 02:18:08 GMT
- Title: Deep Manifold Learning for Dynamic MR Imaging
- Authors: Ziwen Ke, Zhuo-Xu Cui, Wenqi Huang, Jing Cheng, Sen Jia, Haifeng Wang,
Xin Liu, Hairong Zheng, Leslie Ying, Yanjie Zhu, Dong Liang
- Abstract summary: We develop a deep learning method on a nonlinear manifold to explore the temporal redundancy of dynamic signals to reconstruct cardiac MRI data.
The proposed method obtains improved reconstructions compared with a compressed sensing (CS) method, k-t SLR, and two state-of-the-art deep learning-based methods, DC-CNN and CRNN.
- Score: 30.70648993986445
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purpose: To develop a deep learning method on a nonlinear manifold to explore
the temporal redundancy of dynamic signals to reconstruct cardiac MRI data from
highly undersampled measurements.
Methods: Cardiac MR image reconstruction is modeled as general compressed
sensing (CS) based optimization on a low-rank tensor manifold. The nonlinear
manifold is designed to characterize the temporal correlation of dynamic
signals. Iterative procedures can be obtained by solving the optimization model
on the manifold, including gradient calculation, projection of the gradient to
tangent space, and retraction of the tangent space to the manifold. The
iterative procedures on the manifold are unrolled into a neural network, dubbed
Manifold-Net. Manifold-Net is trained using in vivo data acquired with a
retrospective electrocardiogram (ECG)-gated segmented bSSFP sequence.
Results: Experimental results at high acceleration factors demonstrate that the
proposed method achieves improved reconstructions compared with a compressed
sensing (CS) method, k-t SLR, and two state-of-the-art deep learning-based
methods, DC-CNN and CRNN.
Conclusion: This work represents the first study unrolling the optimization
on manifolds into neural networks. Specifically, the designed low-rank manifold
provides a new technical route for applying low-rank priors in dynamic MR
imaging.
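The three iterative steps named in the abstract (Euclidean gradient, projection onto the tangent space, retraction back to the manifold) can be sketched for the simpler case of a fixed-rank matrix manifold. This is a generic, hand-rolled illustration under assumed toy data, not the paper's Manifold-Net, which operates on a low-rank tensor manifold and learns the unrolled steps end-to-end:

```python
import numpy as np

def riemannian_step(X, euclid_grad, rank, step):
    """One iteration on the fixed-rank matrix manifold:
    gradient -> tangent-space projection -> retraction (truncated SVD)."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    U, Vt = U[:, :rank], Vt[:rank, :]
    G = euclid_grad(X)                  # Euclidean gradient
    Pu, Pv = U @ U.T, Vt.T @ Vt
    PG = Pu @ G + G @ Pv - Pu @ G @ Pv  # project G onto the tangent space at X
    Y = X - step * PG                   # gradient step in the tangent space
    Uy, sy, Vty = np.linalg.svd(Y, full_matrices=False)
    return (Uy[:, :rank] * sy[:rank]) @ Vty[:rank, :]  # retract to rank r

# toy example: denoise a rank-2 matrix under an exact-rank constraint
rng = np.random.default_rng(0)
target = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))
X = target + 0.5 * rng.standard_normal((8, 8))
for _ in range(50):
    X = riemannian_step(X, lambda Z: Z - target, rank=2, step=0.5)
print(np.linalg.matrix_rank(X))  # the iterate stays exactly rank 2
```

The truncated SVD here plays the role of the retraction: after each tangent-space step, it maps the iterate back onto the rank-r manifold, which is how a low-rank prior is enforced by construction rather than by a penalty term.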
Related papers
- MIPS-Fusion: Multi-Implicit-Submaps for Scalable and Robust Online Neural RGB-D Reconstruction [15.853932110058585]
We introduce a robust and scalable online RGB-D reconstruction method based on a novel neural implicit representation -- multi-implicit-submap.
In our method, neural submaps are incrementally allocated alongside the scanning trajectory and efficiently learned with local neural bundle adjustments.
For the first time, randomized optimization is made possible in neural tracking with several key designs to the learning process, enabling efficient and robust tracking even under fast camera motions.
arXiv Detail & Related papers (2023-08-17T02:33:16Z)
- Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse Problems [64.29491112653905]
We propose a novel and efficient diffusion sampling strategy that synergistically combines the diffusion sampling and Krylov subspace methods.
Specifically, we prove that if tangent space at a denoised sample by Tweedie's formula forms a Krylov subspace, then the CG with the denoised data ensures the data consistency update to remain in the tangent space.
Our proposed method achieves more than 80 times faster inference time than the previous state-of-the-art method.
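A minimal, generic version of a CG-based data-consistency step (warm-started at a denoised sample, with early stopping so the update stays in a small Krylov subspace) might look like the sketch below; the operator `A`, data `y`, and starting point are illustrative assumptions, not the paper's diffusion setup:

```python
import numpy as np

def cg_data_consistency(A, y, x0, iters=5):
    """A few conjugate-gradient iterations on the normal equations
    A.T @ A @ x = A.T @ y, warm-started at the (denoised) sample x0."""
    x = x0.copy()
    r = A.T @ (y - A @ x)   # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        if rs < 1e-28:      # already converged
            break
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# toy linear inverse problem with a random "measurement" operator
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
y = A @ x_true
x = cg_data_consistency(A, y, x0=np.zeros(10), iters=10)
print(float(np.linalg.norm(A @ x - y)))  # residual after the CG iterations
```

Each CG iterate lies in the Krylov subspace generated from the starting residual, which is the structural fact the paper exploits to keep data-consistency updates compatible with the denoised sample.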
arXiv Detail & Related papers (2023-03-10T07:42:49Z)
- Reparameterization through Spatial Gradient Scaling [69.27487006953852]
Reparameterization aims to improve the generalization of deep neural networks by transforming convolutional layers into equivalent multi-branched structures during training.
We present a novel spatial gradient scaling method to redistribute learning focus among weights in convolutional networks.
arXiv Detail & Related papers (2023-03-05T17:57:33Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ an implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
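The stability benefit of an implicit (backward-Euler-style) gradient update can be illustrated on a one-dimensional quadratic loss. This is a hand-rolled sketch of the general idea, not the paper's ISGD procedure for PINNs; the curvature and step size are assumed values chosen to make the explicit update diverge:

```python
# Loss L(t) = c/2 * t^2 with stiff curvature c. Explicit gradient descent
# with lr > 2/c oscillates and diverges, while the implicit update
# t_new = t - lr * L'(t_new) contracts toward the minimum for any lr.
c, lr = 10.0, 0.5

def explicit_step(t):
    return t - lr * c * t          # t_new = (1 - lr*c) * t, |1 - lr*c| = 4

def implicit_step(t):
    # solve t_new = t - lr * c * t_new in closed form for the quadratic
    return t / (1.0 + lr * c)      # contraction factor 1/6

t_exp = t_imp = 1.0
for _ in range(10):
    t_exp = explicit_step(t_exp)
    t_imp = implicit_step(t_imp)
print(abs(t_exp), abs(t_imp))      # explicit blows up, implicit -> 0
```

For general (non-quadratic) losses the implicit equation has no closed form and must be solved numerically at each step, which is the extra cost the implicit scheme trades for its stability.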
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- PS-Net: Deep Partially Separable Modelling for Dynamic Magnetic Resonance Imaging [6.974773529651233]
We propose a learned low-rank method for dynamic MR imaging.
Experiments on the cardiac cine dataset show that the proposed model outperforms the state-of-the-art compressed sensing (CS) methods.
arXiv Detail & Related papers (2022-05-09T07:06:02Z)
- Learning Optimal K-space Acquisition and Reconstruction using Physics-Informed Neural Networks [46.751292014516025]
Deep neural networks have been applied to reconstruct undersampled k-space data and have shown improved reconstruction performance.
This work proposes a novel framework to learn k-space sampling trajectories by considering it as an Ordinary Differential Equation (ODE) problem.
Experiments were conducted on different in vivo datasets (e.g., brain and knee images) acquired with different sequences.
arXiv Detail & Related papers (2022-04-05T20:28:42Z)
- Towards performant and reliable undersampled MR reconstruction via diffusion model sampling [67.73698021297022]
DiffuseRecon is a novel diffusion model-based MR reconstruction method.
It guides the generation process based on the observed signals.
It does not require additional training on specific acceleration factors.
arXiv Detail & Related papers (2022-03-08T02:25:38Z)
- Cogradient Descent for Dependable Learning [64.02052988844301]
We propose a dependable learning method based on the Cogradient Descent (CoGD) algorithm to address bilinear optimization problems.
CoGD is introduced to solve bilinear problems when one variable has a sparsity constraint.
It can also be used to decompose the association of features and weights, which further generalizes the method to better train convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-06-20T04:28:20Z)
- Deep Low-rank Prior in Dynamic MR Imaging [30.70648993986445]
We introduce two novel schemes to introduce the learnable low-rank prior into deep network architectures.
In the unrolling manner, we put forward a model-based unrolling sparse and low-rank network for dynamic MR imaging, dubbed SLR-Net.
In the plug-and-play manner, we present a plug-and-play LR network module that can be easily embedded into any other dynamic MR neural networks.
arXiv Detail & Related papers (2020-06-22T09:26:10Z)
- Geometric Approaches to Increase the Expressivity of Deep Neural Networks for MR Reconstruction [41.62169556793355]
Deep learning approaches have been extensively investigated to reconstruct images from accelerated magnetic resonance image (MRI) acquisition.
It is not clear how to choose a suitable network architecture to balance the trade-off between network complexity and performance.
This paper proposes a systematic geometric approach using bootstrapping and subnetwork aggregation to increase the expressivity of the underlying neural network.
arXiv Detail & Related papers (2020-03-17T14:18:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.