Deep Low-rank plus Sparse Network for Dynamic MR Imaging
- URL: http://arxiv.org/abs/2010.13677v3
- Date: Tue, 20 Jul 2021 12:51:35 GMT
- Title: Deep Low-rank plus Sparse Network for Dynamic MR Imaging
- Authors: Wenqi Huang, Ziwen Ke, Zhuo-Xu Cui, Jing Cheng, Zhilang Qiu, Sen Jia,
Leslie Ying, Yanjie Zhu, Dong Liang
- Abstract summary: We propose a model-based low-rank plus sparse network, dubbed L+S-Net, for dynamic MR reconstruction.
Experiments on retrospective and prospective cardiac cine datasets show that the proposed model outperforms state-of-the-art CS and existing deep learning methods.
- Score: 18.09395940969876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In dynamic magnetic resonance (MR) imaging, low-rank plus sparse (L+S)
decomposition, or robust principal component analysis (PCA), has achieved
stunning performance. However, the selection of the parameters of L+S is
empirical, and the acceleration rate is limited, which are common failings of
iterative compressed sensing MR imaging (CS-MRI) reconstruction methods. Many
deep learning approaches have been proposed to address these issues, but few of
them use a low-rank prior. In this paper, a model-based low-rank plus sparse
network, dubbed L+S-Net, is proposed for dynamic MR reconstruction. In
particular, we use an alternating linearized minimization method to solve the
optimization problem with low-rank and sparse regularization. Learned soft
singular value thresholding is introduced to ensure the clear separation of the
L component and S component. Then, the iterative steps are unrolled into a
network in which the regularization parameters are learnable. We prove that the
proposed L+S-Net achieves global convergence under two standard assumptions.
Experiments on retrospective and prospective cardiac cine datasets show that
the proposed model outperforms state-of-the-art CS and existing deep learning
methods and has great potential for extremely high acceleration factors (up to
24x).
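For orientation, the scheme the abstract describes alternates three steps per iteration: a singular-value thresholding (SVT) update for the low-rank component L, a soft-thresholding update for the sparse component S (typically applied in the temporal Fourier domain), and a data-consistency step against the undersampled k-t measurements d. The NumPy sketch below shows one such iteration in the spirit of classical L+S / robust PCA reconstruction; the per-frame 1-D FFT encoding operator and the fixed thresholds tau_L and tau_S are simplifying assumptions made only for illustration, whereas in L+S-Net the thresholds and step sizes are learnable and the iterations are unrolled into network stages.

    import numpy as np

    def svt(X, tau):
        # Singular-value soft-thresholding: proximal operator of tau * nuclear norm.
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vh

    def soft(x, tau):
        # Complex soft-thresholding: proximal operator of tau * l1 norm.
        mag = np.maximum(np.abs(x), 1e-12)
        return x * np.maximum(mag - tau, 0.0) / mag

    def ls_iteration(M, S, d, mask, tau_L, tau_S):
        # One L+S update on the Casorati matrix M (pixels x frames).
        # E is a toy encoding operator (undersampled per-frame FFT); a real
        # implementation would use multi-coil 2-D spatial Fourier encoding.
        E  = lambda x: mask * np.fft.fft(x, axis=0, norm="ortho")
        EH = lambda y: np.fft.ifft(mask * y, axis=0, norm="ortho")
        Ft  = lambda x: np.fft.fft(x, axis=1, norm="ortho")    # temporal Fourier transform
        FtH = lambda y: np.fft.ifft(y, axis=1, norm="ortho")

        L = svt(M - S, tau_L)                  # low-rank update
        S = FtH(soft(Ft(M - L), tau_S))        # sparse update in the temporal-frequency domain
        M = L + S - EH(E(L + S) - d)           # data-consistency (gradient) step
        return M, L, S

In the unrolled network, each stage carries its own learnable thresholds and step size and is trained end to end, which is what removes the empirical parameter selection criticized in the abstract.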
Related papers
- Dynamic MRI reconstruction using low-rank plus sparse decomposition with smoothness regularization [13.784906186556016]
We propose a smoothness-regularized L+S (SR-L+S) model for dMRI reconstruction from highly undersampled k-t-space data.
We exploit joint low-rank and smooth priors on the background component of dMRI to better capture both its global and local temporal correlated structures.
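A plausible way to write the model sketched above, shown only as an illustration rather than the authors' exact formulation, augments the usual L+S objective with a temporal smoothness penalty on the background (low-rank) component:

    \min_{L,S} \ \tfrac{1}{2}\,\|E(L+S) - d\|_2^2 + \lambda_L \|L\|_{*} + \lambda_{sm} \|\nabla_t L\|_2^2 + \lambda_S \|T S\|_1

Here E is the undersampled k-t encoding operator, d the measured data, \nabla_t a temporal finite-difference operator encouraging smoothness of the background L, and T a sparsifying transform for S; these symbols and the choice of a quadratic smoothness term are assumptions made for illustration.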
arXiv Detail & Related papers (2024-01-30T11:52:35Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitude.
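As a rough sketch of the shrinkage idea summarized above (and not the authors' exact ISS-P schedule), one sparsification step might shrink the currently least important weights by a small amount proportional to their own magnitude instead of zeroing them outright; the fraction p and shrink factor alpha below are assumed hyperparameters.

    import numpy as np

    def soft_shrink_step(weights, p=0.2, alpha=0.1):
        # Shrink the p-fraction of smallest-magnitude weights by alpha times their
        # own magnitude (illustrative only; the real ISS-P rule may differ).
        w = weights.copy()
        k = int(p * w.size)
        if k == 0:
            return w
        magnitudes = np.abs(w).ravel()
        threshold = np.partition(magnitudes, k - 1)[k - 1]  # k-th smallest magnitude
        small = np.abs(w) <= threshold                      # "unimportant" weights
        w[small] -= alpha * w[small]                        # soft shrink toward zero
        return w

Repeating such a step during training gradually sparsifies a randomly initialized network while keeping all weights trainable, in contrast to hard pruning that permanently removes them.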
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- DCS-RISR: Dynamic Channel Splitting for Efficient Real-world Image Super-Resolution [15.694407977871341]
Real-world image super-resolution (RISR) has received increased focus for improving the quality of SR images under unknown complex degradation.
Existing methods rely on the heavy SR models to enhance low-resolution (LR) images of different degradation levels.
We propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR.
arXiv Detail & Related papers (2022-12-15T04:34:57Z)
- Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z)
- Meta-Learning based Degradation Representation for Blind Super-Resolution [54.93926549648434]
We propose a Meta-Learning based Region Degradation Aware SR Network (MRDA).
MRDA rapidly adapts to the specific complex degradation after several iterations and extracts implicit degradation information.
A teacher network MRDA$_T$ is designed to further utilize the degradation information extracted by the meta-learning network (MLN) for SR.
arXiv Detail & Related papers (2022-07-28T09:03:00Z)
- PS-Net: Deep Partially Separable Modelling for Dynamic Magnetic Resonance Imaging [6.974773529651233]
We propose a learned low-rank method for dynamic MR imaging.
Experiments on the cardiac cine dataset show that the proposed model outperforms the state-of-the-art compressed sensing (CS) methods.
arXiv Detail & Related papers (2022-05-09T07:06:02Z)
- Unsupervised Single Image Super-resolution Under Complex Noise [60.566471567837574]
This paper proposes a model-based unsupervised SISR method to deal with the general SISR task with unknown degradations.
The proposed method evidently surpasses the current state-of-the-art (SotA) method by about 1 dB in PSNR, not only with a lighter model (0.34M vs. 2.40M parameters) but also at a faster speed.
arXiv Detail & Related papers (2021-07-02T11:55:40Z)
- LAPAR: Linearly-Assembled Pixel-Adaptive Regression Network for Single Image Super-Resolution and Beyond [75.37541439447314]
Single image super-resolution (SISR) deals with a fundamental problem of upsampling a low-resolution (LR) image to its high-resolution (HR) version.
This paper proposes a linearly-assembled pixel-adaptive regression network (LAPAR) to strike a sweet spot of deep model complexity and resulting SISR quality.
arXiv Detail & Related papers (2021-05-21T15:47:18Z)
- Neural Network-based Reconstruction in Compressed Sensing MRI Without Fully-sampled Training Data [17.415937218905125]
CS-MRI has shown promise in reconstructing under-sampled MR images.
Deep learning models have been developed that model the iterative nature of classical techniques by unrolling iterations in a neural network.
In this paper, we explore a novel strategy to train an unrolled reconstruction network in an unsupervised fashion by adopting a loss function widely-used in classical optimization schemes.
arXiv Detail & Related papers (2020-07-29T17:46:55Z)
- Deep Low-rank Prior in Dynamic MR Imaging [30.70648993986445]
We introduce two novel schemes for incorporating the learnable low-rank prior into deep network architectures.
In the unrolling manner, we put forward a model-based unrolling sparse and low-rank network for dynamic MR imaging, dubbed SLR-Net.
In the plug-and-play manner, we present a plug-and-play LR network module that can be easily embedded into any other dynamic MR neural networks.
arXiv Detail & Related papers (2020-06-22T09:26:10Z)
- Modal Regression based Structured Low-rank Matrix Recovery for Multi-view Learning [70.57193072829288]
Low-rank Multi-view Subspace Learning (LMvSL) has shown great potential in cross-view classification in recent years.
Existing LMvSL based methods are incapable of well handling view discrepancy and discriminancy simultaneously.
We propose Structured Low-rank Matrix Recovery (SLMR), a unique method of effectively removing view discrepancy and improving discriminancy.
arXiv Detail & Related papers (2020-03-22T03:57:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the automatically generated content and is not responsible for any consequences of its use.