T$^2$LR-Net: An Unrolling Reconstruction Network Learning Transformed
Tensor Low-Rank prior for Dynamic MR Imaging
- URL: http://arxiv.org/abs/2209.03832v1
- Date: Thu, 8 Sep 2022 14:11:02 GMT
- Title: T$^2$LR-Net: An Unrolling Reconstruction Network Learning Transformed
Tensor Low-Rank prior for Dynamic MR Imaging
- Authors: Yinghao Zhang, Yue Hu
- Abstract summary: We introduce a flexible model based on TTNN with the ability to exploit the tensor low-rank prior of a transformed domain.
We also introduce a model-based deep unrolling reconstruction network to learn the transformed tensor low-rank prior.
The proposed framework can provide improved recovery results compared with the state-of-the-art optimization-based and unrolling network-based methods.
- Score: 6.101233798770526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although methods exploiting the tensor low-rank prior have flourished in
high-dimensional data processing and achieved satisfying performance, their
application to dynamic magnetic resonance (MR) image reconstruction remains
limited. In this paper, we concentrate on the tensor singular value
decomposition (t-SVD), which is built on the Fast Fourier Transform (FFT) and
therefore provides only a fixed, limited tensor low-rank prior in the FFT
domain; its effectiveness depends heavily on how well the data match that
domain. By generalizing the FFT to an arbitrary unitary transformation via the
transformed t-SVD and proposing the transformed tensor nuclear norm (TTNN), we
introduce a flexible model based on TTNN that can exploit the tensor low-rank
prior of a transformed domain drawn from a much larger transformation space. We
design an iterative optimization algorithm based on the alternating direction
method of multipliers (ADMM) and unroll it into a model-based deep
reconstruction network, T$^2$LR-Net, that learns the transformed tensor
low-rank prior. A convolutional neural network (CNN) is incorporated within
T$^2$LR-Net to learn the best-matched transform from the dynamic MR image
dataset. The unrolled network also offers a new perspective on using the
low-rank prior by exploiting it in the CNN-extracted feature domain.
Experimental results on two cardiac cine MR datasets demonstrate that the
proposed framework provides improved recovery compared with state-of-the-art
optimization-based and unrolling network-based methods.
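
To make the TTNN and its role in the ADMM iterations concrete, here is a minimal NumPy sketch, not the authors' implementation: the function names are ours, the third mode is assumed to be the transform (temporal) mode, and T$^2$LR-Net replaces the fixed unitary matrix U below with a CNN-learned transform.

```python
import numpy as np

def ttnn(X, U):
    """Transformed tensor nuclear norm (TTNN) of a 3-way tensor X.

    U is a unitary matrix applied along the third mode, generalizing the
    FFT used by the standard t-SVD. The TTNN is the sum of the nuclear
    norms of the frontal slices in the transformed domain.
    """
    X_hat = np.einsum('kj,mnj->mnk', U, X)  # mode-3 unitary transform
    return sum(np.linalg.svd(X_hat[:, :, k], compute_uv=False).sum()
               for k in range(X.shape[2]))

def ttnn_svt(X, U, tau):
    """Proximal operator of tau * TTNN: singular value thresholding (SVT)
    of each transformed frontal slice, followed by the inverse transform.
    This closed-form step is the low-rank sub-problem solved inside each
    ADMM iteration.
    """
    X_hat = np.einsum('kj,mnj->mnk', U, X)
    for k in range(X.shape[2]):
        u, s, vh = np.linalg.svd(X_hat[:, :, k], full_matrices=False)
        X_hat[:, :, k] = (u * np.maximum(s - tau, 0.0)) @ vh
    # U is unitary, so its conjugate transpose inverts the transform.
    return np.einsum('ik,mnk->mni', U.conj().T, X_hat)
```

With U set to the normalized DFT matrix, ttnn reduces to the FFT-based tensor nuclear norm of the standard t-SVD; in the unrolled network, the corresponding low-rank step is instead carried out in the CNN-extracted feature domain.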
Related papers
- Deep Learning-based MRI Reconstruction with Artificial Fourier Transform (AFT)-Net [14.146848823672677]
We introduce a unified complex-valued deep learning framework, the Artificial Fourier Transform Network (AFTNet).
AFTNet can be readily used to solve image inverse problems in domain transformation.
We show that AFTNet achieves superior accelerated MRI reconstruction compared to existing approaches.
arXiv Detail & Related papers (2023-12-18T02:50:45Z) - Image Reconstruction for Accelerated MR Scan with Faster Fourier
Convolutional Neural Networks [87.87578529398019]
Partial scan is a common approach to accelerate Magnetic Resonance Imaging (MRI) data acquisition in both 2D and 3D settings.
We propose a novel convolutional operator called Faster Fourier Convolution (FasterFC) to replace two consecutive convolution operations.
We present a 2D accelerated MRI method, FasterFC-End-to-End-VarNet, which uses FasterFC to improve the sensitivity maps and reconstruction quality.
We also present a 3D accelerated MRI method, the FasterFC-based Single-to-group Network (FAS-Net), which utilizes a single-to-group algorithm to guide k-space domain reconstruction.
arXiv Detail & Related papers (2023-06-05T13:53:57Z) - Neural Functional Transformers [99.98750156515437]
This paper uses the attention mechanism to define a novel set of permutation equivariant weight-space layers called neural functional Transformers (NFTs).
NFTs respect weight-space permutation symmetries while incorporating the advantages of attention, which have exhibited remarkable success across multiple domains.
We also leverage NFTs to develop Inr2Array, a novel method for computing permutation invariant representations from the weights of implicit neural representations (INRs).
arXiv Detail & Related papers (2023-05-22T23:38:27Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and shrinks unimportant weights on the fly by a small amount proportional to their magnitude.
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Affine Transformation Edited and Refined Deep Neural Network for
Quantitative Susceptibility Mapping [10.772763441035945]
We propose an end-to-end AFfine Transformation Edited and Refined (AFTER) deep neural network for Quantitative Susceptibility Mapping (QSM).
It is robust to arbitrary acquisition orientations and to spatial resolutions as fine as 0.6 mm isotropic.
arXiv Detail & Related papers (2022-11-25T07:54:26Z) - Dynamic MRI using Learned Transform-based Deep Tensor Low-Rank Network
(DTLR-Net) [9.658908705889777]
We introduce a model-based deep learning network that learns the tensor low-rank prior of cardiac dynamic MR images.
The proposed framework is able to provide improved recovery results compared with the state-of-the-art algorithms.
arXiv Detail & Related papers (2022-06-02T02:55:41Z) - Cross-Modality High-Frequency Transformer for MR Image Super-Resolution [100.50972513285598]
We make an early effort to build a Transformer-based MR image super-resolution framework.
We consider two domain priors: a high-frequency structure prior and an inter-modality context prior.
We establish a novel Transformer architecture, called Cross-modality high-frequency Transformer (Cohf-T), to introduce these priors into super-resolving low-resolution images.
arXiv Detail & Related papers (2022-03-29T07:56:55Z) - Revisiting Transformation Invariant Geometric Deep Learning: Are Initial
Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that the resulting network, TinvNN, strictly guarantees transformation invariance and is general and flexible enough to be combined with existing neural networks.
arXiv Detail & Related papers (2021-12-23T03:52:33Z) - Multi-Tensor Network Representation for High-Order Tensor Completion [25.759851542474447]
This work studies the problem of completing high-dimensional data (referred to as tensors) from partially observed samples.
We consider that a tensor is a superposition of multiple low-rank components.
In this paper, we propose a fundamental tensor decomposition framework: Multi-Tensor Network decomposition (MTNR).
arXiv Detail & Related papers (2021-09-09T03:50:19Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - Deep Low-rank Prior in Dynamic MR Imaging [30.70648993986445]
We introduce two novel schemes for incorporating the learnable low-rank prior into deep network architectures.
In the unrolling manner, we put forward a model-based unrolling sparse and low-rank network for dynamic MR imaging, dubbed SLR-Net.
In the plug-and-play manner, we present a plug-and-play LR network module that can be easily embedded into any other dynamic MR neural networks.
arXiv Detail & Related papers (2020-06-22T09:26:10Z)