T$^2$LR-Net: An Unrolling Reconstruction Network Learning Transformed
Tensor Low-Rank prior for Dynamic MR Imaging
- URL: http://arxiv.org/abs/2209.03832v1
- Date: Thu, 8 Sep 2022 14:11:02 GMT
- Title: T$^2$LR-Net: An Unrolling Reconstruction Network Learning Transformed
Tensor Low-Rank prior for Dynamic MR Imaging
- Authors: Yinghao Zhang, Yue Hu
- Abstract summary: We introduce a flexible model based on TTNN with the ability to exploit the tensor low-rank prior of a transformed domain.
We also introduce a model-based deep unrolling reconstruction network to learn the transformed tensor low-rank prior.
The proposed framework can provide improved recovery results compared with the state-of-the-art optimization-based and unrolling network-based methods.
- Score: 6.101233798770526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While methods exploiting the tensor low-rank prior are booming in
high-dimensional data processing and have achieved satisfying performance,
their applications in dynamic magnetic resonance (MR) image reconstruction
remain limited. In this paper, we concentrate on the tensor singular value
decomposition (t-SVD), which is based on the Fast Fourier Transform (FFT) and
therefore provides only a fixed and limited tensor low-rank prior in the FFT
domain, relying heavily on how closely the data match that domain. By
generalizing the FFT to an arbitrary unitary transformation in the transformed
t-SVD and proposing the transformed tensor nuclear norm (TTNN), we introduce a
flexible model based on TTNN that can exploit the tensor low-rank prior of a
transformed domain within a larger transformation space. We then design an
iterative optimization algorithm based on the alternating direction method of
multipliers (ADMM) and unroll it into a model-based deep reconstruction
network, T$^2$LR-Net, which learns the transformed tensor low-rank prior. A
convolutional neural network (CNN) is incorporated within T$^2$LR-Net to learn
the best-matched transform from the dynamic MR image dataset. The unrolled
reconstruction network also offers a new perspective on low-rank prior
utilization by exploiting the low-rank prior in the CNN-extracted feature
domain. Experimental results on two cardiac cine MR datasets demonstrate that
the proposed framework provides improved recovery results compared with
state-of-the-art optimization-based and unrolling network-based methods.
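The TTNN regularizer described above generalizes the FFT-based tensor nuclear norm: transform the tensor along its third mode with a unitary matrix, then sum the nuclear norms of the frontal slices. A minimal NumPy sketch (the function name and slice conventions are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def ttnn(X, U):
    """Transformed tensor nuclear norm of a 3-way tensor X (n1 x n2 x n3).

    Instead of the FFT along the third mode (as in the standard t-SVD),
    apply an arbitrary unitary transform U (n3 x n3), then sum the nuclear
    norms (sums of singular values) of the transformed frontal slices.
    """
    n1, n2, n3 = X.shape
    # Transform along mode 3: Xt[:, :, k] = sum_j U[k, j] * X[:, :, j]
    Xt = np.einsum('kj,abj->abk', U, X)
    return sum(np.linalg.norm(Xt[:, :, k], 'nuc') for k in range(n3))
```

With `U` set to the normalized DFT matrix, this reduces (up to the scaling convention) to the standard FFT-based tensor nuclear norm used in the t-SVD.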
Related papers
- JotlasNet: Joint Tensor Low-Rank and Attention-based Sparse Unrolling Network for Accelerating Dynamic MRI [6.081607038128913]
We propose a novel deep unrolling network, JotlasNet, for dynamic MRI reconstruction.
Joint low-rank and sparse unrolling networks have shown superior performance in dynamic MRI reconstruction.
arXiv Detail & Related papers (2025-02-17T12:43:04Z)
- OTLRM: Orthogonal Learning-based Low-Rank Metric for Multi-Dimensional Inverse Problems [14.893020063373022]
We introduce a novel data-driven generative low-rank t-SVD model based on the learnable orthogonal transform.
We also propose a low-rank solver as a generalization of SVT, which utilizes an efficient representation of generative networks to obtain low-rank structures.
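The low-rank solver mentioned above generalizes singular value thresholding (SVT), the proximal operator of the nuclear norm, to a transformed domain. A minimal sketch assuming a unitary transform along the third mode (names and conventions are illustrative, not the paper's code):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau * (nuclear norm)."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def transformed_svt(X, T, tau):
    """Slice-wise SVT in a transformed domain (a generalized t-SVT).

    T is assumed unitary; transform X along mode 3, threshold each
    frontal slice, then transform back with T's conjugate transpose.
    """
    Xt = np.einsum('kj,abj->abk', T, X)
    for k in range(Xt.shape[2]):
        Xt[:, :, k] = svt(Xt[:, :, k], tau)
    # Inverse of a unitary transform is its conjugate transpose.
    return np.einsum('kj,abk->abj', T.conj(), Xt)
```

With `T` equal to the identity, this is plain slice-wise SVT; with the DFT matrix it recovers the FFT-domain t-SVT.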
arXiv Detail & Related papers (2024-12-15T12:28:57Z)
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
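A CWT maps a 1D signal into a 2D scale-time array, which is the tensor form the "TC" stream operates on. A self-contained Morlet-wavelet sketch (the wavelet choice and normalization are assumptions; TCCT-Net's actual transform settings are not specified here):

```python
import numpy as np

def cwt_morlet(signal, scales, w0=5.0):
    """Minimal continuous wavelet transform with a Morlet wavelet.

    Returns a 2D complex array (len(scales) x len(signal)): a
    scale-time representation of a 1D signal.
    """
    n = len(signal)
    out = np.empty((len(scales), n), dtype=complex)
    t = np.arange(-(n // 2), n - n // 2)
    for i, s in enumerate(scales):
        x = t / s
        # Morlet mother wavelet, scaled and L2-normalized
        psi = np.exp(1j * w0 * x) * np.exp(-0.5 * x**2) / (np.pi**0.25 * np.sqrt(s))
        # Correlate the signal with the conjugate wavelet at this scale
        out[i] = np.convolve(signal, np.conj(psi[::-1]), mode='same')
    return out
```

Each row of the output is the response at one scale; stacking rows yields the 2D time-frequency tensor a downstream CNN can consume.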
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights by a small amount proportional to the magnitude scale on-the-fly.
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Affine Transformation Edited and Refined Deep Neural Network for Quantitative Susceptibility Mapping [10.772763441035945]
We propose an end-to-end AFfine Transformation Edited and Refined (AFTER) deep neural network for Quantitative Susceptibility Mapping (QSM).
It is robust against arbitrary acquisition orientation and spatial resolution up to 0.6 mm isotropic at the finest.
arXiv Detail & Related papers (2022-11-25T07:54:26Z)
- Dynamic MRI using Learned Transform-based Deep Tensor Low-Rank Network (DTLR-Net) [9.658908705889777]
We introduce a model-based deep learning network by learning the tensor low-rank prior of cardiac dynamic MR images.
The proposed framework is able to provide improved recovery results compared with the state-of-the-art algorithms.
arXiv Detail & Related papers (2022-06-02T02:55:41Z)
- Cross-Modality High-Frequency Transformer for MR Image Super-Resolution [100.50972513285598]
We make an early effort to build a Transformer-based MR image super-resolution framework.
We consider two-fold domain priors including the high-frequency structure prior and the inter-modality context prior.
We establish a novel Transformer architecture, called Cross-modality high-frequency Transformer (Cohf-T), to introduce such priors into super-resolving the low-resolution images.
arXiv Detail & Related papers (2022-03-29T07:56:55Z)
- A Fully Tensorized Recurrent Neural Network [48.50376453324581]
We introduce a "fully tensorized" RNN architecture which jointly encodes the separate weight matrices within each recurrent cell.
This approach reduces model size by several orders of magnitude, while still maintaining similar or better performance compared to standard RNNs.
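The size reduction above comes from factorizing recurrent weight matrices instead of storing them densely. As a toy illustration of why factorized weights cut parameter counts (using a Kronecker factorization, which is an assumption here; the paper itself uses a tensor-train-style format), a matrix-vector product with an implicitly represented weight:

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute (A kron B) @ x without forming the Kronecker product.

    A: (m, n), B: (p, q), x: length n*q (row-major flattening of an
    (n, q) matrix). Uses the identity (A kron B) vec(X) = vec(A X B^T).
    """
    n, q = A.shape[1], B.shape[1]
    X = x.reshape(n, q)            # unflatten the input vector
    return (A @ X @ B.T).reshape(-1)
```

Storing `A` (3x4) and `B` (2x5) takes 22 numbers versus 120 for the dense 6x20 matrix they implicitly represent; the gap grows multiplicatively with layer width.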
arXiv Detail & Related papers (2020-10-08T18:24:12Z)
- Deep Low-rank Prior in Dynamic MR Imaging [30.70648993986445]
We propose two novel schemes that introduce the learnable low-rank prior into deep network architectures.
In the unrolling manner, we put forward a model-based unrolling sparse and low-rank network for dynamic MR imaging, dubbed SLR-Net.
In the plug-and-play manner, we present a plug-and-play LR network module that can be easily embedded into any other dynamic MR neural networks.
arXiv Detail & Related papers (2020-06-22T09:26:10Z)
- Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides a new insight on conventional SISR algorithm, and proposes a substantially different approach relying on the iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.