Fast MRI Reconstruction: How Powerful Transformers Are?
- URL: http://arxiv.org/abs/2201.09400v1
- Date: Sun, 23 Jan 2022 23:41:48 GMT
- Title: Fast MRI Reconstruction: How Powerful Transformers Are?
- Authors: Jiahao Huang, Yinzhe Wu, Huanjun Wu, Guang Yang
- Abstract summary: Methods based on k-space undersampling and deep learning based reconstruction have been popularised to accelerate the scanning process.
In particular, a generative adversarial network (GAN) based Swin transformer (ST-GAN) was introduced for fast MRI reconstruction.
We show that transformers work well for MRI reconstruction under different undersampling conditions.
- Score: 1.523157765626545
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Magnetic resonance imaging (MRI) is a widely used non-radiative and
non-invasive method for clinical interrogation of organ structures and
metabolism, with an inherently long scanning time. Methods based on k-space
undersampling and deep learning based reconstruction have been popularised to
accelerate the scanning process. This work focuses on investigating how
powerful transformers are for fast MRI by exploiting and comparing different
novel network architectures. In particular, a generative adversarial network
(GAN) based Swin transformer (ST-GAN) was introduced for fast MRI
reconstruction. To further preserve edge and texture information, an edge
enhanced GAN based Swin transformer (EES-GAN) and a texture enhanced GAN based
Swin transformer (TES-GAN) were also developed, where a dual-discriminator GAN
structure was applied. We compared our proposed GAN based transformers,
a standalone Swin transformer and other convolutional neural network based
GAN models in terms of the evaluation metrics PSNR, SSIM and FID. We showed that
transformers work well for MRI reconstruction under different undersampling
conditions. The utilisation of the GAN's adversarial structure improves the
quality of the images reconstructed at undersampling levels of 30% or higher.
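For orientation, the sketch below (not the authors' code; all names and parameters are illustrative) shows the generic pipeline the abstract describes: retrospective Cartesian k-space undersampling, a zero-filled baseline reconstruction standing in for the trained ST-GAN/EES-GAN/TES-GAN generator, and scoring with PSNR and SSIM. FID additionally requires a pretrained feature network and is omitted.

```python
# Minimal sketch (not the authors' code): retrospective Cartesian k-space
# undersampling, a zero-filled baseline reconstruction, and PSNR/SSIM scoring.
# A trained GAN/transformer generator would refine the zero-filled image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)

def random_line_mask(shape, keep_fraction=0.3, centre_fraction=0.08):
    """1D random phase-encode mask that always keeps the k-space centre."""
    h, w = shape
    mask = np.zeros(w, dtype=bool)
    centre = int(centre_fraction * w)
    mask[w // 2 - centre // 2 : w // 2 + centre // 2] = True
    n_extra = max(int(keep_fraction * w) - mask.sum(), 0)
    mask[rng.choice(np.flatnonzero(~mask), size=n_extra, replace=False)] = True
    return np.broadcast_to(mask, shape)

def undersample(image, mask):
    """Apply the mask in k-space and return the zero-filled reconstruction."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    zero_filled = np.fft.ifft2(np.fft.ifftshift(kspace * mask))
    return np.abs(zero_filled)

image = rng.random((128, 128))          # stand-in for a ground-truth MR slice
mask = random_line_mask(image.shape, keep_fraction=0.3)
recon = undersample(image, mask)        # a GAN/transformer would refine this

print("PSNR:", peak_signal_noise_ratio(image, recon, data_range=1.0))
print("SSIM:", structural_similarity(image, recon, data_range=1.0))
```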
Related papers
- Polyhedra Encoding Transformers: Enhancing Diffusion MRI Analysis Beyond Voxel and Volumetric Embedding [9.606654786275902]
In this paper, we propose a novel method called Polyhedra Transformer (PE-Transformer) for dMRI, designed specifically to handle spherical signals.
Our approach involves projecting an icosahedral unit sphere to resample signals from predetermined directions. These resampled signals are then transformed into embeddings, which are processed by a transformer encoder that incorporates orientational information reflective of the icosahedral structure.
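As a rough illustration of the described pipeline (resample spherical dMRI signals onto fixed directions, then encode them with a transformer), the hypothetical sketch below uses nearest-neighbour resampling onto random unit directions instead of a true icosahedral projection; every module name, shape and hyper-parameter is an assumption.

```python
# Illustrative sketch only (simplified stand-in for PE-Transformer): per-voxel
# dMRI measurements along arbitrary gradient directions are resampled onto a
# fixed set of unit directions (random here, not an icosahedral tessellation)
# and the resulting token sequence is fed to a standard transformer encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

def resample_spherical(signal, acq_dirs, target_dirs):
    """Nearest-neighbour resampling on the sphere (antipodally symmetric).

    signal:      (batch, n_acq)   measured diffusion signal per direction
    acq_dirs:    (n_acq, 3)       acquisition gradient directions (unit norm)
    target_dirs: (n_target, 3)    fixed resampling directions (unit norm)
    """
    sim = torch.abs(target_dirs @ acq_dirs.T)        # |cos| handles q/-q symmetry
    nearest = sim.argmax(dim=1)                      # (n_target,)
    return signal[:, nearest]                        # (batch, n_target)

class TinySphericalTransformer(nn.Module):
    def __init__(self, d_model=32):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)            # e.g. a per-voxel scalar target

    def forward(self, resampled):                    # (batch, n_target)
        tokens = self.embed(resampled.unsqueeze(-1)) # one token per direction
        return self.head(self.encoder(tokens).mean(dim=1))

acq_dirs = F.normalize(torch.randn(90, 3), dim=1)    # 90 acquired directions
target_dirs = F.normalize(torch.randn(60, 3), dim=1) # stand-in for icosahedral grid
signal = torch.rand(8, 90)                           # 8 voxels
out = TinySphericalTransformer()(resample_spherical(signal, acq_dirs, target_dirs))
print(out.shape)                                     # torch.Size([8, 1])
```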
arXiv Detail & Related papers (2025-01-23T03:32:52Z) - MambaRecon: MRI Reconstruction with Structured State Space Models [30.506544165999564]
The advent of deep learning has catalyzed the development of cutting-edge methods for the expedited reconstruction of MRI scans.
We propose an innovative MRI reconstruction framework that employs structured state space models at its core, aimed at amplifying both long-range contextual sensitivity and reconstruction efficacy.
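For context, the toy sketch below shows the discrete linear state-space recurrence that structured state space models build on; it is a generic illustration, not MambaRecon's actual structured or selective parameterisation.

```python
# Generic discrete state-space recurrence (the primitive behind structured
# state space models); illustration only, not MambaRecon's parameterisation.
import numpy as np

def ssm_scan(u, A, B, C):
    """y_k = C x_k with x_k = A x_{k-1} + B u_k, scanned over a 1D sequence."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:                 # linear in sequence length
        x = A @ x + B * u_k
        ys.append(C @ x)
    return np.array(ys)

state_dim = 4
A = np.diag(np.exp(-np.linspace(0.1, 1.0, state_dim)))  # stable diagonal A
B = np.ones(state_dim)
C = np.random.default_rng(0).normal(size=state_dim)
y = ssm_scan(np.sin(np.linspace(0, 10, 256)), A, B, C)
print(y.shape)                    # (256,)
```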
arXiv Detail & Related papers (2024-09-19T01:50:10Z) - Transformer and GAN Based Super-Resolution Reconstruction Network for Medical Images [0.0]
Super-resolution reconstruction has become increasingly popular in medical imaging, including MRI.
In this paper, we offer a deep learning-based strategy for reconstructing medical images from low resolutions utilizing Transformer and Generative Adversarial Networks (T-GAN)
The integrated system can extract more precise texture information and focus more on important locations through global image matching.
arXiv Detail & Related papers (2022-12-26T09:52:12Z) - Multi-head Cascaded Swin Transformers with Attention to k-space Sampling Pattern for Accelerated MRI Reconstruction [16.44971774468092]
We propose a physics-based stand-alone (convolution free) transformer model, the Multi-head Cascaded Swin Transformers (McSTRA), for accelerated MRI reconstruction.
Our model significantly outperforms state-of-the-art MRI reconstruction methods both visually and quantitatively.
arXiv Detail & Related papers (2022-07-18T07:21:56Z) - Cross-Modality High-Frequency Transformer for MR Image Super-Resolution [100.50972513285598]
We make an early effort to build a Transformer-based MR image super-resolution framework.
We consider two-fold domain priors including the high-frequency structure prior and the inter-modality context prior.
We establish a novel Transformer architecture, called Cross-modality high-frequency Transformer (Cohf-T), to introduce such priors into super-resolving the low-resolution images.
arXiv Detail & Related papers (2022-03-29T07:56:55Z) - A Long Short-term Memory Based Recurrent Neural Network for Interventional MRI Reconstruction [50.1787181309337]
We propose a convolutional long short-term memory (Conv-LSTM) based recurrent neural network (RNN), or ConvLR, to reconstruct interventional images with golden-angle radial sampling.
The proposed algorithm has the potential to achieve real-time i-MRI for DBS and can be used for general purpose MR-guided intervention.
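The golden-angle spoke ordering mentioned above can be sketched as follows (standard scheme, not the authors' implementation): each spoke is rotated by roughly 111.246 degrees, so any window of consecutive spokes covers k-space nearly uniformly, which is what makes sliding-window, real-time i-MRI reconstruction feasible.

```python
# Sketch of golden-angle radial spoke ordering (standard scheme, not the
# authors' implementation): each new spoke is rotated by ~111.246 degrees.
import numpy as np

GOLDEN_ANGLE = np.pi * (np.sqrt(5.0) - 1.0) / 2.0     # ~1.9416 rad ~ 111.246 deg

def radial_spokes(n_spokes, n_samples=256):
    """Return k-space sample coordinates for golden-angle radial spokes."""
    angles = (np.arange(n_spokes) * GOLDEN_ANGLE) % np.pi
    radii = np.linspace(-0.5, 0.5, n_samples)         # normalised k-space radius
    kx = radii[None, :] * np.cos(angles)[:, None]
    ky = radii[None, :] * np.sin(angles)[:, None]
    return kx, ky                                      # each (n_spokes, n_samples)

kx, ky = radial_spokes(13)
print(np.degrees(GOLDEN_ANGLE))                        # ~111.246
```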
arXiv Detail & Related papers (2022-03-28T14:03:45Z) - Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction is a promising way to yield SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse features across the different contrasts for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z) - ReconFormer: Accelerated MRI Reconstruction Using Recurrent Transformer [60.27951773998535]
We propose a recurrent transformer model, namely ReconFormer, for MRI reconstruction.
It can iteratively reconstruct high fidelity magnetic resonance images from highly under-sampled k-space data.
We show that it achieves significant improvements over the state-of-the-art methods with better parameter efficiency.
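Recurrent and unrolled schemes of this kind typically interleave a network pass with a data-consistency step on the measured k-space samples; the sketch below illustrates that generic loop, with a placeholder callable in place of the actual recurrent transformer unit.

```python
# Hedged sketch of the data-consistency loop used by unrolled/recurrent MRI
# reconstruction schemes; `refine` is a placeholder for the network pass.
import numpy as np

def data_consistency(image, measured_kspace, mask):
    """Re-impose the acquired k-space samples on the current estimate."""
    kspace = np.fft.fft2(image)
    kspace = np.where(mask, measured_kspace, kspace)   # trust measured lines
    return np.fft.ifft2(kspace)

def iterative_recon(measured_kspace, mask, refine, n_iters=5):
    image = np.fft.ifft2(measured_kspace * mask)       # zero-filled start
    for _ in range(n_iters):
        image = refine(image)                          # network pass (placeholder)
        image = data_consistency(image, measured_kspace, mask)
    return np.abs(image)

# toy usage: identity "network", random data
rng = np.random.default_rng(0)
gt = rng.random((64, 64))
mask = rng.random((64, 64)) < 0.3                      # keep ~30% of k-space
recon = iterative_recon(np.fft.fft2(gt) * mask, mask, refine=lambda x: x)
print(recon.shape)
```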
arXiv Detail & Related papers (2022-01-23T21:58:19Z) - Swin Transformer for Fast MRI [12.28925347961542]
SwinMR is a novel Swin transformer based method for fast MRI reconstruction.
The network consists of an input module (IM), a feature extraction module (FE) and an output module (OM).
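A minimal structural sketch of that IM -> FE -> OM layout is given below; generic transformer encoder blocks and a global residual connection stand in for the actual shifted-window Swin blocks, so treat every hyper-parameter and the residual itself as assumptions rather than SwinMR's design.

```python
# Structural sketch of the IM -> FE -> OM layout described for SwinMR.
# Plain convolutions and a generic transformer encoder stand in for the
# actual (shifted-window) Swin blocks; hyper-parameters are placeholders.
import torch
import torch.nn as nn

class SwinMRLikeNet(nn.Module):
    def __init__(self, channels=32, n_blocks=2):
        super().__init__()
        self.im = nn.Conv2d(1, channels, kernel_size=3, padding=1)   # input module
        layer = nn.TransformerEncoderLayer(channels, nhead=4, batch_first=True)
        self.fe = nn.TransformerEncoder(layer, num_layers=n_blocks)  # feature extraction
        self.om = nn.Conv2d(channels, 1, kernel_size=3, padding=1)   # output module

    def forward(self, x):                       # x: (B, 1, H, W) undersampled image
        feat = self.im(x)                       # (B, C, H, W)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)        # (B, H*W, C), one token per pixel
        tokens = self.fe(tokens)
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        return x + self.om(feat)                # assumed global residual connection

out = SwinMRLikeNet()(torch.randn(2, 1, 32, 32))
print(out.shape)                                # torch.Size([2, 1, 32, 32])
```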
arXiv Detail & Related papers (2022-01-10T09:32:32Z) - Reference-based Magnetic Resonance Image Reconstruction Using Texture Transformer [86.6394254676369]
We propose a novel Texture Transformer Module (TTM) for accelerated MRI reconstruction.
We formulate the under-sampled data and reference data as queries and keys in a transformer.
The proposed TTM can be stacked on prior MRI reconstruction approaches to further improve their performance.
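The query/key formulation described above is, at its core, cross-attention from under-sampled-image features to reference-image features; the short sketch below shows that generic mechanism rather than the TTM implementation itself.

```python
# Minimal cross-attention sketch of the query/key formulation described for
# the Texture Transformer Module: under-sampled-image features act as queries,
# fully-sampled reference features act as keys/values. Generic mechanism only.
import torch

def reference_cross_attention(undersampled_feat, reference_feat):
    """undersampled_feat: (B, Nq, C) queries; reference_feat: (B, Nk, C) keys/values."""
    q, k, v = undersampled_feat, reference_feat, reference_feat
    attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v                              # texture transferred from the reference

under = torch.randn(2, 64, 32)                   # e.g. 64 patch tokens, 32 channels
ref = torch.randn(2, 100, 32)                    # reference image tokens
print(reference_cross_attention(under, ref).shape)   # torch.Size([2, 64, 32])
```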
arXiv Detail & Related papers (2021-11-18T03:06:25Z) - Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
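Purely as an illustration of the general idea (not the paper's specific Adaptive Gradient Balancing algorithm), the sketch below rescales the adversarial term of a WGAN-style generator loss so that its gradient norm on the reconstruction does not dominate the pixel-wise term.

```python
# Heavily hedged sketch: one generic way to balance an adversarial loss
# against a pixel-wise reconstruction loss by comparing their gradient norms
# on the generator output. Not the paper's specific AGB algorithm.
import torch

def balanced_loss(recon, target, critic_score, max_ratio=1.0):
    """`critic_score` must be computed from `recon` so its gradient exists."""
    rec_loss = torch.nn.functional.l1_loss(recon, target)
    adv_loss = -critic_score.mean()              # WGAN generator objective
    g_rec = torch.autograd.grad(rec_loss, recon, retain_graph=True)[0]
    g_adv = torch.autograd.grad(adv_loss, recon, retain_graph=True)[0]
    # shrink the adversarial term if its gradients dominate the reconstruction term
    scale = torch.clamp(max_ratio * g_rec.norm() / (g_adv.norm() + 1e-8), max=1.0)
    return rec_loss + scale.detach() * adv_loss

# toy usage with a trivial critic
recon = torch.rand(2, 1, 16, 16, requires_grad=True)
target = torch.rand(2, 1, 16, 16)
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(256, 1))
loss = balanced_loss(recon, target, critic(recon))
loss.backward()
print(float(loss))
```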
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.