Swin Transformer for Fast MRI
- URL: http://arxiv.org/abs/2201.03230v1
- Date: Mon, 10 Jan 2022 09:32:32 GMT
- Title: Swin Transformer for Fast MRI
- Authors: Jiahao Huang, Yingying Fang, Yinzhe Wu, Huanjun Wu, Zhifan Gao, Yang
Li, Javier Del Ser, Jun Xia, Guang Yang
- Abstract summary: SwinMR is a novel Swin transformer based method for fast MRI reconstruction.
The network consists of an input module (IM), a feature extraction module (FEM) and an output module (OM).
- Score: 12.28925347961542
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Magnetic resonance imaging (MRI) is an important non-invasive clinical tool
that can produce high-resolution and reproducible images. However, high-quality
MR images require long scanning times, which cause patient exhaustion and
discomfort and induce more artefacts from voluntary patient movements and
involuntary physiological motion. To accelerate the scanning process, methods
based on k-space undersampling and deep learning based reconstruction have been
popularised. This work introduced SwinMR, a novel Swin
transformer based method for fast MRI reconstruction. The whole network
consisted of an input module (IM), a feature extraction module (FEM) and an
output module (OM). The IM and OM were 2D convolutional layers, and the FEM was
composed of a cascade of residual Swin transformer blocks (RSTBs) and 2D
convolutional layers. Each RSTB consisted of a series of Swin transformer layers
(STLs). Unlike the multi-head self-attention (MSA) of the original transformer,
which attends over the whole image space, the (shifted) window multi-head
self-attention (W-MSA/SW-MSA) of the STL was computed within local shifted
windows. A novel multi-channel loss was proposed by using the sensitivity maps,
which was proved to preserve more textures and details. We performed a series
of comparative studies and ablation
studies on the Calgary-Campinas public brain MR dataset and conducted a
downstream segmentation experiment on the Multi-modal Brain Tumour Segmentation
Challenge 2017 dataset. The results demonstrate that SwinMR achieved
high-quality reconstruction compared with other benchmark methods, and that it
remains robust across different undersampling masks, under noise interference,
and on different datasets. The code is publicly available at
https://github.com/ayanglab/SwinMR.
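The (shifted) window partitioning behind W-MSA/SW-MSA can be illustrated with a short sketch. The following is a minimal NumPy example of how a feature map is split into local attention windows and cyclically shifted between layers; it is not the authors' implementation, and the function names and the 8x8 feature map are illustrative.

```python
import numpy as np

def window_partition(x, win):
    """Split an (H, W, C) feature map into non-overlapping (win, win, C) windows.
    In W-MSA, self-attention is computed inside each window independently."""
    H, W, C = x.shape
    x = x.reshape(H // win, win, W // win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win, win, C)

def window_reverse(windows, win, H, W):
    """Inverse of window_partition: stitch windows back into an (H, W, C) map."""
    C = windows.shape[-1]
    x = windows.reshape(H // win, W // win, win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

def shifted_windows(x, win):
    """SW-MSA: cyclically shift the map by win // 2 before partitioning, so the
    next layer's windows straddle the previous layer's window boundaries."""
    shifted = np.roll(x, shift=(-(win // 2), -(win // 2)), axis=(0, 1))
    return window_partition(shifted, win)

feat = np.arange(8 * 8, dtype=np.float32).reshape(8, 8, 1)
wins = window_partition(feat, 4)                          # 4 windows of (4, 4, 1)
assert wins.shape == (4, 4, 4, 1)
assert np.allclose(window_reverse(wins, 4, 8, 8), feat)   # lossless round trip
assert shifted_windows(feat, 4).shape == (4, 4, 4, 1)
```

Because attention cost is quadratic in the number of tokens attended over, restricting it to fixed-size windows (and alternating plain and shifted windows to exchange information across window borders) keeps the cost linear in image size, which is what makes this design practical for full-resolution MRI reconstruction.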
Related papers
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- Learning Dynamic MRI Reconstruction with Convolutional Network Assisted Reconstruction Swin Transformer [0.7802769338493889]
We propose a novel architecture named Reconstruction Swin Transformer (RST) for 4D MRI.
RST inherits the backbone design of the Video Swin Transformer with a novel reconstruction head introduced to restore pixel-wise intensity.
Experimental results in the cardiac 4D MR dataset further substantiate the superiority of RST.
arXiv Detail & Related papers (2023-09-19T00:42:45Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts because they shift the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- Multi-head Cascaded Swin Transformers with Attention to k-space Sampling Pattern for Accelerated MRI Reconstruction [16.44971774468092]
We propose a physics-based stand-alone (convolution free) transformer model titled, the Multi-head Cascaded Swin Transformers (McSTRA) for accelerated MRI reconstruction.
Our model significantly outperforms state-of-the-art MRI reconstruction methods both visually and quantitatively.
arXiv Detail & Related papers (2022-07-18T07:21:56Z)
- Fast MRI Reconstruction: How Powerful Transformers Are? [1.523157765626545]
Methods based on k-space undersampling and deep learning based reconstruction have been popularised to accelerate the scanning process.
In particular, a generative adversarial network (GAN) based Swin transformer (ST-GAN) was introduced for fast MRI reconstruction.
We show that transformers work well for MRI reconstruction under different undersampling conditions.
arXiv Detail & Related papers (2022-01-23T23:41:48Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to recover the frequency signal directly in the $k$-space domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where either, two, or three of four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- ResViT: Residual vision transformers for multi-modal medical image synthesis [0.0]
We propose a novel generative adversarial approach for medical image synthesis, ResViT, to combine local precision of convolution operators with contextual sensitivity of vision transformers.
Our results indicate the superiority of ResViT against competing methods in terms of qualitative observations and quantitative metrics.
arXiv Detail & Related papers (2021-06-30T12:57:37Z)
- Generative Adversarial Networks (GAN) Powered Fast Magnetic Resonance Imaging -- Mini Review, Comparison and Perspectives [5.3148259096171175]
One drawback of MRI is its slow scanning and reconstruction compared to other imaging modalities.
Deep Neural Networks (DNNs) have been used in sparse MRI reconstruction models to recreate relatively high-quality images.
Generative Adversarial Networks (GAN) based methods are proposed to solve fast MRI with enhanced image perceptual quality.
arXiv Detail & Related papers (2021-05-04T23:59:00Z)
- Multifold Acceleration of Diffusion MRI via Slice-Interleaved Diffusion Encoding (SIDE) [50.65891535040752]
We propose a diffusion encoding scheme, called Slice-Interleaved Diffusion Encoding (SIDE), that interleaves each diffusion-weighted (DW) image volume with slices encoded with different diffusion gradients.
We also present a method based on deep learning for effective reconstruction of DW images from the highly slice-undersampled data.
arXiv Detail & Related papers (2020-02-25T14:48:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.