TransMRSR: Transformer-based Self-Distilled Generative Prior for Brain
MRI Super-Resolution
- URL: http://arxiv.org/abs/2306.06669v1
- Date: Sun, 11 Jun 2023 12:41:23 GMT
- Title: TransMRSR: Transformer-based Self-Distilled Generative Prior for Brain
MRI Super-Resolution
- Authors: Shan Huang, Xiaohong Liu, Tao Tan, Menghan Hu, Xiaoer Wei, Tingli
Chen, Bin Sheng
- Abstract summary: We propose a novel two-stage network for brain MRI SR named TransMRSR.
TransMRSR consists of three modules: the shallow local feature extraction, the deep non-local feature capture, and the HR image reconstruction.
Our method achieves superior performance over other SISR methods on both public and private datasets.
- Score: 18.201980634509553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Magnetic resonance images (MRI) are often acquired with low through-plane
resolution to reduce scan time and cost. The resulting poor resolution in one
orientation is insufficient for the early diagnosis of brain disease and for
morphometric studies, which require high resolution. Common single-image
super-resolution (SISR) solutions face two main challenges: (1) combining local
detail with global anatomical structure; and (2) large-scale restoration when
reconstructing thick-slice MRI into high-resolution (HR) isotropic data. To
address these problems, we propose a novel two-stage network for brain MRI SR,
TransMRSR, which uses convolutional blocks to extract local information and
transformer blocks to capture long-range
dependencies. TransMRSR consists of three modules: the shallow local feature
extraction, the deep non-local feature capture, and the HR image
reconstruction. In the first stage, we perform a generative task that
encapsulates diverse priors into a generative adversarial network (GAN), which
serves as the decoder sub-module of the deep non-local feature capture module.
The pre-trained GAN is then used in the second stage for the SR task. We
further eliminate the potential latent
space shift caused by the two-stage training strategy through the
self-distilled truncation trick. Extensive experiments show that our method
outperforms other SISR methods on both public and private datasets. Code is
released at https://github.com/goddesshs/TransMRSR.git .
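The two-stage, three-module design described above can be sketched structurally. The module internals below are toy placeholders (identity maps and nearest-neighbor upsampling), not the authors' implementation; the truncation function mirrors the StyleGAN-style truncation trick the abstract's "self-distilled truncation trick" builds on, with hypothetical names throughout:

```python
def shallow_local_features(lr):
    """Placeholder for the convolutional shallow local feature extractor."""
    return lr  # identity stand-in

def deep_nonlocal_features(feats):
    """Placeholder for the transformer encoder + pre-trained GAN decoder."""
    return feats  # identity stand-in

def reconstruct_hr(feats, scale=2):
    """HR reconstruction stand-in: nearest-neighbor upsampling by `scale`."""
    up_cols = [[v for v in row for _ in range(scale)] for row in feats]
    return [row for row in up_cols for _ in range(scale)]

def truncate_latent(w, w_avg, psi=0.7):
    """Truncation trick: pull a latent code toward the average latent to
    limit latent-space shift. psi=1.0 keeps w; psi=0.0 collapses to w_avg."""
    return [a + psi * (x - a) for x, a in zip(w, w_avg)]

def super_resolve(lr, scale=2):
    """Compose the three modules named in the abstract."""
    return reconstruct_hr(deep_nonlocal_features(shallow_local_features(lr)), scale)

lr = [[1, 2], [3, 4]]
hr = super_resolve(lr, scale=2)  # 2x2 -> 4x4
```

The point of the composition is only the data flow: shallow convolutional features feed a non-local stage whose decoder is the pre-trained GAN, and a final module maps features back to HR image space.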
Related papers
- Frequency-Assisted Mamba for Remote Sensing Image Super-Resolution [49.902047563260496]
We present the first attempt to integrate the Vision State Space Model (Mamba) into remote sensing image (RSI) super-resolution.
To achieve better SR reconstruction, building upon Mamba, we devise a Frequency-assisted Mamba framework, dubbed FMSR.
Our FMSR features a multi-level fusion architecture equipped with a Frequency Selection Module (FSM), a Vision State Space Module (VSSM), and a Hybrid Gate Module (HGM).
arXiv Detail & Related papers (2024-05-08T11:09:24Z)
- Resolution- and Stimulus-agnostic Super-Resolution of Ultra-High-Field Functional MRI: Application to Visual Studies [1.8327547104097965]
High-resolution fMRI provides a window into the brain's mesoscale organization.
Yet higher spatial resolution increases the scan time needed to compensate for the low signal- and contrast-to-noise ratios.
This work introduces a deep learning-based 3D super-resolution (SR) method for fMRI.
arXiv Detail & Related papers (2023-11-25T03:33:36Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- InverseSR: 3D Brain MRI Super-Resolution Using a Latent Diffusion Model [1.4126798060929953]
High-resolution (HR) MRI scans obtained from research-grade medical centers provide precise information about imaged tissues.
Routine clinical MRI scans, however, are typically low-resolution (LR).
End-to-end deep learning methods for MRI super-resolution (SR) have been proposed, but they require re-training each time there is a shift in the input distribution.
We propose a novel approach that leverages a state-of-the-art 3D brain generative model, the latent diffusion model (LDM) trained on UK BioBank.
arXiv Detail & Related papers (2023-08-23T23:04:42Z)
- Dual Arbitrary Scale Super-Resolution for Multi-Contrast MRI [23.50915512118989]
Multi-contrast Super-Resolution (SR) reconstruction is promising to yield SR images with higher quality.
However, radiologists are accustomed to zooming MR images at arbitrary scales rather than viewing them at a fixed scale.
We propose an implicit neural representation-based dual-arbitrary multi-contrast MRI super-resolution method, called Dual-ArbNet.
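The idea behind arbitrary-scale SR with implicit neural representations is that a continuous coordinate-to-intensity function can be sampled on any output grid. As a toy, non-learned analogue, a fixed bilinear interpolator can stand in for the learned implicit function (Dual-ArbNet's actual network is not reproduced here; all names are illustrative):

```python
def query(img, y, x):
    """Bilinearly interpolate `img` (list of rows) at continuous (y, x)."""
    h, w = len(img), len(img[0])
    y = min(max(y, 0.0), h - 1.0)
    x = min(max(x, 0.0), w - 1.0)
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bot * dy

def resample(img, out_h, out_w):
    """Sample the continuous representation on an arbitrary output grid."""
    h, w = len(img), len(img[0])
    return [[query(img, i * (h - 1) / max(out_h - 1, 1),
                        j * (w - 1) / max(out_w - 1, 1))
             for j in range(out_w)] for i in range(out_h)]

img = [[0.0, 1.0], [2.0, 3.0]]
up = resample(img, 3, 3)  # non-integer effective scale, same code path
```

In the learned setting, `query` would be a network conditioned on local features, which is what makes the scale factor a free parameter at inference time.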
arXiv Detail & Related papers (2023-07-05T14:43:26Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
We propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into the network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Single MR Image Super-Resolution using Generative Adversarial Network [0.696125353550498]
Real Enhanced Super-Resolution Generative Adversarial Network (Real-ESRGAN) is one of the recent effective approaches for producing higher-resolution images.
In this paper, we apply this method to enhance the spatial resolution of 2D MR images.
arXiv Detail & Related papers (2022-07-16T23:15:10Z)
- Cross-Modality High-Frequency Transformer for MR Image Super-Resolution [100.50972513285598]
We make an early effort to build a Transformer-based MR image super-resolution framework.
We consider two-fold domain priors including the high-frequency structure prior and the inter-modality context prior.
We establish a novel Transformer architecture, called Cross-modality high-frequency Transformer (Cohf-T), to introduce such priors into super-resolving the low-resolution images.
arXiv Detail & Related papers (2022-03-29T07:56:55Z)
- Memory-augmented Deep Unfolding Network for Guided Image Super-resolution [67.83489239124557]
Guided image super-resolution (GISR) aims to obtain a high-resolution (HR) target image by enhancing the spatial resolution of a low-resolution (LR) target image under the guidance of a HR image.
Previous model-based methods mainly take the entire image as a whole and assume a prior distribution between the HR target image and the HR guidance image.
We propose a maximum a posteriori (MAP) estimation model for GISR with two types of prior on the HR target image.
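A MAP estimation model for guided SR can be written in a standard generic form (this is the textbook decomposition, not necessarily the paper's exact objective). With $X$ the HR target, $Y$ the LR target, and $G$ the HR guidance, and assuming $Y$ is conditionally independent of $G$ given $X$:

```latex
\hat{X} = \arg\max_{X}\; p(X \mid Y, G)
        = \arg\max_{X}\; \log p(Y \mid X) + \log p(X \mid G)
```

Here $p(Y \mid X)$ is the degradation likelihood tying the HR estimate to the LR observation, and $p(X \mid G)$ encodes the prior supplied by the guidance image; terms independent of $X$ are dropped after applying Bayes' rule.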
arXiv Detail & Related papers (2022-02-12T15:37:13Z)
- Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete convolutional recurrent neural network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z)
- SRR-Net: A Super-Resolution-Involved Reconstruction Method for High Resolution MR Imaging [7.42807471627113]
Experiment results on in-vivo HR multi-coil brain data indicate that the proposed SRR-Net recovers high-resolution brain images with both good visual quality and perceptual quality.
arXiv Detail & Related papers (2021-04-13T02:19:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.