FA-GAN: Fused Attentive Generative Adversarial Networks for MRI Image
Super-Resolution
- URL: http://arxiv.org/abs/2108.03920v1
- Date: Mon, 9 Aug 2021 10:21:39 GMT
- Title: FA-GAN: Fused Attentive Generative Adversarial Networks for MRI Image
Super-Resolution
- Authors: Mingfeng Jiang, Minghao Zhi, Liying Wei, Xiaocheng Yang, Jucheng
Zhang, Yongming Li, Pin Wang, Jiahao Huang, Guang Yang
- Abstract summary: A framework called the Fused Attentive Generative Adversarial Networks (FA-GAN) is proposed to generate super-resolution magnetic resonance images.
40 sets of 3D magnetic resonance images are used to train the network, and 10 sets of images are used to test the proposed method.
- Score: 8.778205385041549
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-resolution magnetic resonance images can provide fine-grained anatomical
information, but acquiring such data requires a long scanning time. In this
paper, a framework called the Fused Attentive Generative Adversarial
Networks (FA-GAN) is proposed to generate super-resolution MR images from
low-resolution magnetic resonance images, which effectively reduces scanning
time while preserving high-resolution detail. In the FA-GAN framework, the
local fusion feature block, consisting of three parallel pathways with
different convolution kernel sizes, is proposed to extract image features at
different scales, and the global feature fusion module, comprising the channel
attention module, the self-attention module, and a fusion operation, is
designed to enhance the important features of the MR image. Moreover, spectral
normalization is introduced to stabilize the discriminator network. 40 sets of
3D magnetic resonance images (each set containing 256 slices) are used to
train the network, and 10 sets of images are used to test the proposed method.
The experimental results show that the PSNR and SSIM values of the
super-resolution magnetic resonance images generated by the proposed FA-GAN
method are higher than those of state-of-the-art reconstruction methods.
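The abstract names three concrete components: a multi-kernel local fusion feature block, a global feature fusion module built from channel attention and self-attention, and spectral normalization on the discriminator. The following PyTorch sketch illustrates what such modules could look like; the class names, kernel sizes, channel widths, and attention formulations (SE-style channel attention, SAGAN-style self-attention) are assumptions made for illustration, not the authors' exact FA-GAN configuration.

```python
# Illustrative sketch only: layer choices and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalFusionFeatureBlock(nn.Module):
    """Three parallel convolution paths with different kernel sizes,
    concatenated and fused to capture features at multiple scales."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.path1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.path3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.path5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x):
        multi_scale = torch.cat(
            [F.relu(self.path1(x)), F.relu(self.path3(x)), F.relu(self.path5(x))],
            dim=1,
        )
        return self.fuse(multi_scale) + x  # residual connection over the fused paths


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed formulation)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mlp(self.pool(x))  # reweight channels by global statistics


class SelfAttention(nn.Module):
    """SAGAN-style spatial self-attention (assumed formulation)."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)      # (b, hw, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


# Spectral normalization on a discriminator layer, as mentioned in the abstract;
# nn.utils.spectral_norm constrains the spectral norm of the layer's weight.
disc_conv = nn.utils.spectral_norm(
    nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1)
)
```

The global feature fusion module would then combine the outputs of the channel attention and self-attention branches through a fusion operation; the abstract does not specify the exact combination, so it is omitted here.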
Related papers
- Learning Two-factor Representation for Magnetic Resonance Image Super-resolution [1.294284364022674]
We propose a novel method for MR image super-resolution based on two-factor representation.
Specifically, we factorize intensity signals into a linear combination of learnable basis and coefficient factors.
Our method achieves state-of-the-art performance, providing superior visual fidelity and robustness.
arXiv Detail & Related papers (2024-09-15T13:32:24Z)
- Dual Arbitrary Scale Super-Resolution for Multi-Contrast MRI [23.50915512118989]
Multi-contrast Super-Resolution (SR) reconstruction is a promising way to yield SR images of higher quality.
Radiologists, however, are accustomed to zooming MR images at arbitrary scales rather than using a fixed scale.
We propose an implicit neural representations based dual-arbitrary multi-contrast MRI super-resolution method, called Dual-ArbNet.
arXiv Detail & Related papers (2023-07-05T14:43:26Z)
- Flexible Alignment Super-Resolution Network for Multi-Contrast MRI [7.727046305845654]
Super-Resolution plays a crucial role in preprocessing the low-resolution images for more precise medical analysis.
We propose the Flexible Alignment Super-Resolution Network (FASR-Net) for multi-contrast magnetic resonance image super-resolution.
arXiv Detail & Related papers (2022-10-07T11:07:20Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Cross-Modality High-Frequency Transformer for MR Image Super-Resolution [100.50972513285598]
We make an early effort to build a Transformer-based MR image super-resolution framework.
We consider two-fold domain priors including the high-frequency structure prior and the inter-modality context prior.
We establish a novel Transformer architecture, called Cross-modality high-frequency Transformer (Cohf-T), to introduce such priors into super-resolving the low-resolution images.
arXiv Detail & Related papers (2022-03-29T07:56:55Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- High-Resolution Pelvic MRI Reconstruction Using a Generative Adversarial Network with Attention and Cyclic Loss [3.4358954898228604]
Super-resolution methods have shown excellent performance in accelerating MRI.
In some circumstances, it is difficult to obtain high-resolution images even with prolonged scan time.
We proposed a novel super-resolution method that uses a generative adversarial network (GAN) with cyclic loss and attention mechanism.
arXiv Detail & Related papers (2021-07-21T10:07:22Z)
- ShuffleUNet: Super resolution of diffusion-weighted MRIs using deep learning [47.68307909984442]
Single Image Super-Resolution (SISR) is a technique aimed at obtaining high-resolution (HR) details from a single low-resolution input image.
Deep learning extracts prior knowledge from big datasets and produces superior MR images from their low-resolution counterparts.
arXiv Detail & Related papers (2021-02-25T14:52:23Z)
- Fine Perceptive GANs for Brain MR Image Super-Resolution in Wavelet Domain [23.23392380531189]
Fine perceptive generative adversarial networks (FP-GANs) are proposed to produce high-resolution (HR) magnetic resonance (MR) images.
Experiments on the MultiRes_7T dataset demonstrate that FP-GANs outperform the competing methods quantitatively and qualitatively.
arXiv Detail & Related papers (2020-11-09T02:09:44Z)
- Hyperspectral-Multispectral Image Fusion with Weighted LASSO [68.04032419397677]
We propose an approach for fusing hyperspectral and multispectral images to provide high-quality hyperspectral output.
We demonstrate that the proposed sparse fusion and reconstruction provides quantitatively superior results when compared to existing methods on publicly available images.
arXiv Detail & Related papers (2020-03-15T23:07:56Z)
- Hi-Net: Hybrid-fusion Network for Multi-modal MR Image Synthesis [143.55901940771568]
We propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis.
In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality.
A multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality.
arXiv Detail & Related papers (2020-02-11T08:26:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.