Transformer and GAN Based Super-Resolution Reconstruction Network for
Medical Images
- URL: http://arxiv.org/abs/2212.13068v1
- Date: Mon, 26 Dec 2022 09:52:12 GMT
- Title: Transformer and GAN Based Super-Resolution Reconstruction Network for
Medical Images
- Authors: Weizhi Du and Harvey Tian
- Abstract summary: Super-resolution reconstruction in medical imaging, such as magnetic resonance imaging (MRI), has become more popular.
In this paper, we offer a deep learning-based strategy for reconstructing medical images from low resolutions utilizing Transformer and Generative Adversarial Networks (T-GAN).
The integrated system can extract more precise texture information and focus more on important locations through global image matching.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Because of the necessity to obtain high-quality images with minimal radiation
doses, such as in low-field magnetic resonance imaging (MRI), super-resolution
reconstruction in medical imaging has become more popular. However, due
to the complexity and high aesthetic requirements of medical imaging, image
super-resolution reconstruction remains a difficult challenge. In this paper,
we offer a deep learning-based strategy for reconstructing medical images from
low resolutions utilizing Transformer and Generative Adversarial Networks
(T-GAN). By inserting a Transformer into the generative adversarial network for
image reconstruction, the integrated system can extract more precise texture
information and focus on important locations through global image matching.
Furthermore, we weight the combination of content
loss, adversarial loss, and adversarial feature loss as the final multi-task
loss function during the training of our proposed model, T-GAN. Measured by
established metrics such as PSNR and SSIM, our proposed T-GAN achieves the best
performance and recovers more texture features in super-resolution
reconstruction of knee and belly MRI scans.
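The abstract describes the training objective as a weighted combination of content loss, adversarial loss, and adversarial feature loss. Below is a minimal sketch of what such a multi-task loss could look like; the specific weight values, the L1/BCE loss choices, and the feature extractor are illustrative assumptions, not details reported in the paper.

    import torch
    import torch.nn as nn

    class TGANLoss(nn.Module):
        """Weighted sum of content, adversarial, and adversarial-feature losses (assumed form)."""

        def __init__(self, feature_extractor: nn.Module,
                     w_content: float = 1.0, w_adv: float = 1e-3, w_feat: float = 6e-3):
            super().__init__()
            self.feature_extractor = feature_extractor  # assumed: a fixed pretrained encoder
            self.w_content, self.w_adv, self.w_feat = w_content, w_adv, w_feat
            self.l1 = nn.L1Loss()
            self.bce = nn.BCEWithLogitsLoss()

        def forward(self, sr, hr, d_logits_on_sr):
            # Content loss: pixel-wise difference between the SR output and the HR target.
            content = self.l1(sr, hr)
            # Adversarial loss: push the discriminator to label the SR output as real.
            adversarial = self.bce(d_logits_on_sr, torch.ones_like(d_logits_on_sr))
            # Adversarial feature (perceptual) loss: difference in a deep feature space.
            features = self.l1(self.feature_extractor(sr), self.feature_extractor(hr))
            return (self.w_content * content
                    + self.w_adv * adversarial
                    + self.w_feat * features)

In practice the feature extractor would typically be a fixed, pretrained network (for instance a truncated VGG), and the weights would be tuned so that the pixel-level content term dominates early training while the adversarial terms sharpen textures; these choices are assumptions for illustration only.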
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account (a generic sketch of this unfolding pattern appears after this list).
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Cross-Modality High-Frequency Transformer for MR Image Super-Resolution [100.50972513285598]
We make an early effort to build a Transformer-based MR image super-resolution framework.
We consider two-fold domain priors including the high-frequency structure prior and the inter-modality context prior.
We establish a novel Transformer architecture, called Cross-modality high-frequency Transformer (Cohf-T), to introduce such priors into the super-resolution of low-resolution images.
arXiv Detail & Related papers (2022-03-29T07:56:55Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction promises to yield SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse features across contrasts for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- HUMUS-Net: Hybrid unrolled multi-scale network architecture for accelerated MRI reconstruction [38.0542877099235]
HUMUS-Net is a hybrid architecture that combines the beneficial implicit bias and efficiency of convolutions with the power of Transformer blocks in an unrolled and multi-scale network.
Our network establishes new state of the art on the largest publicly available MRI dataset, the fastMRI dataset.
arXiv Detail & Related papers (2022-03-15T19:26:29Z)
- Multimodal-Boost: Multimodal Medical Image Super-Resolution using Multi-Attention Network with Wavelet Transform [5.416279158834623]
Loss of corresponding image resolution degrades the overall performance of medical image diagnosis.
Deep learning based single image super resolution (SISR) algorithms have revolutionized the overall diagnosis framework.
This work proposes generative adversarial network (GAN) with deep multi-attention modules to learn high-frequency information from low-frequency data.
arXiv Detail & Related papers (2021-10-22T10:13:46Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain and restore image detail in the image domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- Fine-grained MRI Reconstruction using Attentive Selection Generative Adversarial Networks [0.0]
We propose a novel attention-based deep learning framework to provide high-quality MRI reconstruction.
We incorporate large-field contextual feature integration and attention selection in a generative adversarial network (GAN) framework.
arXiv Detail & Related papers (2021-03-13T09:58:32Z)
- Reference-based Texture transfer for Single Image Super-resolution of Magnetic Resonance images [1.978587235008588]
We propose a reference-based, unpaired multi-contrast texture-transfer strategy for deep learning based in-plane and across-plane MRI super-resolution.
We apply our scheme in different super-resolution architectures, observing improvement in PSNR and SSIM for 4x super-resolution in most cases.
arXiv Detail & Related papers (2021-02-10T14:12:48Z)
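Several of the entries above (MGDUN, HUMUS-Net) revolve around unrolling an iterative reconstruction algorithm into a trainable network. The following is a minimal sketch of that generic unfolding pattern, referenced from the MGDUN entry; the UnrolledSR module, the average-pooling observation operator, the learned step size, the number of iterations, and the small CNN priors are illustrative assumptions and do not reproduce any of the published architectures.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class UnrolledSR(nn.Module):
        """Alternates a data-consistency gradient step with a learned residual prior (assumed setup)."""

        def __init__(self, scale: int = 2, iters: int = 4, channels: int = 32):
            super().__init__()
            self.scale = scale
            self.step = nn.Parameter(torch.tensor(0.5))  # learned gradient step size
            self.priors = nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(1, channels, 3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(channels, 1, 3, padding=1),
                )
                for _ in range(iters)
            ])

        def A(self, x):
            # Assumed observation operator: simple average-pool downsampling.
            return F.avg_pool2d(x, self.scale)

        def At(self, y):
            # Adjoint-like operator: upsample back to the high-resolution grid.
            return F.interpolate(y, scale_factor=self.scale, mode="nearest")

        def forward(self, y):
            x = self.At(y)  # initial high-resolution estimate
            for prior in self.priors:
                # Data-consistency gradient step on ||A(x) - y||^2 ...
                x = x - self.step * self.At(self.A(x) - y)
                # ... followed by a learned residual prior (the "unfolded" network part).
                x = x + prior(x)
            return x

As a usage example, x_hat = UnrolledSR(scale=2)(y) would map a low-resolution batch y of shape (N, 1, H, W) to an estimate of shape (N, 1, 2H, 2W); training backpropagates through all unrolled iterations, which is what distinguishes deep unfolding from running a classical iterative solver.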