SOUP-GAN: Super-Resolution MRI Using Generative Adversarial Networks
- URL: http://arxiv.org/abs/2106.02599v1
- Date: Fri, 4 Jun 2021 16:59:23 GMT
- Title: SOUP-GAN: Super-Resolution MRI Using Generative Adversarial Networks
- Authors: Kuan Zhang, Haoji Hu, Kenneth Philbrick, Gian Marco Conte, Joseph D.
Sobek, Pouria Rouzrokh, Bradley J. Erickson
- Abstract summary: We propose a framework called SOUP-GAN: Super-resolution Optimized Using Perceptual-tuned Generative Adversarial Network (GAN).
Our model shows promise as a novel 3D SR technique, providing potential applications in both clinical and research settings.
- Score: 9.201328999176402
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is a growing demand for high-resolution (HR) medical images in both
clinical and research applications. Image quality is inevitably traded off against
the acquisition time for better patient comfort, lower examination costs, dose,
and fewer motion-induced artifacts. For many image-based tasks, increasing the
apparent resolution in the perpendicular plane to produce multi-planar
reformats or 3D images is commonly used. Single image super-resolution (SR) is
a promising technique to provide HR images based on unsupervised learning to
increase resolution of a 2D image, but there are few reports on 3D SR. Further,
perceptual loss has been proposed in the literature to better capture textural
details and edges than pixel-wise loss functions, by comparing the
semantic distances in the high-dimensional feature space of a pre-trained 2D
network (e.g., VGG). However, it remains unclear how one should generalize it to
3D medical images and what the attendant implications are. In this
paper, we propose a framework called SOUP-GAN: Super-resolution Optimized Using
Perceptual-tuned Generative Adversarial Network (GAN), in order to produce
thinner-slice (e.g., high resolution in the 'Z' plane) medical images with
anti-aliasing and deblurring. The proposed method outperforms conventional
resolution-enhancement methods and previous SR work on medical
images in both qualitative and quantitative comparisons. Specifically, we
examine the model in terms of its generalization for various SR ratios and
imaging modalities. By addressing those limitations, our model shows promise as
a novel 3D SR interpolation technique, providing potential applications in both
clinical and research settings.
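The "conventional resolution-enhancement methods" the abstract compares against are typically simple interpolation along the slice (Z) direction, which only increases the apparent through-plane resolution without recovering new anatomical detail. Below is a minimal sketch of such a baseline using SciPy; the function name and the 4x ratio are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import zoom


def upsample_slice_direction(volume: np.ndarray, sr_ratio: int) -> np.ndarray:
    """Linearly interpolate a (Z, H, W) volume along Z to mimic thinner slices.

    This is the kind of conventional multi-planar-reformat baseline that
    learning-based SR methods are compared against; it adds no new detail
    and tends to blur edges along the slice direction.
    """
    return zoom(volume, zoom=(sr_ratio, 1.0, 1.0), order=1)  # order=1: trilinear


# Example: a 40-slice volume interpolated to an apparent 160 slices (SR ratio 4).
thick = np.random.rand(40, 256, 256).astype(np.float32)
thin = upsample_slice_direction(thick, sr_ratio=4)
assert thin.shape == (160, 256, 256)
```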
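The perceptual loss discussed in the abstract compares super-resolved and reference images by their distance in the feature space of a pre-trained 2D network such as VGG, rather than pixel by pixel. The following is a minimal PyTorch sketch of that idea, plus one possible slice-wise way to lift it to 3D volumes; the chosen VGG layer, the L1 feature distance, the omission of ImageNet normalization, and the slice-wise averaging are illustrative assumptions, not the exact SOUP-GAN configuration:

```python
import torch
import torch.nn.functional as F
from torchvision import models


class VGGPerceptualLoss(torch.nn.Module):
    """Distance between two images in the feature space of a pre-trained 2D VGG16."""

    def __init__(self, n_layers: int = 16):  # first 16 layers end at relu3_3 (assumed choice)
        super().__init__()
        vgg_features = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
        self.features = torch.nn.Sequential(*list(vgg_features.children())[:n_layers]).eval()
        for p in self.features.parameters():
            p.requires_grad = False  # the loss network stays frozen

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        # VGG expects 3-channel inputs; replicate single-channel MR slices.
        if sr.shape[1] == 1:
            sr, hr = sr.repeat(1, 3, 1, 1), hr.repeat(1, 3, 1, 1)
        return F.l1_loss(self.features(sr), self.features(hr))


def slicewise_perceptual_loss(sr_vol: torch.Tensor, hr_vol: torch.Tensor,
                              loss_2d: VGGPerceptualLoss) -> torch.Tensor:
    """One way to extend a 2D perceptual loss to (B, 1, D, H, W) volumes:
    score each axial slice with the 2D network and average the results."""
    depth = sr_vol.shape[2]
    losses = [loss_2d(sr_vol[:, :, d], hr_vol[:, :, d]) for d in range(depth)]
    return torch.stack(losses).mean()
```

Other slice orientations, or a weighted combination of axial, coronal, and sagittal slices, would be equally valid ways to apply the 2D loss to a volume; which scheme works best for 3D medical images is exactly the question the abstract raises.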
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Inter-slice Super-resolution of Magnetic Resonance Images by Pre-training and Self-supervised Fine-tuning [49.197385954021456]
In clinical practice, 2D magnetic resonance (MR) sequences are widely adopted. While individual 2D slices can be stacked to form a 3D volume, the relatively large slice spacing can pose challenges for visualization and subsequent analysis tasks.
To reduce slice spacing, deep-learning-based super-resolution techniques are widely investigated.
Most current solutions require a substantial number of paired high-resolution and low-resolution images for supervised training, which are typically unavailable in real-world scenarios.
arXiv Detail & Related papers (2024-06-10T02:20:26Z)
- Super-resolution of biomedical volumes with 2D supervision [84.5255884646906]
Masked slice diffusion for super-resolution exploits the inherent equivalence in the data-generating distribution across all spatial dimensions of biological specimens.
We focus on the application of SliceR to stimulated Raman histology (SRH), characterized by its rapid acquisition of high-resolution 2D images but slow and costly optical z-sectioning.
arXiv Detail & Related papers (2024-04-15T02:41:55Z) - SdCT-GAN: Reconstructing CT from Biplanar X-Rays with Self-driven
Generative Adversarial Networks [6.624839896733912]
This paper presents a new self-driven generative adversarial network model (SdCT-GAN) for reconstruction of 3D CT images.
The discriminator incorporates a novel auto-encoder structure that encourages the model to pay more attention to image details.
The LPIPS evaluation metric is adopted, which quantifies the fine contours and textures of reconstructed images better than existing metrics.
arXiv Detail & Related papers (2023-09-10T08:16:02Z) - Improving 3D Imaging with Pre-Trained Perpendicular 2D Diffusion Models [52.529394863331326]
We propose a novel approach using two perpendicular pre-trained 2D diffusion models to solve the 3D inverse problem.
Our method is highly effective for 3D medical image reconstruction tasks, including MRI Z-axis super-resolution, compressed sensing MRI, and sparse-view CT.
arXiv Detail & Related papers (2023-03-15T08:28:06Z) - Single MR Image Super-Resolution using Generative Adversarial Network [0.696125353550498]
Real Enhanced Super-Resolution Generative Adversarial Network (Real-ESRGAN) is one of the recent effective approaches used to produce higher-resolution images.
In this paper, we apply this method to enhance the spatial resolution of 2D MR images.
arXiv Detail & Related papers (2022-07-16T23:15:10Z) - Multimodal-Boost: Multimodal Medical Image Super-Resolution using
Multi-Attention Network with Wavelet Transform [5.416279158834623]
Loss of image resolution degrades the overall performance of medical image diagnosis.
Deep-learning-based single image super-resolution (SISR) algorithms have revolutionized the overall diagnosis framework.
This work proposes generative adversarial network (GAN) with deep multi-attention modules to learn high-frequency information from low-frequency data.
arXiv Detail & Related papers (2021-10-22T10:13:46Z) - 3D Human Pose, Shape and Texture from Low-Resolution Images and Videos [107.36352212367179]
We propose RSC-Net, which consists of a Resolution-aware network, a Self-supervision loss, and a Contrastive learning scheme.
The proposed method is able to learn 3D body pose and shape across different resolutions with one single model.
We extend the RSC-Net to handle low-resolution videos and apply it to reconstruct textured 3D pedestrians from low-resolution input.
arXiv Detail & Related papers (2021-03-11T06:52:12Z) - Hierarchical Amortized Training for Memory-efficient High Resolution 3D
GAN [52.851990439671475]
We propose a novel end-to-end GAN architecture that can generate high-resolution 3D images.
We achieve this goal by using different configurations between training and inference.
Experiments on 3D thorax CT and brain MRI demonstrate that our approach outperforms the state of the art in image generation.
arXiv Detail & Related papers (2020-08-05T02:33:04Z) - HRINet: Alternative Supervision Network for High-resolution CT image
Interpolation [3.7966959476339035]
We propose a novel network, the High Resolution Interpolation Network (HRINet), aimed at producing high-resolution CT images.
We combine the ideas of ACAI and GANs and propose a novel alternative-supervision scheme that applies both supervised and unsupervised training.
Our experiments show great improvement on 256² and 512² images, both quantitatively and qualitatively.
arXiv Detail & Related papers (2020-02-11T15:09:42Z)