3D Volumetric Super-Resolution in Radiology Using 3D RRDB-GAN
- URL: http://arxiv.org/abs/2402.04171v1
- Date: Tue, 6 Feb 2024 17:26:18 GMT
- Title: 3D Volumetric Super-Resolution in Radiology Using 3D RRDB-GAN
- Authors: Juhyung Ha, Nian Wang, Surendra Maharjan, Xuhong Zhang
- Abstract summary: This study introduces the 3D Residual-in-Residual Dense Block GAN (3D RRDB-GAN) for 3D super-resolution of radiology imagery.
A key aspect of 3D RRDB-GAN is the integration of a 2.5D perceptual loss function, which contributes to improved volumetric image quality and realism.
- Score: 4.8698443014985715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study introduces the 3D Residual-in-Residual Dense Block GAN (3D
RRDB-GAN) for 3D super-resolution of radiology imagery. A key aspect of 3D
RRDB-GAN is the integration of a 2.5D perceptual loss function, which
contributes to improved volumetric image quality and realism. The effectiveness
of our model was evaluated through 4x super-resolution experiments across
diverse datasets, including Mice Brain MRH, OASIS, HCP1200, and MSD-Task-6.
These evaluations, encompassing both quantitative metrics like LPIPS and FID
and qualitative assessments through sample visualizations, demonstrate the
model's effectiveness in detailed image analysis. The 3D RRDB-GAN offers a
significant contribution to medical imaging, particularly by enriching the
depth, clarity, and volumetric detail of medical images. Its application shows
promise in enhancing the interpretation and analysis of complex medical imagery
from a comprehensive 3D perspective.
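The 2.5D perceptual loss highlighted in the abstract is commonly realized by applying a 2D feature extractor to slices taken along all three orthogonal axes of the volume and averaging the per-slice feature distances. The sketch below illustrates only that slicing-and-averaging pattern in NumPy; the `grad_feat` stand-in and all names are illustrative assumptions, not the paper's implementation, which would use features from a pretrained 2D CNN (e.g. VGG) instead.

```python
import numpy as np

def loss_25d(sr, hr, feat):
    """Average a slice-wise feature loss over the three orthogonal axes.

    Sketch of the 2.5D idea: instead of a full 3D feature network, a 2D
    feature map `feat` is applied to every axial, coronal, and sagittal
    slice, and the per-slice L1 feature distances are averaged.
    """
    total, count = 0.0, 0
    for axis in range(3):                      # axial / coronal / sagittal
        for i in range(sr.shape[axis]):
            s = np.take(sr, i, axis=axis)      # 2D slice of the SR volume
            h = np.take(hr, i, axis=axis)      # matching HR slice
            total += np.abs(feat(s) - feat(h)).mean()
            count += 1
    return total / count

# Stand-in "feature extractor": a simple image gradient. A real 2.5D
# perceptual loss would use activations of a pretrained 2D CNN here.
grad_feat = lambda x: np.gradient(x.astype(float))[0]

rng = np.random.default_rng(0)
hr = rng.random((16, 16, 16))                  # toy high-resolution volume
sr = hr + 0.01 * rng.random((16, 16, 16))      # toy super-resolved output
print(loss_25d(sr, hr, grad_feat))
```

The loss is zero for identical volumes and grows with the feature-space discrepancy, which is the property the averaged slice-wise formulation preserves while avoiding the memory cost of a 3D perceptual network.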
Related papers
- Diff3Dformer: Leveraging Slice Sequence Diffusion for Enhanced 3D CT Classification with Transformer Networks [5.806035963947936]
We propose a Diffusion-based 3D Vision Transformer (Diff3Dformer) to aggregate repetitive information within 3D CT scans.
Our method exhibits improved performance on two different scales of small datasets of 3D lung CT scans.
arXiv Detail & Related papers (2024-06-24T23:23:18Z)
- Super-resolution of biomedical volumes with 2D supervision [84.5255884646906]
Masked slice diffusion for super-resolution exploits the inherent equivalence in the data-generating distribution across all spatial dimensions of biological specimens.
We focus on the application of SliceR to stimulated Raman histology (SRH), characterized by its rapid acquisition of high-resolution 2D images but slow and costly optical z-sectioning.
arXiv Detail & Related papers (2024-04-15T02:41:55Z)
- Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, noted as the informed slice to serve the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
arXiv Detail & Related papers (2024-03-19T15:57:04Z)
- MinD-3D: Reconstruct High-quality 3D objects in Human Brain [50.534007259536715]
Recon3DMind is an innovative task aimed at reconstructing 3D visuals from Functional Magnetic Resonance Imaging (fMRI) signals.
We present the fMRI-Shape dataset, which includes data from 14 participants and features 360-degree videos of 3D objects.
We propose MinD-3D, a novel and effective three-stage framework specifically designed to decode the brain's 3D visual information from fMRI signals.
arXiv Detail & Related papers (2023-12-12T18:21:36Z)
- 3D-MIR: A Benchmark and Empirical Study on 3D Medical Image Retrieval in Radiology [6.851500027718433]
The field of 3D medical image retrieval is still emerging, lacking established evaluation benchmarks, comprehensive datasets, and thorough studies.
This paper introduces a novel benchmark for 3D Medical Image Retrieval (3D-MIR) that encompasses four different anatomies imaged with computed tomography.
Using this benchmark, we explore a diverse set of search strategies that use aggregated 2D slices, 3D volumes, and multi-modal embeddings from popular multi-modal foundation models as queries.
arXiv Detail & Related papers (2023-11-23T00:57:35Z)
- Improving 3D Imaging with Pre-Trained Perpendicular 2D Diffusion Models [52.529394863331326]
We propose a novel approach using two perpendicular pre-trained 2D diffusion models to solve the 3D inverse problem.
Our method is highly effective for 3D medical image reconstruction tasks, including MRI Z-axis super-resolution, compressed sensing MRI, and sparse-view CT.
arXiv Detail & Related papers (2023-03-15T08:28:06Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Deep Volumetric Universal Lesion Detection using Light-Weight Pseudo 3D Convolution and Surface Point Regression [23.916776570010285]
Computer-aided lesion/significant-findings detection techniques are at the core of medical imaging.
We propose a novel deep anchor-free one-stage VULD framework that incorporates (1) P3DC operators to recycle the architectural configurations and pre-trained weights of off-the-shelf 2D networks, and (2) a new SPR method that effectively regresses the 3D lesion spatial extents by pinpointing their representative key points on lesion surfaces.
arXiv Detail & Related papers (2020-08-30T19:42:06Z)
- Hierarchical Amortized Training for Memory-efficient High Resolution 3D GAN [52.851990439671475]
We propose a novel end-to-end GAN architecture that can generate high-resolution 3D images.
We achieve this goal by using different configurations between training and inference.
Experiments on 3D thorax CT and brain MRI demonstrate that our approach outperforms the state of the art in image generation.
arXiv Detail & Related papers (2020-08-05T02:33:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.