3D Nephrographic Image Synthesis in CT Urography with the Diffusion Model and Swin Transformer
- URL: http://arxiv.org/abs/2502.19623v1
- Date: Wed, 26 Feb 2025 23:22:31 GMT
- Authors: Hongkun Yu, Syed Jamal Safdar Gardezi, E. Jason Abel, Daniel Shapiro, Meghan G. Lubner, Joshua Warner, Matthew Smith, Giuseppe Toia, Lu Mao, Pallavi Tiwari, Andrew L. Wentland
- Abstract summary: The proposed approach effectively synthesizes high-quality 3D nephrographic phase images. It can be used to reduce radiation dose in CTU by 33.3% without compromising image quality.
- Score: 3.8557197729550485
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Purpose: This study aims to develop and validate a method for synthesizing 3D nephrographic phase images in CT urography (CTU) examinations using a diffusion model integrated with a Swin Transformer-based deep learning approach. Materials and Methods: This retrospective study was approved by the local Institutional Review Board. A dataset comprising 327 patients who underwent three-phase CTU (mean ± SD age, 63 ± 15 years; 174 males, 153 females) was curated for deep learning model development. The three phases for each patient were aligned with an affine registration algorithm. A custom deep learning model named dsSNICT (diffusion model with a Swin transformer for synthetic nephrographic phase images in CT) was developed and implemented to synthesize the nephrographic images. Performance was assessed using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Mean Absolute Error (MAE), and Fréchet Video Distance (FVD). Qualitative evaluation by two fellowship-trained abdominal radiologists was performed. Results: The synthetic nephrographic images generated by the proposed approach achieved a PSNR of 26.3 ± 4.4 dB, an SSIM of 0.84 ± 0.069, an MAE of 12.74 ± 5.22 HU, and an FVD of 1323. Two radiologists provided average scores of 3.5 for real images and 3.4 for synthetic images (P = 0.5) on a 1-5 Likert scale, indicating that the synthetic images closely resemble real images. Conclusion: The proposed approach effectively synthesizes high-quality 3D nephrographic phase images. This model can be used to reduce radiation dose in CTU by 33.3% without compromising image quality, thereby enhancing the safety and diagnostic utility of CT urography.
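The PSNR and MAE metrics reported in the abstract are standard image-fidelity measures that can be computed for any real/synthetic volume pair. A minimal sketch (not code from the paper; it assumes the volumes are NumPy arrays in Hounsfield units):

```python
import numpy as np

def psnr(real, synth, data_range=None):
    """Peak Signal-to-Noise Ratio in dB between two CT volumes."""
    real = real.astype(np.float64)
    synth = synth.astype(np.float64)
    if data_range is None:
        # Default to the dynamic range of the reference volume.
        data_range = real.max() - real.min()
    mse = np.mean((real - synth) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def mae(real, synth):
    """Mean Absolute Error; reported in HU when inputs are in Hounsfield units."""
    return float(np.mean(np.abs(real.astype(np.float64) - synth.astype(np.float64))))
```

In practice, libraries such as scikit-image provide equivalent implementations (including SSIM), but the sketch shows what the reported HU-scale numbers measure.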
Related papers
- Synthetic CT image generation from CBCT: A Systematic Review [44.01505745127782]
Generation of synthetic CT (sCT) images from cone-beam CT (CBCT) data using deep learning methodologies represents a significant advancement in radiation oncology. A total of 35 relevant studies were identified and analyzed, revealing the prevalence of deep learning approaches in the generation of sCT.
arXiv Detail & Related papers (2025-01-22T13:54:07Z) - ResNCT: A Deep Learning Model for the Synthesis of Nephrographic Phase Images in CT Urography [1.927688129012441]
The ResNCT model successfully generated synthetic nephrographic images from non-contrast and urographic image inputs.
The model provides a means of eliminating the acquisition of the nephrographic phase with a resultant 33% reduction in radiation dose for CTU examinations.
arXiv Detail & Related papers (2024-05-07T19:20:32Z) - Deep-Learning-based Fast and Accurate 3D CT Deformable Image Registration in Lung Cancer [14.31661366393592]
The visibility of the tumor is limited since the patient's 3D anatomy is projected onto a 2D plane.
A solution is to reconstruct the 3D CT image from the kV images obtained at the treatment isocenter in the treatment position.
A patient-specific vision-transformer-based network was developed and shown to be accurate and efficient.
arXiv Detail & Related papers (2023-04-21T17:18:21Z) - Validated respiratory drug deposition predictions from 2D and 3D medical images with statistical shape models and convolutional neural networks [47.187609203210705]
We aim to develop and validate an automated computational framework for patient-specific deposition modelling.
An image processing approach is proposed that could produce 3D patient respiratory geometries from 2D chest X-rays and 3D CT images.
arXiv Detail & Related papers (2023-03-02T07:47:07Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - Incremental Cross-view Mutual Distillation for Self-supervised Medical
CT Synthesis [88.39466012709205]
This paper presents a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z) - CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for
Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z) - A framework for quantitative analysis of Computed Tomography images of
viral pneumonitis: radiomic features in COVID and non-COVID patients [0.0]
1028 chest CT images of patients with a positive swab were segmented automatically for lung extraction.
A Gaussian model was applied to calculate quantitative metrics (QM) describing well-aerated and ill portions of the lungs.
Radiomic features (RF) of first and second order were extracted from bilateral lungs.
Four artificial intelligence-based models for classifying patients with COVID and non-COVID viral pneumonia were developed.
arXiv Detail & Related papers (2021-09-28T15:22:24Z) - Automated Model Design and Benchmarking of 3D Deep Learning Models for
COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for the 3D DL models for 3D chest CT scans classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
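The CAM technique referenced above weights the final convolutional feature maps by the classifier weights of the target class to localize the image regions driving a prediction. A minimal NumPy sketch of the idea (illustrative only, not the cited paper's implementation):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a Class Activation Map (CAM).

    feature_maps: (C, H, W) activations from the last conv layer.
    fc_weights:   (num_classes, C) weights of the final linear layer
                  that follows global average pooling.
    """
    # Weighted sum of the C channels using the target class's weights.
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0)       # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()      # normalize to [0, 1] for overlay
    return cam
```

The resulting (H, W) map is typically upsampled to the input resolution and overlaid on the CT slice as a heatmap.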
arXiv Detail & Related papers (2021-01-14T03:45:01Z) - Revisiting 3D Context Modeling with Supervised Pre-training for
Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z) - Cephalogram Synthesis and Landmark Detection in Dental Cone-Beam CT
Systems [11.242436948609715]
We propose a sigmoid-based intensity transform that uses the nonlinear optical property of X-ray films to increase image contrast of synthetic cephalograms.
For low dose purpose, the pixel-to-pixel generative adversarial network (pix2pixGAN) is proposed for 2D cephalogram synthesis directly from two CBCT projections.
For landmark detection in the synthetic cephalograms, an efficient automatic landmark detection method using the combination of LeNet-5 and ResNet50 is proposed.
arXiv Detail & Related papers (2020-09-09T17:06:54Z) - Using a Generative Adversarial Network for CT Normalization and its
Impact on Radiomic Features [3.4548443472506194]
Radiomic features are sensitive to differences in acquisitions due to variations in dose levels and slice thickness.
A 3D generative adversarial network (GAN) was used to normalize reduced-dose, thick-slice (2.0 mm) images to normal-dose (100%), thin-slice (1.0 mm) images.
arXiv Detail & Related papers (2020-01-22T23:41:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.