ResNCT: A Deep Learning Model for the Synthesis of Nephrographic Phase Images in CT Urography
- URL: http://arxiv.org/abs/2405.04629v2
- Date: Wed, 29 May 2024 02:12:44 GMT
- Title: ResNCT: A Deep Learning Model for the Synthesis of Nephrographic Phase Images in CT Urography
- Authors: Syed Jamal Safdar Gardezi, Lucas Aronson, Peter Wawrzyn, Hongkun Yu, E. Jason Abel, Daniel D. Shapiro, Meghan G. Lubner, Joshua Warner, Giuseppe Toia, Lu Mao, Pallavi Tiwari, Andrew L. Wentland
- Abstract summary: The ResNCT model successfully generated synthetic nephrographic images from non-contrast and urographic image inputs.
The model provides a means of eliminating the acquisition of the nephrographic phase with a resultant 33% reduction in radiation dose for CTU examinations.
- Score: 1.927688129012441
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Purpose: To develop and evaluate a transformer-based deep learning model for the synthesis of nephrographic phase images in CT urography (CTU) examinations from the unenhanced and urographic phases. Materials and Methods: This retrospective study was approved by the local Institutional Review Board. A dataset of 119 patients (mean $\pm$ SD age, 65 $\pm$ 12 years; 75 males/44 females) with three-phase CT urography studies was curated for deep learning model development. The three phases for each patient were aligned with an affine registration algorithm. A custom model, coined Residual transformer model for Nephrographic phase CT image synthesis (ResNCT), was developed and implemented with paired inputs of non-contrast and urographic image sets and trained to produce nephrographic phase images, which were compared with the corresponding ground truth nephrographic phase images. The synthesized images were evaluated with multiple performance metrics, including peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), normalized cross-correlation coefficient (NCC), mean absolute error (MAE), and root mean squared error (RMSE). Results: The ResNCT model successfully generated synthetic nephrographic images from non-contrast and urographic image inputs. With respect to ground truth nephrographic phase images, the images synthesized by the model achieved high PSNR (27.8 $\pm$ 2.7 dB), SSIM (0.88 $\pm$ 0.05), and NCC (0.98 $\pm$ 0.02), and low MAE (0.02 $\pm$ 0.005) and RMSE (0.042 $\pm$ 0.016). Conclusion: The ResNCT model synthesized nephrographic phase CT images with high similarity to ground truth images. The ResNCT model provides a means of eliminating the acquisition of the nephrographic phase, with a resultant 33% reduction in radiation dose for CTU examinations.
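The Results above report five similarity metrics (PSNR, SSIM, NCC, MAE, RMSE) between synthesized and ground-truth nephrographic volumes. The snippet below is a minimal sketch, not the authors' code, of how these metrics could be computed with NumPy and scikit-image; it assumes both volumes are NumPy arrays normalized to [0, 1], which is consistent with the reported MAE/RMSE scale.

```python
# Minimal sketch (not the authors' implementation): compute the five
# image-similarity metrics reported in the abstract between a synthesized
# nephrographic volume and its ground-truth counterpart.
# Assumption: both volumes are float arrays normalized to the range [0, 1].
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_synthesis(synthetic: np.ndarray, ground_truth: np.ndarray) -> dict:
    """Return PSNR, SSIM, NCC, MAE, and RMSE for a pair of volumes."""
    data_range = 1.0  # assumed intensity range after normalization

    psnr = peak_signal_noise_ratio(ground_truth, synthetic, data_range=data_range)
    ssim = structural_similarity(ground_truth, synthetic, data_range=data_range)

    # Normalized cross-correlation coefficient (Pearson correlation of intensities).
    gt = ground_truth.ravel() - ground_truth.mean()
    sy = synthetic.ravel() - synthetic.mean()
    ncc = float(np.dot(gt, sy) / (np.linalg.norm(gt) * np.linalg.norm(sy) + 1e-12))

    mae = float(np.mean(np.abs(synthetic - ground_truth)))
    rmse = float(np.sqrt(np.mean((synthetic - ground_truth) ** 2)))

    return {"PSNR_dB": psnr, "SSIM": ssim, "NCC": ncc, "MAE": mae, "RMSE": rmse}


if __name__ == "__main__":
    # Toy example with random volumes standing in for real CT phases.
    rng = np.random.default_rng(0)
    gt_vol = rng.random((64, 128, 128)).astype(np.float32)
    noise = 0.01 * rng.standard_normal(gt_vol.shape).astype(np.float32)
    syn_vol = np.clip(gt_vol + noise, 0.0, 1.0)
    print(evaluate_synthesis(syn_vol, gt_vol))
```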
Related papers
- seg2med: a segmentation-based medical image generation framework using denoising diffusion probabilistic models [5.92914320764123]
seg2med is an advanced medical image synthesis framework.
It generates high-quality synthetic medical images conditioned on anatomical masks from TotalSegmentator.
arXiv Detail & Related papers (2025-04-12T11:32:32Z)
- 3D Nephrographic Image Synthesis in CT Urography with the Diffusion Model and Swin Transformer [3.8557197729550485]
The proposed approach effectively synthesizes high-quality 3D nephrographic phase images.
It can be used to reduce radiation dose in CTU by 33.3% without compromising image quality.
arXiv Detail & Related papers (2025-02-26T23:22:31Z)
- Synthetic CT image generation from CBCT: A Systematic Review [44.01505745127782]
Generation of synthetic CT (sCT) images from cone-beam CT (CBCT) data using deep learning methodologies represents a significant advancement in radiation oncology.
A total of 35 relevant studies were identified and analyzed, revealing the prevalence of deep learning approaches in the generation of sCT.
arXiv Detail & Related papers (2025-01-22T13:54:07Z)
- Cycle-Constrained Adversarial Denoising Convolutional Network for PET Image Denoising: Multi-Dimensional Validation on Large Datasets with Reader Study and Real Low-Dose Data [9.160782425067712]
We propose a Cycle-Constrained Adversarial Denoising Convolutional Network (Cycle-DCN) to reconstruct full-dose-quality images from low-dose scans.
Experiments were conducted on a large dataset consisting of raw PET brain data from 1,224 patients.
Cycle-DCN significantly improves average Peak Signal-to-Noise Ratio (PSNR), SSIM, and Normalized Root Mean Square Error (NRMSE) across three dose levels.
arXiv Detail & Related papers (2024-10-31T04:34:28Z)
- Retinal OCT Synthesis with Denoising Diffusion Probabilistic Models for Layer Segmentation [2.4113205575263708]
We propose an image synthesis method that utilizes denoising diffusion probabilistic models (DDPMs) to automatically generate retinal optical coherence tomography (OCT) images.
We observe a consistent improvement in layer segmentation accuracy, which is validated using various neural networks.
These findings demonstrate the promising potential of DDPMs in reducing the need for manual annotations of retinal OCT images.
arXiv Detail & Related papers (2023-11-09T16:09:24Z)
- Synthetic CT Generation from MRI using 3D Transformer-based Denoising Diffusion Model [2.232713445482175]
Magnetic resonance imaging (MRI)-based synthetic computed tomography (sCT) simplifies radiation therapy treatment planning.
We propose an MRI-to-CT transformer-based denoising diffusion probabilistic model (MC-DDPM) to transform MRI into high-quality sCT.
arXiv Detail & Related papers (2023-05-31T00:32:00Z)
- Generalizable synthetic MRI with physics-informed convolutional networks [57.628770497971246]
We develop a physics-informed deep learning-based method to synthesize multiple brain magnetic resonance imaging (MRI) contrasts from a single five-minute acquisition.
We investigate its ability to generalize to arbitrary contrasts to accelerate neuroimaging protocols.
arXiv Detail & Related papers (2023-05-21T21:16:20Z)
- Evaluation of Synthetically Generated CT for use in Transcranial Focused Ultrasound Procedures [5.921808547303054]
Transcranial focused ultrasound (tFUS) is a therapeutic ultrasound method that focuses sound through the skull to a small region noninvasively and often under MRI guidance.
CT imaging is used to estimate the acoustic properties that vary between individual skulls to enable effective focusing during tFUS procedures.
Here, we synthesized CT images from routinely acquired T1-weighted MRI by using a 3D patch-based conditional generative adversarial network (cGAN).
We compared the performance of sCT to real CT (rCT) images for tFUS planning using Kranion and simulations using the acoustic toolbox.
arXiv Detail & Related papers (2022-10-26T15:15:24Z)
- Automated SSIM Regression for Detection and Quantification of Motion Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts in magnetic resonance brain images are a crucial issue.
The assessment of MR image quality is fundamental before proceeding with the clinical diagnosis.
An automated image quality assessment method based on structural similarity index (SSIM) regression is proposed.
arXiv Detail & Related papers (2022-06-14T10:16:54Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and use the enhanced results to address the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- OCT-GAN: Single Step Shadow and Noise Removal from Optical Coherence Tomography Images of the Human Optic Nerve Head [47.812972855826985]
We developed a single process that successfully removed both noise and retinal shadows from unseen single-frame B-scans within 10.4ms.
The proposed algorithm reduces the necessity for long image acquisition times, minimizes expensive hardware requirements and reduces motion artifacts in OCT images.
arXiv Detail & Related papers (2020-10-06T08:32:32Z)
- SCREENet: A Multi-view Deep Convolutional Neural Network for Classification of High-resolution Synthetic Mammographic Screening Scans [3.8137985834223502]
We develop and evaluate a multi-view deep learning approach to the analysis of high-resolution synthetic mammograms.
We assess the effect on accuracy of image resolution and training set size.
arXiv Detail & Related papers (2020-09-18T00:12:33Z)
- Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sorensen coefficients by $4.2\% \sim 9.4\%$.
arXiv Detail & Related papers (2020-05-27T06:58:39Z)