STAN-CT: Standardizing CT Image using Generative Adversarial Network
- URL: http://arxiv.org/abs/2004.01307v1
- Date: Thu, 2 Apr 2020 23:43:06 GMT
- Title: STAN-CT: Standardizing CT Image using Generative Adversarial Network
- Authors: Md Selim, Jie Zhang, Baowei Fei, Guo-Qiang Zhang and Jin Chen
- Abstract summary: We present an end-to-end solution called STAN-CT for CT image standardization and normalization.
STAN-CT consists of two components: 1) a novel Generative Adversarial Network (GAN) model that is capable of effectively learning the data distribution of a standard imaging protocol with only a few rounds of generator training, and 2) an automatic DICOM reconstruction pipeline with systematic image quality control.
- Score: 10.660781755744312
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computed tomography (CT) plays an important role in lung malignancy diagnostics, therapy assessment, and the delivery of precision medicine. However, the use of personalized imaging protocols poses a challenge in large-scale cross-center CT image radiomic studies. We present an end-to-end solution called STAN-CT for CT image standardization and normalization, which effectively reduces discrepancies in image features caused by using different imaging protocols or using different CT scanners with the same imaging protocol. STAN-CT consists of two components: 1) a novel Generative Adversarial Network (GAN) model that is capable of effectively learning the data distribution of a standard imaging protocol with only a few rounds of generator training, and 2) an automatic DICOM reconstruction pipeline with systematic image quality control that ensures the generation of high-quality standard DICOM images. Experimental results indicate that the training efficiency and model performance of STAN-CT are significantly improved compared to state-of-the-art CT image standardization and normalization algorithms.
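The GAN component described in the abstract maps CT images acquired under non-standard protocols toward the image distribution of a chosen standard protocol. As a rough illustration of that general idea only, and not of the actual STAN-CT architecture or losses, the following PyTorch-style sketch shows one paired image-to-image GAN training step; the toy network sizes, the pix2pix-style L1 term, and the `lambda_l1` weight are illustrative assumptions.

```python
# Minimal sketch of adversarial training for CT protocol standardization.
# Assumptions (not from the paper): a pix2pix-style setup with an L1 term,
# toy network sizes, and single-channel CT slices.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a non-standard-protocol slice to a standardized slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether a slice looks like the standard imaging protocol."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(G, D, opt_g, opt_d, x_nonstd, x_std, lambda_l1=10.0):
    """One adversarial update on a paired (non-standard, standard) batch."""
    bce = nn.BCEWithLogitsLoss()
    # Discriminator update: real standard slices vs. generated ones.
    opt_d.zero_grad()
    fake = G(x_nonstd).detach()
    d_loss = bce(D(x_std), torch.ones(x_std.size(0), 1)) + \
             bce(D(fake), torch.zeros(x_std.size(0), 1))
    d_loss.backward()
    opt_d.step()
    # Generator update: fool D and stay close to the paired standard image.
    opt_g.zero_grad()
    fake = G(x_nonstd)
    g_loss = bce(D(fake), torch.ones(x_std.size(0), 1)) + \
             lambda_l1 * nn.functional.l1_loss(fake, x_std)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    x_nonstd = torch.randn(2, 1, 64, 64)   # stand-in for non-standard-protocol slices
    x_std = torch.randn(2, 1, 64, 64)      # stand-in for standard-protocol slices
    print(train_step(G, D, opt_g, opt_d, x_nonstd, x_std))
```

In the paper, the generated slices are additionally assembled into DICOM series by the second component, the automatic reconstruction pipeline with systematic image quality control; that stage is not shown here.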
Related papers
- Improving Cone-Beam CT Image Quality with Knowledge Distillation-Enhanced Diffusion Model in Imbalanced Data Settings [6.157230849293829]
Daily cone-beam CT (CBCT) imaging, pivotal for therapy adjustment, falls short in tissue density accuracy.
We make maximal use of CBCT data acquired during therapy, complemented by sparse paired fan-beam CTs.
Our approach shows promise in generating high-quality CT images from CBCT scans for radiotherapy.
arXiv Detail & Related papers (2024-09-19T07:56:06Z)
- WIA-LD2ND: Wavelet-based Image Alignment for Self-supervised Low-Dose CT Denoising [74.14134385961775]
We introduce a novel self-supervised CT image denoising method called WIA-LD2ND, only using NDCT data.
WIA-LD2ND comprises two modules: Wavelet-based Image Alignment (WIA) and Frequency-Aware Multi-scale Loss (FAM).
arXiv Detail & Related papers (2024-03-18T11:20:11Z)
- Low-Dose CT Image Reconstruction by Fine-Tuning a UNet Pretrained for Gaussian Denoising for the Downstream Task of Image Enhancement [3.7960472831772765]
Computed Tomography (CT) is a widely used medical imaging modality, and reconstruction from low-dose CT data is a challenging task.
In this paper, we propose a less complex two-stage method for the reconstruction of low-dose CT (LDCT) images.
The proposed method achieves a shared top ranking in the LoDoPaB-CT challenge and a first position with respect to the SSIM metric.
arXiv Detail & Related papers (2024-03-06T08:51:09Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of Noise2Inverse (N2I) degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- Latent Diffusion Model for Medical Image Standardization and Enhancement [11.295078152769559]
DiffusionCT is a score-based DDPM model that transforms disparate non-standard distributions into a standardized form.
The architecture comprises a U-Net-based encoder-decoder, augmented by a DDPM model integrated at the bottleneck position (see the sketch after this list).
Empirical tests on patient CT images indicate notable improvements in image standardization using DiffusionCT.
arXiv Detail & Related papers (2023-10-08T17:11:14Z)
- DiffusionCT: Latent Diffusion Model for CT Image Standardization [9.312998333278802]
Existing CT image harmonization models rely on GAN-based supervised or semi-supervised learning, with limited performance.
This work addresses the issue of CT image harmonization using a new diffusion-based model, named DiffusionCT, to standardize CT images acquired from different vendors and protocols.
Experiments demonstrate a significant improvement in the performance of the standardization task using DiffusionCT.
arXiv Detail & Related papers (2023-01-20T22:13:48Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and utilize the enhanced results to cope with the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- CT Image Harmonization for Enhancing Radiomics Studies [10.643230630935781]
RadiomicGAN is developed to mitigate the discrepancy caused by using non-standard reconstruction kernels.
A novel training approach, called Dynamic Window-based Training, has been developed to adapt the pre-trained model to the medical imaging domain.
Model performance evaluated using 1401 radiomic features shows that RadiomicGAN clearly outperforms state-of-the-art image standardization models.
arXiv Detail & Related papers (2021-07-03T04:03:42Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages that train accurate diagnosis models by learning knowledge from multiple source tasks and from data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
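The two latent-diffusion entries above (DiffusionCT and the Latent Diffusion Model for standardization and enhancement) describe a U-Net-based encoder-decoder with a DDPM integrated at the bottleneck. The sketch below illustrates only that general pattern under stated assumptions: a toy convolutional encoder/decoder without skip connections, a linear noise schedule, and a small convolutional noise predictor; it is not the published architecture.

```python
# Minimal sketch of "diffusion at the bottleneck" for image standardization.
# Assumptions (not from the papers): toy encoder/decoder, linear beta schedule,
# and a simple convolutional noise predictor on the latent.
import torch
import torch.nn as nn

T = 100                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)     # linear noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

encoder = nn.Sequential(                  # image -> latent (downsample x4)
    nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(                  # latent -> image
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
)
denoiser = nn.Sequential(                 # predicts the noise added to the latent
    nn.Conv2d(32 + 1, 32, 3, padding=1), nn.ReLU(),   # +1 channel encodes the timestep
    nn.Conv2d(32, 32, 3, padding=1),
)

def diffusion_bottleneck_loss(x):
    """One training step's loss: encode, noise the latent, predict the noise."""
    z0 = encoder(x)                                        # clean latent
    t = torch.randint(0, T, (x.size(0),))                  # random timestep per sample
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(z0)
    zt = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * eps      # DDPM forward process
    t_map = (t.float() / T).view(-1, 1, 1, 1).expand(-1, 1, z0.size(2), z0.size(3))
    eps_hat = denoiser(torch.cat([zt, t_map], dim=1))      # noise prediction
    recon = decoder(z0)                                    # plain autoencoding path
    return nn.functional.mse_loss(eps_hat, eps) + nn.functional.mse_loss(recon, x)

if __name__ == "__main__":
    x = torch.randn(2, 1, 64, 64)          # stand-in for CT slices
    print(diffusion_bottleneck_loss(x).item())
```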
This list is automatically generated from the titles and abstracts of the papers on this site.