Using a Generative Adversarial Network for CT Normalization and its
Impact on Radiomic Features
- URL: http://arxiv.org/abs/2001.08741v1
- Date: Wed, 22 Jan 2020 23:41:29 GMT
- Title: Using a Generative Adversarial Network for CT Normalization and its
Impact on Radiomic Features
- Authors: Leihao Wei and Yannan Lin and William Hsu
- Abstract summary: Radiomic features are sensitive to differences in acquisitions due to variations in dose levels and slice thickness.
A 3D generative adversarial network (GAN) was used to normalize reduced dose, thick slice (2.0mm) images to normal dose (100%), thinner slice (1.0mm) images.
- Score: 3.4548443472506194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer-Aided-Diagnosis (CADx) systems assist radiologists with identifying
and classifying potentially malignant pulmonary nodules on chest CT scans using
morphology and texture-based (radiomic) features. However, radiomic features
are sensitive to differences in acquisitions due to variations in dose levels
and slice thickness. This study investigates the feasibility of generating a
normalized scan from heterogeneous CT scans as input. We obtained projection
data from 40 low-dose chest CT scans, simulating acquisitions at 10%, 25% and
50% dose and reconstructing the scans at 1.0mm and 2.0mm slice thickness. A 3D
generative adversarial network (GAN) was used to simultaneously normalize
reduced dose, thick slice (2.0mm) images to normal dose (100%), thinner slice
(1.0mm) images. We evaluated the normalized image quality using peak
signal-to-noise ratio (PSNR), structural similarity index (SSIM) and Learned
Perceptual Image Patch Similarity (LPIPS). Our GAN improved perceptual
similarity by 35%, compared to a baseline CNN method. Our analysis also shows
that the GAN-based approach led to a significantly smaller error (p-value <
0.05) in nine studied radiomic features. These results indicate that GANs can
be used to normalize heterogeneous CT images and reduce the variability in
radiomic feature values.
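The image-quality metrics named above can be illustrated with a minimal NumPy sketch (this is not the authors' evaluation code): PSNR computed from the mean squared error, and a simplified global (non-windowed) variant of SSIM.

```python
import numpy as np

def psnr(x, y, data_range=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 20.0 * np.log10(data_range) - 10.0 * np.log10(mse)

def global_ssim(x, y, data_range=255.0):
    """Simplified SSIM computed over the whole image (the standard
    metric averages over local windows instead)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = ref + 1.0                       # uniform offset: MSE is exactly 1
print(round(psnr(ref, noisy), 2))       # 20*log10(255) ≈ 48.13 dB
print(global_ssim(ref, ref))            # identical images ≈ 1.0
```

LPIPS, by contrast, compares deep network activations rather than pixel statistics, so it requires a pretrained network and is not reproduced here.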
Related papers
- Deep-Motion-Net: GNN-based volumetric organ shape reconstruction from single-view 2D projections [1.8189671456038365]
We propose an end-to-end graph neural network architecture that enables 3D organ shape reconstruction during radiotherapy.
The proposed model learns the mesh regression from a patient-specific template and deep features extracted from kV images at arbitrary projection angles.
Overall framework was tested quantitatively on synthetic respiratory motion scenarios and qualitatively on in-treatment images acquired over full scan series for liver cancer patients.
arXiv Detail & Related papers (2024-07-09T09:07:18Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for low-dose CT (LDCT) in which ground truth is not required to train the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- On the Localization of Ultrasound Image Slices within Point Distribution Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z)
- Evaluation of Synthetically Generated CT for use in Transcranial Focused Ultrasound Procedures [5.921808547303054]
Transcranial focused ultrasound (tFUS) is a therapeutic ultrasound method that focuses sound through the skull to a small region noninvasively and often under MRI guidance.
CT imaging is used to estimate the acoustic properties that vary between individual skulls to enable effective focusing during tFUS procedures.
Here, we synthesized CT images from routinely acquired T1-weighted MRI using a 3D patch-based conditional generative adversarial network (cGAN).
We compared the performance of synthetic CT (sCT) to real CT (rCT) images for tFUS planning using Kranion and for simulations using the acoustic toolbox.
arXiv Detail & Related papers (2022-10-26T15:15:24Z)
- Automated SSIM Regression for Detection and Quantification of Motion Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts in magnetic resonance brain images are a crucial issue.
The assessment of MR image quality is fundamental before proceeding with the clinical diagnosis.
An automated image quality assessment based on the structural similarity index (SSIM) regression has been proposed here.
arXiv Detail & Related papers (2022-06-14T10:16:54Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
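The wavelet-encoding idea can be illustrated with a one-level 2D Haar transform; this is a generic NumPy sketch, not the paper's method, and it uses a simple averaging normalization rather than the orthonormal 1/√2 convention.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT returning the four subbands
    (LL: approximation; LH, HL, HH: detail). Assumes even height and width."""
    img = img.astype(np.float64)
    # Horizontal pass: average / difference of adjacent column pairs
    lo = (img[:, ::2] + img[:, 1::2]) / 2.0
    hi = (img[:, ::2] - img[:, 1::2]) / 2.0
    # Vertical pass: repeat on adjacent row pairs
    ll = (lo[::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16, dtype=np.float64).reshape(4, 4)  # smooth gradient image
ll, lh, hl, hh = haar_dwt2(img)
# For this linear ramp, the diagonal detail band HH is exactly zero.
```

Subband energies like these give a compact frequency-localized description of an image, which is the kind of encoding such DWT-based methods build on.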
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Self-Attention Generative Adversarial Network for Iterative Reconstruction of CT Images [0.9208007322096533]
The aim of this study is to train a single neural network to reconstruct high-quality CT images from noisy or incomplete data.
The network includes a self-attention block to model long-range dependencies in the data.
Our approach is shown to have comparable overall performance to CIRCLE GAN, while outperforming the other two approaches.
arXiv Detail & Related papers (2021-12-23T19:20:38Z)
- Explainable multiple abnormality classification of chest CT volumes with AxialNet and HiResCAM [89.2175350956813]
We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images.
We propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of top slices for each abnormality.
We then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions.
arXiv Detail & Related papers (2021-11-24T01:14:33Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that requires neither large annotated datasets nor backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- Generative Models Improve Radiomics Performance in Different Tasks and Different Datasets: An Experimental Study [3.040206021972938]
Radiomics is an area of research focusing on high throughput feature extraction from medical images.
Generative models can improve the performance of low dose CT-based radiomics in different tasks.
arXiv Detail & Related papers (2021-09-06T06:01:21Z)
- Label-Free Segmentation of COVID-19 Lesions in Lung CT [17.639558085838583]
We present a label-free approach for segmenting COVID-19 lesions in CT via pixel-level anomaly modeling.
Our modeling is inspired by the observation that the parts of the tracheae and vessels, which lie in the high-intensity range where lesions occur, exhibit strong patterns.
Our experiments on three different datasets validate the effectiveness of NormNet.
arXiv Detail & Related papers (2020-09-08T12:38:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.