CT-SGAN: Computed Tomography Synthesis GAN
- URL: http://arxiv.org/abs/2110.09288v1
- Date: Thu, 14 Oct 2021 22:20:40 GMT
- Title: CT-SGAN: Computed Tomography Synthesis GAN
- Authors: Ahmad Pesaranghader, Yiping Wang, and Mohammad Havaei
- Abstract summary: We propose the CT-SGAN model that generates large-scale 3D synthetic CT-scan volumes when trained on a small dataset of chest CT-scans.
We show that CT-SGAN can significantly improve lung nodule detection accuracy by pre-training a classifier on a vast amount of synthetic data.
- Score: 4.765541373485143
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Diversity in data is critical for the successful training of deep learning
models. Leveraged by a recurrent generative adversarial network, we propose the
CT-SGAN model that generates large-scale 3D synthetic CT-scan volumes ($\geq
224\times224\times224$) when trained on a small dataset of chest CT-scans.
CT-SGAN offers an attractive solution to two major challenges facing machine
learning in medical imaging: the small amount of i.i.d. training data typically available, and
the restrictions on sharing patient data, which prevent larger and more diverse datasets
from being assembled rapidly. We evaluate the fidelity of the
generated images qualitatively and quantitatively using various metrics,
including the Fréchet Inception Distance and the Inception Score. We further show
that CT-SGAN can significantly improve lung nodule detection accuracy by
pre-training a classifier on a vast amount of synthetic data.
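The paper provides no code here; the following is a minimal, hypothetical PyTorch sketch of the pre-train-on-synthetic, fine-tune-on-real workflow the abstract describes, with a tiny 3D classifier and random tensors standing in for CT-SGAN outputs and real chest CT patches. The architecture, dataset sizes, and hyperparameters are illustrative assumptions, not the paper's setup.

    # Minimal sketch (not the paper's code): pre-train a small 3D nodule classifier
    # on synthetic CT volumes, then fine-tune on a smaller set of real volumes.
    # Random tensors stand in for CT-SGAN outputs and real scans.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    class Small3DClassifier(nn.Module):
        """Tiny 3D CNN; an assumption for illustration, not the paper's detector."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.head = nn.Linear(16, 2)  # nodule vs. no nodule

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def run_epoch(model, loader, optimizer, loss_fn):
        model.train()
        for volumes, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(volumes), labels)
            loss.backward()
            optimizer.step()

    # Placeholder data: 64 "synthetic" and 16 "real" 32^3 patches (downscaled for the sketch).
    synthetic = TensorDataset(torch.randn(64, 1, 32, 32, 32), torch.randint(0, 2, (64,)))
    real = TensorDataset(torch.randn(16, 1, 32, 32, 32), torch.randint(0, 2, (16,)))

    model = Small3DClassifier()
    loss_fn = nn.CrossEntropyLoss()

    # Stage 1: pre-train on abundant synthetic volumes.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(2):
        run_epoch(model, DataLoader(synthetic, batch_size=8, shuffle=True), opt, loss_fn)

    # Stage 2: fine-tune on the small real dataset at a lower learning rate.
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(2):
        run_epoch(model, DataLoader(real, batch_size=8, shuffle=True), opt, loss_fn)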
Related papers
- 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z) - CC-DCNet: Dynamic Convolutional Neural Network with Contrastive Constraints for Identifying Lung Cancer Subtypes on Multi-modality Images [13.655407979403945]
We propose a novel deep learning network designed to accurately classify lung cancer subtype with multi-dimensional and multi-modality images.
The strength of the proposed model lies in its ability to dynamically process both paired CT-pathological image sets and independent CT image sets.
We also develop a contrastive constraint module, which quantitatively maps the cross-modality associations through network training.
arXiv Detail & Related papers (2024-07-18T01:42:00Z) - Swin-Tempo: Temporal-Aware Lung Nodule Detection in CT Scans as Video
Sequences Using Swin Transformer-Enhanced UNet [2.7547288571938795]
We present an innovative model that harnesses the strengths of both convolutional neural networks and vision transformers.
Inspired by object detection in videos, we treat each 3D CT image as a video, individual slices as frames, and lung nodules as objects, enabling a time-series application.
arXiv Detail & Related papers (2023-10-05T07:48:55Z) - Enhancing Super-Resolution Networks through Realistic Thick-Slice CT Simulation [4.43162303545687]
Deep learning-based Generative Models have the potential to convert low-resolution CT images into high-resolution counterparts without long acquisition times and increased radiation exposure in thin-slice CT imaging.
However, procuring appropriate training data for these Super-Resolution (SR) models is challenging.
Previous SR research has simulated thick-slice CT images from thin-slice CT images to create training pairs.
We introduce a simple yet realistic method to generate thick CT images from thin-slice CT images, facilitating the creation of training pairs for SR algorithms (a baseline slice-averaging sketch appears after this list).
arXiv Detail & Related papers (2023-07-02T11:09:08Z) - Dual Multi-scale Mean Teacher Network for Semi-supervised Infection
Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely on 2D CT images, which lack 3D sequential constraints.
Existing 3D CT segmentation methods focus on single-scale representations and do not capture multiple receptive field sizes across the 3D volume.
arXiv Detail & Related papers (2022-11-10T13:11:21Z) - Self-Attention Generative Adversarial Network for Iterative
Reconstruction of CT Images [0.9208007322096533]
The aim of this study is to train a single neural network to reconstruct high-quality CT images from noisy or incomplete data.
The network includes a self-attention block to model long-range dependencies in the data.
Our approach is shown to have comparable overall performance to CIRCLE GAN, while outperforming the other two approaches.
arXiv Detail & Related papers (2021-12-23T19:20:38Z) - Incremental Cross-view Mutual Distillation for Self-supervised Medical
CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z) - Image Synthesis for Data Augmentation in Medical CT using Deep
Reinforcement Learning [31.677682150726383]
We show that our method bears high promise for generating novel and anatomically accurate high-resolution CT images in large and diverse quantities.
Our approach is specifically designed to work even with small image datasets, which is desirable given the limited amount of image data many researchers have available.
arXiv Detail & Related papers (2021-03-18T19:47:11Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z) - M3Lung-Sys: A Deep Learning System for Multi-Class Lung Pneumonia
Screening from CT Imaging [85.00066186644466]
We propose a Multi-task Multi-slice Deep Learning System (M3Lung-Sys) for multi-class lung pneumonia screening from CT imaging.
In addition to distinguishing COVID-19 from Healthy, H1N1, and CAP cases, our M3Lung-Sys is also able to locate the areas of relevant lesions.
arXiv Detail & Related papers (2020-10-07T06:22:24Z) - Synergistic Learning of Lung Lobe Segmentation and Hierarchical
Multi-Instance Classification for Automated Severity Assessment of COVID-19
in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment (a generic multi-task sketch appears after this list).
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
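As a companion to the thick-slice simulation entry above (Enhancing Super-Resolution Networks through Realistic Thick-Slice CT Simulation), the sketch below only illustrates the baseline idea of forming (low-resolution, high-resolution) training pairs by averaging adjacent thin slices along the z-axis; it is not that paper's "realistic" simulation, and the volume shape and averaging factor are assumptions.

    # Illustrative baseline only (not the paper's simulation method):
    # average adjacent thin slices along z to mimic a thick-slice acquisition,
    # yielding (low-res, high-res) training pairs for a super-resolution model.
    import numpy as np

    def simulate_thick_slices(thin_volume: np.ndarray, factor: int = 4) -> np.ndarray:
        """thin_volume: (Z, H, W) thin-slice CT; returns (Z // factor, H, W)."""
        z = (thin_volume.shape[0] // factor) * factor            # drop leftover slices
        grouped = thin_volume[:z].reshape(-1, factor, *thin_volume.shape[1:])
        return grouped.mean(axis=1)                              # one thick slice per group

    # Placeholder volume standing in for a real thin-slice scan.
    hr = np.random.randn(64, 128, 128).astype(np.float32)        # thin slices (assumed spacing)
    lr = simulate_thick_slices(hr, factor=4)                     # simulated thick slices
    print(hr.shape, lr.shape)                                    # (64, 128, 128) (16, 128, 128)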
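For the last entry (M$^2$UNet), the sketch below shows only the general shared-encoder multi-task layout described there, with a segmentation sub-network and a classification sub-network trained jointly; the layers, patch size, and loss weighting are illustrative assumptions and do not reproduce that paper's architecture.

    # Generic shared-encoder multi-task sketch (not M^2UNet itself): one encoder,
    # a segmentation decoder, and a classification head trained with a weighted sum of losses.
    import torch
    import torch.nn as nn

    class MultiTaskNet(nn.Module):
        def __init__(self, n_classes: int = 4):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Segmentation sub-network: upsample back to input resolution (binary mask here).
            self.seg_head = nn.Sequential(
                nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose3d(16, 1, 2, stride=2),
            )
            # Classification sub-network: pooled features -> severity logits.
            self.cls_head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, n_classes))

        def forward(self, x):
            feats = self.encoder(x)
            return self.seg_head(feats), self.cls_head(feats)

    model = MultiTaskNet()
    x = torch.randn(2, 1, 32, 64, 64)                  # placeholder CT patches
    seg_logits, cls_logits = model(x)
    seg_loss = nn.functional.binary_cross_entropy_with_logits(
        seg_logits, torch.rand_like(seg_logits).round())
    cls_loss = nn.functional.cross_entropy(cls_logits, torch.randint(0, 4, (2,)))
    loss = seg_loss + 0.5 * cls_loss                   # loss weighting is an assumption
    loss.backward()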