3DGAUnet: 3D generative adversarial networks with a 3D U-Net based
generator to achieve the accurate and effective synthesis of clinical tumor
image data for pancreatic cancer
- URL: http://arxiv.org/abs/2311.05697v2
- Date: Mon, 27 Nov 2023 15:08:03 GMT
- Authors: Yu Shi, Hannah Tang, Michael Baine, Michael A. Hollingsworth, Huijing
Du, Dandan Zheng, Chi Zhang, Hongfeng Yu
- Abstract summary: We develop a new GAN-based model, named 3DGAUnet, for generating realistic 3D CT images of PDAC tumors and pancreatic tissue.
Our innovation is to develop a 3D U-Net architecture for the generator to improve shape and texture learning for PDAC tumors and pancreatic tissue.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pancreatic ductal adenocarcinoma (PDAC) presents a critical global health
challenge, and early detection is crucial for improving the 5-year survival
rate. Recent medical imaging and computational algorithm advances offer
potential solutions for early diagnosis. Deep learning, particularly in the
form of convolutional neural networks (CNNs), has demonstrated success in
medical image analysis tasks, including classification and segmentation.
However, the limited availability of clinical data for training continues to
pose a significant obstacle. Data augmentation, generative
adversarial networks (GANs), and cross-validation are potential techniques to
address this limitation and improve model performance, but effective solutions
are still rare for 3D PDAC, where contrast is especially poor owing to the high
heterogeneity in both tumor and background tissues. In this study, we developed
a new GAN-based model, named 3DGAUnet, for generating realistic 3D CT images of
PDAC tumors and pancreatic tissue, which captures the inter-slice continuity
that existing 2D CT image synthesis models lack. Our innovation is to
develop a 3D U-Net architecture for the generator to improve shape and texture
learning for PDAC tumors and pancreatic tissue. Our approach offers a promising
path to tackle the urgent requirement for creative and synergistic methods to
combat PDAC. The development of this GAN-based model has the potential to
alleviate data scarcity issues, elevate the quality of synthesized data, and
thereby facilitate the progression of deep learning models to enhance the
accuracy and early detection of PDAC tumors, which could profoundly impact
patient outcomes. Furthermore, this model has the potential to be adapted to
other types of solid tumors, hence making significant contributions to the
field of medical imaging in terms of image processing models.
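The paper itself gives no implementation details beyond the architecture description above. As a hedged illustration of the shape bookkeeping behind a 3D U-Net generator (the encoder halves each spatial dimension per level, and the decoder doubles it while concatenating the matching encoder feature map as a skip connection), here is a minimal plain-Python sketch; the level count and channel widths are illustrative assumptions, not the configuration used by 3DGAUnet:

```python
# Sketch of the tensor-shape flow through a 3D U-Net generator.
# Shapes are (depth, height, width, channels); layer counts and
# channel widths are assumptions for illustration only.

def down(shape, channels):
    """Conv block + 2x downsampling: halve D, H, W; set channel width."""
    d, h, w, _ = shape
    return (d // 2, h // 2, w // 2, channels)

def up(shape, skip, channels):
    """2x upsampling, then concatenate the matching encoder skip map."""
    d, h, w, _ = shape
    assert (2 * d, 2 * h, 2 * w) == skip[:3], "skip must match upsampled size"
    return (2 * d, 2 * h, 2 * w, channels + skip[3])

def unet3d_shapes(input_shape=(64, 64, 64, 1), widths=(32, 64, 128)):
    # Encoder: record the feature map at each resolution for the skips.
    skips, x = [], input_shape
    for c in widths:
        skips.append((x[0], x[1], x[2], c))  # features before downsampling
        x = down(x, c)
    # Decoder: mirror the encoder, consuming skips in reverse order.
    for skip, c in zip(reversed(skips), reversed(widths)):
        x = up(x, skip, c)
    # A final 1-channel convolution maps back to a CT volume.
    return (x[0], x[1], x[2], 1)

print(unet3d_shapes())  # a 64^3 single-channel volume comes back at 64^3
```

Because every decoder level concatenates a same-resolution encoder map, the generator's output volume matches the input volume voxel-for-voxel, which is what lets such a generator preserve 3D tumor shape and texture across slices.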
Related papers
- Enhancing Brain Tumor Classification Using TrAdaBoost and Multi-Classifier Deep Learning Approaches [0.0]
Brain tumors pose a serious health threat due to their rapid growth and potential for metastasis.
This study aims to improve the efficiency and accuracy of brain tumor classification.
Our approach combines state-of-the-art deep learning algorithms, including the Vision Transformer (ViT), Capsule Neural Network (CapsNet), and convolutional neural networks (CNNs) such as ResNet-152 and VGG16.
arXiv Detail & Related papers (2024-10-31T07:28:06Z) - 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z) - Cancer-Net PCa-Gen: Synthesis of Realistic Prostate Diffusion Weighted
Imaging Data via Anatomic-Conditional Controlled Latent Diffusion [68.45407109385306]
In Canada, prostate cancer is the most common form of cancer in men and accounted for 20% of new cancer cases for this demographic in 2022.
There has been significant interest in the development of deep neural networks for prostate cancer diagnosis, prognosis, and treatment planning using diffusion weighted imaging (DWI) data.
In this study, we explore the efficacy of latent diffusion for generating realistic prostate DWI data through the introduction of an anatomic-conditional controlled latent diffusion strategy.
arXiv Detail & Related papers (2023-11-30T15:11:03Z) - Slice-level Detection of Intracranial Hemorrhage on CT Using Deep
Descriptors of Adjacent Slices [0.31317409221921133]
We propose a new strategy to train slice-level classifiers on CT scans based on the descriptors of the adjacent slices along the axis.
We obtain a single model in the top 4% best-performing solutions of the RSNA Intracranial Hemorrhage dataset challenge.
The proposed method is general and can be applied to other 3D medical diagnosis tasks such as MRI imaging.
arXiv Detail & Related papers (2022-08-05T23:20:37Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - ROCT-Net: A new ensemble deep convolutional model with improved spatial
resolution learning for detecting common diseases from retinal OCT images [0.0]
This paper presents a new enhanced deep ensemble convolutional neural network for detecting retinal diseases from OCT images.
Our model generates rich and multi-resolution features by employing the learning architectures of two robust convolutional models.
Our experiments on two datasets, comparing our model with several well-known deep convolutional neural networks, show that our architecture can increase classification accuracy by up to 5%.
arXiv Detail & Related papers (2022-03-03T17:51:01Z) - A Point Cloud Generative Model via Tree-Structured Graph Convolutions
for 3D Brain Shape Reconstruction [31.436531681473753]
It is almost impossible to obtain the intraoperative 3D shape information by using physical methods such as sensor scanning.
In this paper, a general generative adversarial network (GAN) architecture is proposed to reconstruct the 3D point clouds (PCs) of brains by using one single 2D image.
arXiv Detail & Related papers (2021-07-21T07:57:37Z) - Free-form tumor synthesis in computed tomography images via richer
generative adversarial network [25.20811195237978]
We propose a new richer generative adversarial network for free-form 3D tumor/lesion synthesis in computed tomography (CT) images.
The network is composed of a new richer convolutional feature enhanced dilated-gated generator (RicherDG) and a hybrid loss function.
arXiv Detail & Related papers (2021-04-20T00:49:35Z) - Automated Model Design and Benchmarking of 3D Deep Learning Models for
COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for the 3D DL models for 3D chest CT scans classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z) - Revisiting 3D Context Modeling with Supervised Pre-training for
Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z) - SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on
Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistency Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image.
We train classification models using real images with classic data augmentation methods, and separately using the synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.