Free-form tumor synthesis in computed tomography images via richer
generative adversarial network
- URL: http://arxiv.org/abs/2104.09701v1
- Date: Tue, 20 Apr 2021 00:49:35 GMT
- Title: Free-form tumor synthesis in computed tomography images via richer
generative adversarial network
- Authors: Qiangguo Jin and Hui Cui and Changming Sun and Zhaopeng Meng and Ran
Su
- Abstract summary: We propose a new richer generative adversarial network for free-form 3D tumor/lesion synthesis in computed tomography (CT) images.
The network is composed of a new richer convolutional feature enhanced dilated-gated generator (RicherDG) and a hybrid loss function.
- Score: 25.20811195237978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The insufficiency of annotated medical imaging scans for cancer makes it
challenging to train and validate data-hungry deep learning models in precision
oncology. We propose a new richer generative adversarial network for free-form
3D tumor/lesion synthesis in computed tomography (CT) images. The network is
composed of a new richer convolutional feature enhanced dilated-gated generator
(RicherDG) and a hybrid loss function. The RicherDG has dilated-gated
convolution layers to enable tumor inpainting and to enlarge receptive fields;
and it has a novel richer convolutional feature association branch to recover
multi-scale convolutional features, especially from uncertain boundaries between
tumor and surrounding healthy tissues. The hybrid loss function, which consists
of a diverse range of losses, is designed to aggregate complementary
information to improve optimization.
We perform a comprehensive evaluation of the synthesis results on a wide
range of public CT image datasets covering the liver, kidney tumors, and lung
nodules. The qualitative and quantitative evaluations and the ablation study
demonstrate improved synthesis results over advanced tumor synthesis methods.
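The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the two components it names: a dilated-gated convolution block and a hybrid loss that sums complementary terms. The module names, dilation rate, loss terms, and weights below are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch, assuming 3D CT volumes of shape (B, C, D, H, W).
# All names, the dilation rate, and the loss weights are assumptions
# for illustration, not the paper's actual architecture or settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedGatedConv3d(nn.Module):
    """Dilated 3D convolution whose output is modulated by a learned soft gate."""

    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        pad = dilation  # keeps spatial size for a 3x3x3 kernel
        self.feature = nn.Conv3d(in_ch, out_ch, 3, padding=pad, dilation=dilation)
        self.gate = nn.Conv3d(in_ch, out_ch, 3, padding=pad, dilation=dilation)

    def forward(self, x):
        # The gate in [0, 1] decides how much of each feature passes through,
        # which is a common trick for inpainting a masked (tumor) region.
        return torch.sigmoid(self.gate(x)) * F.leaky_relu(self.feature(x), 0.2)


def hybrid_loss(fake, real, d_fake_logits, w_adv=0.01, w_rec=1.0, w_edge=0.1):
    """Weighted sum of complementary terms: adversarial + voxel-wise + boundary."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits)
    )
    rec = F.l1_loss(fake, real)
    # Crude boundary term: L1 on finite-difference gradients along D, H, W,
    # standing in for the paper's emphasis on uncertain tumor boundaries.
    edge = sum(
        F.l1_loss(torch.diff(fake, dim=d), torch.diff(real, dim=d)) for d in (2, 3, 4)
    )
    return w_adv * adv + w_rec * rec + w_edge * edge
```

The gating mechanism and the weighted aggregation of losses mirror the two ideas highlighted in the abstract; the actual generator additionally uses a richer convolutional feature association branch, which is not reproduced here.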
Related papers
- Enhancing Brain Tumor Classification Using TrAdaBoost and Multi-Classifier Deep Learning Approaches [0.0]
Brain tumors pose a serious health threat due to their rapid growth and potential for metastasis.
This study aims to improve the efficiency and accuracy of brain tumor classification.
Our approach combines state-of-the-art deep learning algorithms, including the Vision Transformer (ViT), Capsule Neural Network (CapsNet), and convolutional neural networks (CNNs) such as ResNet-152 and VGG16.
arXiv Detail & Related papers (2024-10-31T07:28:06Z) - Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network via the combination of convolutional neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z) - 3DGAUnet: 3D generative adversarial networks with a 3D U-Net based
generator to achieve the accurate and effective synthesis of clinical tumor
image data for pancreatic cancer [6.821916296001028]
We develop a new GAN-based model, named 3DGAUnet, for generating realistic 3D CT images of PDAC tumors and pancreatic tissue.
Our innovation is to develop a 3D U-Net architecture for the generator to improve shape and texture learning for PDAC tumors and pancreatic tissue.
arXiv Detail & Related papers (2023-11-09T19:10:28Z) - Breast Ultrasound Tumor Classification Using a Hybrid Multitask
CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z) - Generative Adversarial Networks for Brain Images Synthesis: A Review [2.609784101826762]
In medical imaging, image synthesis is the process of estimating one image (sequence, modality) from another image (sequence, modality).
The generative adversarial network (GAN) is one of the most popular generative deep learning methods.
We summarized the recent developments of GANs for cross-modality brain image synthesis including CT to PET, CT to MRI, MRI to PET, and vice versa.
arXiv Detail & Related papers (2023-05-16T17:28:06Z) - Image Synthesis with Disentangled Attributes for Chest X-Ray Nodule
Augmentation and Detection [52.93342510469636]
Lung nodule detection in chest X-ray (CXR) images is common in the early screening of lung cancer.
Deep-learning-based Computer-Assisted Diagnosis (CAD) systems can support radiologists for nodule screening in CXR.
To alleviate the limited availability of such datasets, lung nodule synthesis methods are proposed for the sake of data augmentation.
arXiv Detail & Related papers (2022-07-19T16:38:48Z) - Multi-modal learning for predicting the genotype of glioma [14.93152817415408]
The isocitrate dehydrogenase (IDH) gene mutation is an essential biomarker for the diagnosis and prognosis of glioma.
It is promising to better predict glioma genotype by integrating focal tumor image and geometric features with brain network features derived from MRI.
We propose a multi-modal learning framework using three separate encoders to extract features of focal tumor image, tumor geometrics and global brain networks.
arXiv Detail & Related papers (2022-03-21T10:20:04Z) - SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on
Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image.
We train one classification model using real images with classic data augmentation methods and another using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z) - Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo
Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z) - Diffusion-Weighted Magnetic Resonance Brain Images Generation with
Generative Adversarial Networks and Variational Autoencoders: A Comparison
Study [55.78588835407174]
We show that high quality, diverse and realistic-looking diffusion-weighted magnetic resonance images can be synthesized using deep generative models.
We present two networks, the Introspective Variational Autoencoder and the Style-Based GAN, that qualify for data augmentation in the medical field.
arXiv Detail & Related papers (2020-06-24T18:00:01Z) - A Novel and Efficient Tumor Detection Framework for Pancreatic Cancer
via CT Images [21.627818410241552]
A novel and efficient pancreatic tumor detection framework is proposed in this paper.
The contribution of the proposed method mainly consists of three components: Augmented Feature Pyramid networks, Self-adaptive Feature Fusion and a Dependencies Computation Module.
Experimental results achieve competitive performance in detection with an AUC of 0.9455, which, to the best of our knowledge, outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2020-02-11T15:48:22Z)