Free-form tumor synthesis in computed tomography images via richer
generative adversarial network
- URL: http://arxiv.org/abs/2104.09701v1
- Date: Tue, 20 Apr 2021 00:49:35 GMT
- Title: Free-form tumor synthesis in computed tomography images via richer
generative adversarial network
- Authors: Qiangguo Jin and Hui Cui and Changming Sun and Zhaopeng Meng and Ran
Su
- Abstract summary: We propose a new richer generative adversarial network for free-form 3D tumor/lesion synthesis in computed tomography (CT) images.
The network is composed of a new richer convolutional feature enhanced dilated-gated generator (RicherDG) and a hybrid loss function.
- Score: 25.20811195237978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The insufficiency of annotated medical imaging scans for cancer makes it
challenging to train and validate data-hungry deep learning models in precision
oncology. We propose a new richer generative adversarial network for free-form
3D tumor/lesion synthesis in computed tomography (CT) images. The network is
composed of a new richer convolutional feature enhanced dilated-gated generator
(RicherDG) and a hybrid loss function. The RicherDG has dilated-gated
convolution layers to enable tumor painting and to enlarge receptive fields;
and it has a novel richer convolutional feature association branch to recover
multi-scale convolutional features especially from uncertain boundaries between
tumor and surrounding healthy tissues. The hybrid loss function, which consists
of a diverse range of losses, is designed to aggregate complementary
information to improve optimization.
We perform a comprehensive evaluation of the synthesis results on a wide
range of public CT image datasets covering liver tumors, kidney tumors, and lung
nodules. Qualitative and quantitative evaluations and an ablation study
demonstrate improved synthesis results over advanced tumor synthesis
methods.
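The dilated-gated convolution described above pairs a dilated filter (which enlarges the receptive field) with a sigmoid gate that softly masks features, which is what enables free-form tumor painting. A minimal 1D NumPy sketch of the idea follows; the paper's generator uses 3D layers, and the function name and weights here are illustrative, not the authors' implementation:

```python
import numpy as np

def dilated_gated_conv1d(x, w_feat, w_gate, dilation=2):
    """Gated convolution with dilation:
    out[i] = conv(x; w_feat)[i] * sigmoid(conv(x; w_gate)[i])."""
    k = len(w_feat)
    span = (k - 1) * dilation           # receptive field minus one
    out = np.empty(len(x) - span)       # "valid" output length
    for i in range(len(out)):
        taps = x[i : i + span + 1 : dilation]      # dilated sampling of the input
        feat = taps @ w_feat                       # feature response
        gate = 1.0 / (1.0 + np.exp(-(taps @ w_gate)))  # soft gate in (0, 1)
        out[i] = feat * gate
    return out
```

With the gate weights at zero the gate is 0.5 everywhere; a trained gate learns to suppress responses outside the region being painted.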
Related papers
- Evaluation of Vision Transformers for Multimodal Image Classification: A Case Study on Brain, Lung, and Kidney Tumors [0.0]
This work evaluates the performance of Vision Transformer architectures, including Swin Transformer and MaxViT, on several datasets.
We used three training sets of images with brain, lung, and kidney tumors.
Swin Transformer provided high accuracy, achieving up to 99.9% for kidney tumor classification and 99.3% accuracy in a combined dataset.
arXiv Detail & Related papers (2025-02-08T10:35:51Z)
- VisionLLM-based Multimodal Fusion Network for Glottic Carcinoma Early Detection [3.0755269719204064]
We propose a vision large language model-based (VisionLLM-based) multimodal fusion network for glottic carcinoma detection, known as MMGC-Net.
We leverage an image encoder and additional Q-Former to extract vision embeddings and the Large Language Model Meta AI (Llama3) to obtain text embeddings.
These modalities are then integrated through a laryngeal feature fusion block, enabling a comprehensive integration of image and text features, thereby improving the glottic carcinoma identification performance.
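As a rough illustration of the general late-fusion pattern described above (concatenate modality embeddings, then project), the following sketch is not MMGC-Net's actual laryngeal feature fusion block; the function name, shapes, and weights are all invented for illustration:

```python
import numpy as np

def fuse_embeddings(vision_emb, text_emb, W, b):
    """Toy fusion block: concatenate vision and text embeddings,
    then apply a learned linear projection followed by ReLU."""
    z = np.concatenate([vision_emb, text_emb])  # joint multimodal vector
    return np.maximum(W @ z + b, 0.0)           # projection + ReLU
```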
arXiv Detail & Related papers (2024-12-24T03:19:29Z)
- Enhancing Brain Tumor Classification Using TrAdaBoost and Multi-Classifier Deep Learning Approaches [0.0]
Brain tumors pose a serious health threat due to their rapid growth and potential for metastasis.
This study aims to improve the efficiency and accuracy of brain tumor classification.
Our approach combines state-of-the-art deep learning algorithms, including the Vision Transformer (ViT), Capsule Neural Network (CapsNet), and convolutional neural networks (CNNs) such as ResNet-152 and VGG16.
arXiv Detail & Related papers (2024-10-31T07:28:06Z)
- Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network combining convolutional neural network (CNN) and transformer layers.
Experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared with state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z)
- 3DGAUnet: 3D generative adversarial networks with a 3D U-Net based generator to achieve the accurate and effective synthesis of clinical tumor image data for pancreatic cancer [6.821916296001028]
We develop a new GAN-based model, named 3DGAUnet, for generating realistic 3D CT images of PDAC tumors and pancreatic tissue.
Our innovation is to develop a 3D U-Net architecture for the generator to improve shape and texture learning for PDAC tumors and pancreatic tissue.
arXiv Detail & Related papers (2023-11-09T19:10:28Z)
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- Generative Adversarial Networks for Brain Images Synthesis: A Review [2.609784101826762]
In medical imaging, image synthesis is the estimation of one image (sequence, modality) from another image (sequence, modality).
The generative adversarial network (GAN) is one of the most popular generative deep learning methods for this task.
We summarized the recent developments of GANs for cross-modality brain image synthesis including CT to PET, CT to MRI, MRI to PET, and vice versa.
arXiv Detail & Related papers (2023-05-16T17:28:06Z)
- Image Synthesis with Disentangled Attributes for Chest X-Ray Nodule Augmentation and Detection [52.93342510469636]
Lung nodule detection in chest X-ray (CXR) images is commonly used for early screening of lung cancers.
Deep-learning-based Computer-Assisted Diagnosis (CAD) systems can support radiologists for nodule screening in CXR.
To alleviate the limited availability of such datasets, lung nodule synthesis methods are proposed for the sake of data augmentation.
arXiv Detail & Related papers (2022-07-19T16:38:48Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistency Generative Adversarial Networks (GANs).
The proposed GAN-based model can generate a tumor image from a normal image and, in turn, a normal image from a tumor image.
We train one classification model using real images with classic data augmentation methods and another using synthetic images.
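The bidirectional normal-to-tumor and tumor-to-normal translation described above is the cycle-consistency idea from CycleGAN-style training: mapping an image to the other domain and back should recover the original. A minimal sketch of the L1 cycle loss, where the generator arguments are placeholders rather than the paper's networks:

```python
import numpy as np

def cycle_consistency_loss(x_normal, y_tumor, G_nt, G_tn):
    """L1 cycle loss: normal -> tumor -> normal (and the reverse)
    should reconstruct the original input."""
    loss_n = np.mean(np.abs(G_tn(G_nt(x_normal)) - x_normal))  # forward cycle
    loss_t = np.mean(np.abs(G_nt(G_tn(y_tumor)) - y_tumor))    # backward cycle
    return loss_n + loss_t
```

Identity generators give a loss of zero; during training this term is added to the adversarial losses to constrain both mappings.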
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
- Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
- Diffusion-Weighted Magnetic Resonance Brain Images Generation with Generative Adversarial Networks and Variational Autoencoders: A Comparison Study [55.78588835407174]
We show that high quality, diverse and realistic-looking diffusion-weighted magnetic resonance images can be synthesized using deep generative models.
We present two networks, the Introspective Variational Autoencoder and the Style-Based GAN, that qualify for data augmentation in the medical field.
arXiv Detail & Related papers (2020-06-24T18:00:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.