RADIOGAN: Deep Convolutional Conditional Generative Adversarial Network To
Generate PET Images
- URL: http://arxiv.org/abs/2003.08663v1
- Date: Thu, 19 Mar 2020 10:14:40 GMT
- Title: RADIOGAN: Deep Convolutional Conditional Generative Adversarial
Network To Generate PET Images
- Authors: Amine Amyar, Su Ruan, Pierre Vera, Pierre Decazes, and Romain
Modzelewski
- Abstract summary: We propose a deep convolutional conditional generative adversarial network to generate MIP (maximum intensity projection) positron emission tomography (PET) images.
The advantage of our proposed method consists of one model that is capable of generating different classes of lesions trained on a small sample size for each class of lesion.
In addition, we show that a walk through a latent space can be used as a tool to evaluate the images generated.
- Score: 3.947298454012977
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the biggest challenges in medical imaging is the lack of
data. Classical data augmentation methods are proven useful but remain limited
due to the huge variation in images. Generative adversarial networks (GANs)
are a promising way to address this problem; however, it is challenging to
train one model to generate different classes of lesions. In this paper, we
propose a deep convolutional conditional generative adversarial network to
generate MIP (maximum intensity projection) positron emission tomography (PET)
images, which are 2D images that represent a 3D volume for fast
interpretation, conditioned on different lesion classes or non-lesion
(normal). The advantage of our proposed method is a single model capable of
generating different classes of lesions while trained on a small sample size
for each class, with very promising results. In addition, we show that a walk
through the latent space can be used as a tool to evaluate the generated
images.
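The two ideas in the abstract can be sketched in code. The snippet below is an illustrative PyTorch sketch, not the authors' implementation: a DCGAN-style generator conditioned on a class label (lesion type or normal) via a label embedding concatenated to the latent vector, plus a linear latent-space walk between two latent points for a fixed class, which is the kind of interpolation the paper uses to assess generated images. All layer sizes and names (`ConditionalGenerator`, `latent_walk`, `z_dim`, `feat`) are hypothetical choices for the sketch.

```python
# Illustrative conditional DCGAN generator sketch (not the paper's exact model).
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, z_dim=100, n_classes=5, img_channels=1, feat=64):
        super().__init__()
        # Embed the class label and concatenate it with the latent vector.
        self.label_emb = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(
            # input: (z_dim + n_classes) x 1 x 1
            nn.ConvTranspose2d(z_dim + n_classes, feat * 4, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 4),
            nn.ReLU(True),                       # 4 x 4
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2),
            nn.ReLU(True),                       # 8 x 8
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat),
            nn.ReLU(True),                       # 16 x 16
            nn.ConvTranspose2d(feat, img_channels, 4, 2, 1, bias=False),
            nn.Tanh(),                           # 32 x 32, values in [-1, 1]
        )

    def forward(self, z, labels):
        cond = self.label_emb(labels)            # (B, n_classes)
        x = torch.cat([z, cond], dim=1)          # (B, z_dim + n_classes)
        return self.net(x.unsqueeze(-1).unsqueeze(-1))

def latent_walk(gen, label, z_start, z_end, steps=8):
    """Linearly interpolate between two latent points for one fixed class."""
    alphas = torch.linspace(0, 1, steps).view(-1, 1)
    z = (1 - alphas) * z_start + alphas * z_end  # (steps, z_dim)
    labels = torch.full((steps,), label, dtype=torch.long)
    with torch.no_grad():
        return gen.eval()(z, labels)

gen = ConditionalGenerator()
imgs = latent_walk(gen, label=2,
                   z_start=torch.randn(1, 100), z_end=torch.randn(1, 100))
print(imgs.shape)  # torch.Size([8, 1, 32, 32])
```

Inspecting how the generated image changes smoothly along such a walk (for each lesion class) is one way to check that the generator has learned a meaningful latent space rather than memorizing training samples.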
Related papers
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z) - CleftGAN: Adapting A Style-Based Generative Adversarial Network To
Create Images Depicting Cleft Lip Deformity [2.1647227058902336]
We have built a deep learning-based cleft lip generator designed to produce an almost unlimited number of artificial images exhibiting high-fidelity facsimiles of cleft lip.
We undertook a transfer learning protocol testing different versions of StyleGAN-ADA.
Training images depicting a variety of cleft deformities were pre-processed for rotation, scaling, color adjustment and background blurring.
arXiv Detail & Related papers (2023-10-12T01:25:21Z) - Disruptive Autoencoders: Leveraging Low-level features for 3D Medical
Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z) - Generative Adversarial Networks for Brain Images Synthesis: A Review [2.609784101826762]
In medical imaging, image synthesis is the process of estimating one image (sequence, modality) from another image (sequence, modality).
The generative adversarial network (GAN) is one of the most popular generative deep learning methods for this task.
We summarized the recent developments of GANs for cross-modality brain image synthesis including CT to PET, CT to MRI, MRI to PET, and vice versa.
arXiv Detail & Related papers (2023-05-16T17:28:06Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - Translational Lung Imaging Analysis Through Disentangled Representations [0.0]
We present a model capable of extracting disentangled information from images of different animal models and the mechanisms that generate the images.
It is optimized on images of pathological lungs infected by Tuberculosis and is able, from an input slice, to infer its position in a volume, the animal model to which it belongs, and the damage present, and even to generate a mask covering the whole lung.
arXiv Detail & Related papers (2022-03-03T11:56:20Z) - Explainable multiple abnormality classification of chest CT volumes with
AxialNet and HiResCAM [89.2175350956813]
We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images.
We propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of top slices for each abnormality.
We then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions.
arXiv Detail & Related papers (2021-11-24T01:14:33Z) - Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on
Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistency Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image.
We train classification models using real images with classic data augmentation methods, and classification models using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z) - Weakly Supervised PET Tumor Detection Using Class Response [3.947298454012977]
We present a novel approach to locate different types of lesions in positron emission tomography (PET) images using only a class label at the image level.
The advantage of our proposed method is that it detects the whole tumor volume in 3D, using only two 2D images of the PET volume, and shows very promising results.
arXiv Detail & Related papers (2020-03-18T17:06:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.