Where is the disease? Semi-supervised pseudo-normality synthesis from an
abnormal image
- URL: http://arxiv.org/abs/2106.15345v1
- Date: Thu, 24 Jun 2021 05:56:41 GMT
- Title: Where is the disease? Semi-supervised pseudo-normality synthesis from an
abnormal image
- Authors: Yuanqi Du, Quan Quan, Hu Han, S. Kevin Zhou
- Abstract summary: We propose a Semi-supervised Medical Image generative LEarning network (SMILE) to generate realistic pseudo-normal images.
Our model outperforms the best state-of-the-art model by up to 6% on the data augmentation task and by 3% in generating high-quality images.
- Score: 24.547317269668312
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pseudo-normality synthesis, which computationally generates a pseudo-normal
image from an abnormal one (e.g., with lesions), is critical in many
perspectives, from lesion detection, data augmentation to clinical surgery
suggestion. However, it is challenging to generate high-quality pseudo-normal
images in the absence of the lesion information. Thus, expensive lesion
segmentation data have been introduced to provide lesion information for the
generative models and improve the quality of the synthetic images. In this
paper, we aim to alleviate the need for a large amount of lesion segmentation
data when generating pseudo-normal images. We propose a Semi-supervised Medical
Image generative LEarning network (SMILE) which not only utilizes limited
medical images with segmentation masks, but also leverages massive medical
images without segmentation masks to generate realistic pseudo-normal images.
Extensive experiments show that our model outperforms the best state-of-the-art
model by up to 6% on the data augmentation task and by 3% in generating
high-quality images. Moreover, the proposed semi-supervised learning achieves
medical image synthesis quality comparable to that of the supervised model,
using only 50% of the segmentation data.
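The abstract does not spell out how the labeled and unlabeled images are combined during training. One common semi-supervised pattern that fits the description is pseudo-labeling plus masked inpainting: a segmenter trained on the small masked subset proposes lesion masks for the unmasked images, and a generator fills the (pseudo-)masked regions with normal-appearing tissue. The sketch below illustrates only that pattern; the module sizes, losses, and names are illustrative assumptions, not the actual SMILE architecture.

```python
# Illustrative sketch of semi-supervised pseudo-normality synthesis via
# pseudo-labeling + masked inpainting (NOT the paper's SMILE design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Stand-in backbone, far smaller than a real segmenter/generator."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 1),
        )
    def forward(self, x):
        return self.net(x)

segmenter = TinyUNet(1, 1)   # predicts a lesion mask
generator = TinyUNet(2, 1)   # inpaints: input = [masked image, mask]
opt = torch.optim.Adam(
    list(segmenter.parameters()) + list(generator.parameters()), lr=1e-4)

def train_step(labeled_img, labeled_mask, unlabeled_img, thr=0.5):
    """One semi-supervised step: supervised mask loss on the small labeled
    subset, pseudo-mask-driven inpainting loss on the unlabeled subset."""
    # 1) Supervised segmentation on the labeled subset.
    seg_logits = segmenter(labeled_img)
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, labeled_mask)

    # 2) Pseudo-masks for unlabeled images (no gradient through them).
    with torch.no_grad():
        pseudo_mask = (torch.sigmoid(segmenter(unlabeled_img)) > thr).float()

    # 3) Inpaint the (pseudo-)masked region with normal-looking tissue.
    def inpaint(img, mask):
        return generator(torch.cat([img * (1 - mask), mask], dim=1))

    fake_normal_l = inpaint(labeled_img, labeled_mask)
    fake_normal_u = inpaint(unlabeled_img, pseudo_mask)

    # Reconstruction outside the lesion region; a real system would add an
    # adversarial realism loss on the inpainted region as well.
    recon_loss = (
        F.l1_loss(fake_normal_l * (1 - labeled_mask),
                  labeled_img * (1 - labeled_mask))
        + F.l1_loss(fake_normal_u * (1 - pseudo_mask),
                    unlabeled_img * (1 - pseudo_mask))
    )

    loss = seg_loss + recon_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example call with random tensors standing in for scans and masks.
loss = train_step(torch.rand(2, 1, 64, 64),
                  (torch.rand(2, 1, 64, 64) > 0.9).float(),
                  torch.rand(2, 1, 64, 64))
```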
Related papers
- Discriminative Hamiltonian Variational Autoencoder for Accurate Tumor Segmentation in Data-Scarce Regimes [2.8498944632323755]
We propose an end-to-end hybrid architecture for medical image segmentation.
We use Hamiltonian Variational Autoencoders (HVAE) and a discriminative regularization to improve the quality of generated images.
Our architecture operates on a slice-by-slice basis to segment 3D volumes, capitalizing on the richly augmented dataset.
arXiv Detail & Related papers (2024-06-17T15:42:08Z)
- Spatial-aware Attention Generative Adversarial Network for Semi-supervised Anomaly Detection in Medical Image [63.59114880750643]
We introduce a novel Spatial-aware Attention Generative Adversarial Network (SAGAN) for one-class semi-supervised generation of health images.
SAGAN generates high-quality health images corresponding to unlabeled data, guided by the reconstruction of normal images and restoration of pseudo-anomaly images.
Extensive experiments on three medical datasets demonstrate that the proposed SAGAN outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2024-05-21T15:41:34Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Could We Generate Cytology Images from Histopathology Images? An Empirical Study [1.791005104399795]
In this study, we have explored traditional image-to-image transfer models like CycleGAN and Neural Style Transfer.
arXiv Detail & Related papers (2024-03-16T10:43:12Z)
- GAN-GA: A Generative Model based on Genetic Algorithm for Medical Image Generation [0.0]
Generative models offer a promising solution for addressing medical image shortage problems.
This paper proposes the GAN-GA, a generative model optimized by embedding a genetic algorithm.
The proposed model enhances image fidelity and diversity while preserving distinctive features.
arXiv Detail & Related papers (2023-12-30T20:16:45Z)
- EMIT-Diff: Enhancing Medical Image Segmentation via Text-Guided Diffusion Model [4.057796755073023]
We develop controllable diffusion models for medical image synthesis, called EMIT-Diff.
We leverage recent diffusion probabilistic models to generate realistic and diverse synthetic medical image data.
In our approach, we ensure that the synthesized samples adhere to medically relevant constraints.
arXiv Detail & Related papers (2023-10-19T16:18:02Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Malignancy Prediction and Lesion Identification from Clinical Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
Our method first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates their likelihood of malignancy, and, through aggregation, also generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GAN-based model can generate a tumor image from a normal image and, in turn, a normal image from a tumor image (a minimal sketch of this cycle-consistency idea follows this list).
We train one classification model using real images with classic data augmentation methods and another classification model using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
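For the SAG-GAN entry directly above, the core mechanism is cycle-consistent translation between normal and tumor images. The following is a minimal sketch of the cycle-consistency term alone, with toy generators and an assumed loss weight; the adversarial and attention components of SAG-GAN are omitted, and all names here are ours, not the paper's implementation.

```python
# Minimal sketch of the cycle-consistency idea behind normal <-> tumor
# translation (CycleGAN-style augmentation); generators and loss weight
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_gen():
    # Toy image-to-image generator; a real model would be a deep encoder-decoder.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
    )

G_n2t = conv_gen()   # normal -> tumor
G_t2n = conv_gen()   # tumor  -> normal

def cycle_loss(normal_imgs, tumor_imgs, lam=10.0):
    """Translate each domain to the other and back; the round trip should
    reproduce the input (adversarial terms omitted for brevity)."""
    fake_tumor = G_n2t(normal_imgs)
    fake_normal = G_t2n(tumor_imgs)
    rec_normal = G_t2n(fake_tumor)    # normal -> tumor -> normal
    rec_tumor = G_n2t(fake_normal)    # tumor -> normal -> tumor
    return lam * (F.l1_loss(rec_normal, normal_imgs)
                  + F.l1_loss(rec_tumor, tumor_imgs))

# Example call with random tensors in [-1, 1] standing in for image batches.
loss = cycle_loss(torch.rand(2, 1, 64, 64) * 2 - 1,
                  torch.rand(2, 1, 64, 64) * 2 - 1)
```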
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.