CS$^2$: A Controllable and Simultaneous Synthesizer of Images and
Annotations with Minimal Human Intervention
- URL: http://arxiv.org/abs/2206.13394v1
- Date: Mon, 20 Jun 2022 15:09:10 GMT
- Title: CS$^2$: A Controllable and Simultaneous Synthesizer of Images and
Annotations with Minimal Human Intervention
- Authors: Xiaodan Xing, Jiahao Huang, Yang Nan, Yinzhe Wu, Chengjia Wang, Zhifan
Gao, Simon Walsh, Guang Yang
- Abstract summary: We propose a novel controllable and simultaneous synthesizer (dubbed CS$^2$) to generate both realistic images and corresponding annotations at the same time.
Our contributions include 1) a conditional image synthesis network that receives both style information from reference CT images and structural information from unsupervised segmentation masks, and 2) a corresponding segmentation mask network to automatically segment these synthesized images simultaneously.
- Score: 3.465671939864428
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The scarcity of image data and corresponding expert annotations limits
the training capacity of AI diagnostic models and can inhibit their
performance. To address this problem of data and label scarcity, generative
models have been developed to augment the training datasets. Previously
proposed generative models usually require manually adjusted annotations (e.g.,
segmentation masks) or pre-labeling. However, studies have found that
these pre-labeling-based methods can introduce hallucinated artifacts, which
might mislead downstream clinical tasks, while manual adjustment can be
onerous and subjective. To avoid manual adjustment and pre-labeling, we propose
a novel controllable and simultaneous synthesizer (dubbed CS$^2$) in this study
to generate both realistic images and corresponding annotations at the same
time. Our CS$^2$ model is trained and validated on high-resolution CT (HRCT)
data collected from COVID-19 patients to enable efficient segmentation of
infections with minimal human intervention. Our contributions include 1) a
conditional image synthesis network that receives both style information from
reference CT images and structural information from unsupervised segmentation
masks, and 2) a corresponding segmentation mask synthesis network to
automatically segment these synthesized images simultaneously. Our experimental
studies on HRCT scans collected from COVID-19 patients demonstrate that our
CS$^2$ model yields realistic synthesized datasets and promising segmentation
of COVID-19 infections compared with the state-of-the-art nnUNet trained and
fine-tuned in a fully supervised manner.
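The two-network idea described in the abstract (a synthesis network conditioned on style from a reference CT image and structure from an unsupervised mask, paired with a network that labels the synthetic image) can be illustrated with a toy numpy sketch. This is not the authors' implementation: the region-statistics "synthesis", the thresholding "segmentation head", and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize(style_img, mask):
    """Toy conditional synthesis: paint each unsupervised-mask region
    with intensity statistics borrowed from the reference (style) image."""
    out = np.zeros_like(style_img, dtype=float)
    for label in np.unique(mask):
        region = mask == label
        mu, sigma = style_img[region].mean(), style_img[region].std()
        out[region] = rng.normal(mu, sigma, size=region.sum())
    return out

def segment(image, threshold):
    """Toy stand-in for the simultaneous annotation network:
    label bright pixels in the synthetic image."""
    return (image > threshold).astype(np.uint8)

# Reference CT slice (style) and an unsupervised two-region mask (structure).
style = rng.normal(100.0, 10.0, size=(8, 8))
style[2:6, 2:6] += 80.0                  # brighter "infection" patch
mask = np.zeros((8, 8), dtype=int)
mask[2:6, 2:6] = 1

synthetic = synthesize(style, mask)      # image conditioned on both inputs
annotation = segment(synthetic, 140.0)   # paired label, no manual adjustment
```

The point of the sketch is the pairing: every synthetic image leaves the pipeline together with an annotation, so no pre-labeling or manual mask editing is needed downstream.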
Related papers
- Neurovascular Segmentation in sOCT with Deep Learning and Synthetic Training Data [4.5276169699857505]
This study demonstrates a synthesis engine for neurovascular segmentation in serial-section optical coherence tomography images.
Our approach comprises two phases: label synthesis and label-to-image transformation.
We demonstrate the efficacy of the former by comparing it to several more realistic sets of training labels, and the latter by an ablation study of synthetic noise and artifact models.
arXiv Detail & Related papers (2024-07-01T16:09:07Z)
- Unsupervised Contrastive Analysis for Salient Pattern Detection using Conditional Diffusion Models [13.970483987621135]
Contrastive Analysis (CA) aims to identify patterns in images that distinguish a background (BG) dataset from a target (TG) dataset (i.e., unhealthy subjects).
Recent works on this topic rely on variational autoencoders (VAE) or contrastive learning strategies to learn the patterns that separate TG samples from BG samples in a supervised manner.
We employ a self-supervised contrastive encoder to learn a latent representation encoding only common patterns from input images, using samples exclusively from the BG dataset during training, and approximating the distribution of the target patterns by leveraging data augmentation techniques.
arXiv Detail & Related papers (2024-06-02T15:19:07Z)
- Gadolinium dose reduction for brain MRI using conditional deep learning [66.99830668082234]
Two main challenges for these approaches are the accurate prediction of contrast enhancement and the synthesis of realistic images.
We address both challenges by utilizing the contrast signal encoded in the subtraction images of pre-contrast and post-contrast image pairs.
We demonstrate the effectiveness of our approach on synthetic and real datasets using various scanners, field strengths, and contrast agents.
arXiv Detail & Related papers (2024-03-06T08:35:29Z)
- Retinal OCT Synthesis with Denoising Diffusion Probabilistic Models for Layer Segmentation [2.4113205575263708]
We propose an image synthesis method that utilizes denoising diffusion probabilistic models (DDPMs) to automatically generate retinal optical coherence tomography (OCT) images.
We observe a consistent improvement in layer segmentation accuracy, which is validated using various neural networks.
These findings demonstrate the promising potential of DDPMs in reducing the need for manual annotations of retinal OCT images.
arXiv Detail & Related papers (2023-11-09T16:09:24Z)
- Self-Supervised and Semi-Supervised Polyp Segmentation using Synthetic Data [16.356954231068077]
Early detection of colorectal polyps is of utmost importance for their treatment and for colorectal cancer prevention.
Computer vision techniques have the potential to aid professionals in the diagnosis stage, where colonoscopies are manually carried out to examine the entirety of the patient's colon.
The main challenge in medical imaging is the lack of data, and a further challenge specific to polyp segmentation approaches is the difficulty of manually labeling the available data.
We propose an end-to-end model for polyp segmentation that integrates real and synthetic data to artificially increase the size of the datasets and aid the training when unlabeled samples are available.
arXiv Detail & Related papers (2023-07-22T09:57:58Z)
- Less is More: Unsupervised Mask-guided Annotated CT Image Synthesis with Minimum Manual Segmentations [2.1785903900600316]
We propose a novel strategy for medical image synthesis, namely Unsupervised Mask (UM)-guided synthesis.
UM-guided synthesis provided high-quality synthetic images with significantly higher fidelity, variety, and utility.
arXiv Detail & Related papers (2023-03-19T20:30:35Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Data-driven generation of plausible tissue geometries for realistic photoacoustic image synthesis [53.65837038435433]
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate".
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
arXiv Detail & Related papers (2021-03-29T11:30:18Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.