Less is More: Unsupervised Mask-guided Annotated CT Image Synthesis with Minimum Manual Segmentations
- URL: http://arxiv.org/abs/2303.12747v1
- Date: Sun, 19 Mar 2023 20:30:35 GMT
- Title: Less is More: Unsupervised Mask-guided Annotated CT Image Synthesis with Minimum Manual Segmentations
- Authors: Xiaodan Xing, Giorgos Papanastasiou, Simon Walsh, Guang Yang
- Abstract summary: We propose a novel strategy for medical image synthesis, namely Unsupervised Mask (UM)-guided synthesis.
UM-guided synthesis provided high-quality synthetic images with significantly higher fidelity, variety, and utility.
- Score: 2.1785903900600316
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As a pragmatic data augmentation tool, data synthesis has generally
returned dividends in performance for deep learning-based medical image analysis.
However, generating corresponding segmentation masks for synthetic medical
images is laborious and subjective. To obtain paired synthetic medical images
and segmentations, conditional generative models that use segmentation masks as
synthesis conditions were proposed. However, these segmentation
mask-conditioned generative models still relied on large, varied, and labeled
training datasets, and they could only provide limited constraints on human
anatomical structures, leading to unrealistic image features. Moreover, the
invariant pixel-level conditions could reduce the variety of synthetic lesions
and thus reduce the efficacy of data augmentation. To address these issues, in
this work, we propose a novel strategy for medical image synthesis, namely
Unsupervised Mask (UM)-guided synthesis, to obtain both synthetic images and
segmentations using limited manual segmentation labels. We first develop a
superpixel based algorithm to generate unsupervised structural guidance and
then design a conditional generative model to synthesize images and annotations
simultaneously from those unsupervised masks in a semi-supervised multi-task
setting. In addition, we devise a multi-scale multi-task Fréchet Inception
Distance (MM-FID) and a multi-scale multi-task standard deviation (MM-STD) to
assess both the fidelity and the variety of synthetic CT images. With multiple
analyses at different scales, we produce stable image quality measurements
with high reproducibility. Compared with segmentation mask-guided synthesis,
our UM-guided synthesis provided high-quality synthetic images with
significantly higher fidelity, variety, and utility ($p<0.05$, Wilcoxon
signed-rank test).
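
The superpixel step described above can be approximated with off-the-shelf tools. Below is a minimal sketch, assuming scikit-image's SLIC stands in for the paper's superpixel-based algorithm; the function name, parameters, and normalization are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: unsupervised structural guidance from superpixels.
# Assumption: scikit-image's SLIC approximates the paper's superpixel
# algorithm; all parameter choices here are illustrative.
import numpy as np
from skimage.segmentation import slic

def unsupervised_mask(ct_slice: np.ndarray, n_segments: int = 200) -> np.ndarray:
    """Cluster a 2D CT slice into superpixels and return the integer
    label map used as an unsupervised structural guide."""
    # Rescale HU values to [0, 1] so SLIC's intensity distance is well scaled.
    lo, hi = np.percentile(ct_slice, [1, 99])
    norm = np.clip((ct_slice - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    # channel_axis=None marks the input as single-channel (grayscale).
    return slic(norm, n_segments=n_segments, compactness=0.1,
                channel_axis=None, start_label=0)
```

On the evaluation side, the standard Fréchet Inception Distance between real and synthetic feature distributions is
$$\mathrm{FID} = \lVert \mu_r - \mu_s \rVert_2^2 + \mathrm{Tr}\!\big(\Sigma_r + \Sigma_s - 2\,(\Sigma_r \Sigma_s)^{1/2}\big),$$
where $(\mu_r, \Sigma_r)$ and $(\mu_s, \Sigma_s)$ are the mean and covariance of Inception features of real and synthetic images. A plausible reading of MM-FID is this quantity computed at several image scales (and across tasks) and then aggregated; the exact aggregation is defined in the paper.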
Related papers
- EMIT-Diff: Enhancing Medical Image Segmentation via Text-Guided Diffusion Model [4.057796755073023]
We develop controllable diffusion models for medical image synthesis, called EMIT-Diff.
We leverage recent diffusion probabilistic models to generate realistic and diverse synthetic medical image data.
In our approach, we ensure that the synthesized samples adhere to medically relevant constraints.
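
For context, the approach above rests on standard denoising diffusion machinery. A minimal sketch of one DDPM reverse (sampling) step follows; it shows only the generic formulation, not EMIT-Diff's text guidance or medical constraints, and `eps_model` is a hypothetical noise-prediction network.

```python
# Hedged sketch: one standard DDPM reverse step (not EMIT-Diff's
# specific conditioning). `eps_model` is a hypothetical network that
# predicts the noise added at step t.
import torch

def ddpm_reverse_step(x_t, t, eps_model, betas):
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    eps = eps_model(x_t, t)                        # predicted noise
    coef = (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bar[t])
    mean = (x_t - coef * eps) / torch.sqrt(alphas[t])
    if t == 0:
        return mean                                # final step adds no noise
    z = torch.randn_like(x_t)
    return mean + torch.sqrt(betas[t]) * z         # sigma_t^2 = beta_t variant
```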
arXiv Detail & Related papers (2023-10-19T16:18:02Z)
- An Attentive-based Generative Model for Medical Image Synthesis [18.94900480135376]
We propose an attention-based dual contrast generative model, called ADC-cycleGAN, which can synthesize medical images from unpaired data with multiple slices.
The model integrates a dual contrast loss term with the CycleGAN loss to ensure that the synthesized images are distinguishable from the source domain.
Experimental results demonstrate that the proposed ADC-cycleGAN model produces comparable samples to other state-of-the-art generative models.
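
A hedged sketch of the loss combination named above: a CycleGAN-style adversarial plus cycle-consistency objective with an added contrast term that keeps translations away from their source. The hinge form and all weights below are illustrative assumptions; the exact dual contrast loss is defined in the paper.

```python
# Hedged sketch: CycleGAN loss plus a contrast term (the hinge form is
# an illustrative stand-in for ADC-cycleGAN's dual contrast loss).
import torch
import torch.nn.functional as F

def combined_loss(real_a, fake_b, rec_a, disc_b_on_fake,
                  lam_cyc=10.0, lam_con=1.0, margin=0.1):
    # Least-squares adversarial term: fake_b should fool discriminator D_B.
    adv = F.mse_loss(disc_b_on_fake, torch.ones_like(disc_b_on_fake))
    # Cycle consistency: A -> B -> A should reconstruct the input.
    cyc = F.l1_loss(rec_a, real_a)
    # Contrast term: penalize translations that stay within `margin` of the
    # source, keeping outputs distinguishable from the source domain.
    con = F.relu(margin - F.l1_loss(fake_b, real_a))
    return adv + lam_cyc * cyc + lam_con * con
```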
arXiv Detail & Related papers (2023-06-02T14:17:37Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework, Generalizable Model-based Neural Radiance Fields (GM-NeRF), to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- A Self-attention Guided Multi-scale Gradient GAN for Diversified X-ray Image Synthesis [0.6308539010172307]
Generative Adversarial Networks (GANs) are utilized to address the data limitation problem via the generation of synthetic images.
Training challenges such as mode collapse, non-convergence, and instability degrade a GAN's performance in synthesizing diversified and high-quality images.
This work proposes an attention-guided multi-scale gradient GAN architecture to model the relationship between long-range dependencies of biomedical image features.
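
A minimal sketch of the self-attention block such attention-guided GANs typically insert to capture long-range dependencies (the SAGAN formulation; the paper's exact multi-scale gradient architecture is not reproduced here).

```python
# Hedged sketch: SAGAN-style 2D self-attention, the standard block for
# modeling long-range dependencies in GAN feature maps.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learned residual gate

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)           # (b, hw, hw)
        v = self.value(x).flatten(2)                  # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual connection
```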
arXiv Detail & Related papers (2022-10-09T13:17:17Z)
- One-Shot Synthesis of Images and Segmentation Masks [28.119303696418882]
Joint synthesis of images and segmentation masks with generative adversarial networks (GANs) is promising to reduce the effort needed for collecting image data with pixel-wise annotations.
To learn high-fidelity image-mask synthesis, existing GAN approaches first need a pre-training phase requiring large amounts of image data.
We introduce our OSMIS model which enables the synthesis of segmentation masks that are precisely aligned to the generated images in the one-shot regime.
arXiv Detail & Related papers (2022-09-15T18:00:55Z)
- SIAN: Style-Guided Instance-Adaptive Normalization for Multi-Organ Histopathology Image Synthesis [63.845552349914186]
We propose a style-guided instance-adaptive normalization (SIAN) to synthesize realistic color distributions and textures for different organs.
Its four phases work together and are integrated into a generative network to embed image semantics, style, and instance-level boundaries.
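
The instance-adaptive normalization named above builds on the generic adaptive-normalization mechanism sketched below, where a style code produces per-channel scale and shift; SIAN's semantic and instance-boundary conditioning goes beyond this minimal form.

```python
# Hedged sketch: generic adaptive instance normalization. A style code
# modulates instance-normalized features; SIAN's instance-adaptive
# variant adds semantic and boundary conditioning beyond this.
import torch
import torch.nn as nn

class AdaIN2d(nn.Module):
    def __init__(self, channels: int, style_dim: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.affine = nn.Linear(style_dim, 2 * channels)  # -> (scale, shift)

    def forward(self, x, style):
        gamma, beta = self.affine(style).chunk(2, dim=1)  # (b, c) each
        gamma = gamma[..., None, None]                    # (b, c, 1, 1)
        beta = beta[..., None, None]
        return (1 + gamma) * self.norm(x) + beta
```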
arXiv Detail & Related papers (2022-09-02T16:45:46Z)
- Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto Generative Adversarial Nets (GANs).
arXiv Detail & Related papers (2022-06-30T18:31:51Z)
- CS$^2$: A Controllable and Simultaneous Synthesizer of Images and Annotations with Minimal Human Intervention [3.465671939864428]
We propose a novel controllable and simultaneous synthesizer (dubbed CS$^2$) to generate both realistic images and corresponding annotations at the same time.
Our contributions include 1) a conditional image synthesis network that receives both style information from reference CT images and structural information from unsupervised segmentation masks, and 2) a corresponding segmentation mask network to automatically segment these synthesized images simultaneously.
arXiv Detail & Related papers (2022-06-20T15:09:10Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE places a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) latents to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where either, two, or three of four sub-modalities may be missing.
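
Schematically, the change relative to a standard VAE is the prior: the latent codes of a subject's sub-modalities share a Gaussian Process prior rather than an isotropic Gaussian, so the ELBO's KL term is taken against the GP (a hedged reading; the paper's kernel and factorization are not reproduced here):
$$\mathcal{L} = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\big(q_\phi(z \mid x)\,\big\|\,\mathcal{GP}(0, k)\big),$$
where $k$ is a kernel over subjects and sub-modalities; the induced correlations let a missing sub-modality be imputed by decoding its GP-conditioned latent.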
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- You Only Need Adversarial Supervision for Semantic Image Synthesis [84.83711654797342]
We propose a novel, simplified GAN model, which needs only adversarial supervision to achieve high quality results.
We show that images synthesized by our model are more diverse and follow the color and texture of real images more closely.
arXiv Detail & Related papers (2020-12-08T23:00:48Z)
- Multimodal Image Synthesis with Conditional Implicit Maximum Likelihood Estimation [54.17177006826262]
We develop a new generic conditional image synthesis method based on Implicit Maximum Likelihood Estimation (IMLE).
We demonstrate improved multimodal image synthesis performance on two tasks, single image super-resolution and image synthesis from scene layouts.
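
IMLE's core objective is compact enough to sketch: for every real example, find the nearest generated sample and minimize only that distance, so each real mode attracts some sample and mode dropping is avoided. The unconditional form below is a minimal sketch; the conditional, multimodal variant in the paper adds a conditioning input.

```python
# Hedged sketch: the (unconditional) IMLE objective. Each real sample
# is matched to its nearest generated sample; only matched distances
# are minimized, so no real mode is left uncovered.
import torch

def imle_loss(real, generator, z_dim: int, n_samples: int = 64):
    """real: (n, d) flattened data; generator maps (m, z_dim) -> (m, d)."""
    z = torch.randn(n_samples, z_dim)
    fake = generator(z)                           # (m, d) generated samples
    d2 = torch.cdist(real, fake, p=2).pow(2)      # (n, m) pairwise sq. L2
    return d2.min(dim=1).values.mean()            # nearest match per real point
```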
arXiv Detail & Related papers (2020-04-07T03:06:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.