Joint one-sided synthetic unpaired image translation and segmentation
for colorectal cancer prevention
- URL: http://arxiv.org/abs/2307.11253v1
- Date: Thu, 20 Jul 2023 22:09:04 GMT
- Authors: Enric Moreu, Eric Arazo, Kevin McGuinness, Noel E. O'Connor
- Abstract summary: We produce realistic synthetic images using a combination of 3D technologies and generative adversarial networks.
We propose CUT-seg, a joint training where a segmentation model and a generative model are jointly trained to produce realistic images.
As a part of this study we release Synth-Colon, an entirely synthetic dataset that includes 20000 realistic colon images.
- Score: 16.356954231068077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has shown excellent performance in analysing medical images.
However, datasets are difficult to obtain due to privacy issues, standardization
problems, and lack of annotations. We address these problems by producing
realistic synthetic images using a combination of 3D technologies and
generative adversarial networks. We propose CUT-seg, a joint training where a
segmentation model and a generative model are jointly trained to produce
realistic images while learning to segment polyps. We take advantage of recent
one-sided translation models because they use significantly less memory,
allowing us to add a segmentation model in the training loop. CUT-seg performs
better, is computationally less expensive, and requires fewer real images than
other memory-intensive image translation approaches that require two-stage
training. Promising results are achieved on five real polyp segmentation
datasets using only one real image and zero real annotations. As a part of this
study we release Synth-Colon, an entirely synthetic dataset that includes 20000
realistic colon images and additional details about depth and 3D geometry:
https://enric1994.github.io/synth-colon
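The joint training the abstract describes (a one-sided translation model and a segmentation model updated in the same loop, with segmentation supervised by the known synthetic masks) can be sketched roughly as follows. The tiny single-convolution networks, tensor shapes, and simplified loss are placeholders for illustration only; the actual CUT-seg architecture and its PatchNCE/adversarial terms are omitted here:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the translation and segmentation networks.
class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)
    def forward(self, x):
        return torch.tanh(self.net(x))  # translated image in [-1, 1]

class TinySegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 1, kernel_size=3, padding=1)
    def forward(self, x):
        return self.net(x)  # polyp mask logits

gen = TinyGenerator()
seg = TinySegmenter()
opt = torch.optim.Adam(list(gen.parameters()) + list(seg.parameters()), lr=1e-4)

# Toy batch: rendered colon images with their (free) synthetic polyp masks.
synthetic = torch.rand(2, 3, 64, 64)
masks = (torch.rand(2, 1, 64, 64) > 0.5).float()

# One joint step: translate synthetic images toward the real domain,
# then segment the translated images against the synthetic ground truth.
translated = gen(synthetic)
logits = seg(translated)
seg_loss = nn.functional.binary_cross_entropy_with_logits(logits, masks)
# The adversarial and PatchNCE contrastive terms of the one-sided
# translation objective would be added here; zeroed out in this sketch.
translation_loss = translated.sum() * 0.0
loss = seg_loss + translation_loss
opt.zero_grad()
loss.backward()
opt.step()
```

Because the segmentation loss backpropagates through the generator, the translated images are pushed to remain segmentable, which is the point of putting both models in one training loop.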
Related papers
- Diffusion-based Data Augmentation for Nuclei Image Segmentation [68.28350341833526]
We introduce the first diffusion-based augmentation method for nuclei segmentation.
The idea is to synthesize a large number of labeled images to facilitate training the segmentation model.
The experimental results show that by augmenting a 10% labeled real dataset with synthetic samples, one can achieve comparable segmentation results.
arXiv Detail & Related papers (2023-10-22T06:16:16Z)
- DatasetDM: Synthesizing Data with Perception Annotations Using Diffusion Models [61.906934570771256]
We present a generic dataset generation model that can produce diverse synthetic images and perception annotations.
Our method builds upon the pre-trained diffusion model and extends text-guided image synthesis to perception data generation.
We show that the rich latent code of the diffusion model can be effectively decoded as accurate perception annotations using a decoder module.
arXiv Detail & Related papers (2023-08-11T14:38:11Z)
- Image Captions are Natural Prompts for Text-to-Image Models [70.30915140413383]
We analyze the relationship between the training effect of synthetic data and the synthetic data distribution induced by prompts.
We propose a simple yet effective method that prompts text-to-image generative models to synthesize more informative and diverse training data.
Our method significantly improves the performance of models trained on synthetic training data.
arXiv Detail & Related papers (2023-07-17T14:38:11Z)
- Synthetic-to-Real Domain Adaptation using Contrastive Unpaired Translation [28.19031441659854]
We propose a multi-step method to obtain training data without manual annotation effort.
From 3D object meshes, we generate images using a modern synthesis pipeline.
We utilize a state-of-the-art image-to-image translation method to adapt the synthetic images to the real domain.
arXiv Detail & Related papers (2022-03-17T17:13:23Z)
- Synthetic data for unsupervised polyp segmentation [16.320983705522423]
We produce realistic synthetic colon images using a combination of 3D technologies and generative adversarial networks.
Our fully unsupervised method achieves promising results on five real polyp segmentation datasets.
As a part of this study we release Synth-Colon, an entirely synthetic dataset that includes 20000 realistic colon images.
arXiv Detail & Related papers (2022-02-17T14:32:33Z)
- A Shared Representation for Photorealistic Driving Simulators [83.5985178314263]
We propose to improve the quality of generated images by rethinking the discriminator architecture.
The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses.
We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation and content reconstruction, along with coarse-to-fine adversarial reasoning.
arXiv Detail & Related papers (2021-12-09T18:59:21Z)
- Image Translation for Medical Image Generation -- Ischemic Stroke Lesions [0.0]
Synthetic databases with annotated pathologies could provide the required amounts of training data.
We train different image-to-image translation models to synthesize magnetic resonance images of brain volumes with and without stroke lesions.
We show that for a small database of only 10 or 50 clinical cases, synthetic data augmentation yields significant improvement.
arXiv Detail & Related papers (2020-10-05T09:12:28Z)
- Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.