Synthetic-to-Real Domain Adaptation using Contrastive Unpaired
Translation
- URL: http://arxiv.org/abs/2203.09454v1
- Date: Thu, 17 Mar 2022 17:13:23 GMT
- Title: Synthetic-to-Real Domain Adaptation using Contrastive Unpaired
Translation
- Authors: Benedikt T. Imbusch, Max Schwarz, Sven Behnke
- Abstract summary: We propose a multi-step method to obtain training data without manual annotation effort.
From 3D object meshes, we generate images using a modern synthesis pipeline.
We utilize a state-of-the-art image-to-image translation method to adapt the synthetic images to the real domain.
- Score: 28.19031441659854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The usefulness of deep learning models in robotics is largely dependent on
the availability of training data. Manual annotation of training data is often
infeasible. Synthetic data is a viable alternative, but suffers from domain
gap. We propose a multi-step method to obtain training data without manual
annotation effort: From 3D object meshes, we generate images using a modern
synthesis pipeline. We utilize a state-of-the-art image-to-image translation
method to adapt the synthetic images to the real domain, minimizing the domain
gap in a learned manner. The translation network is trained from unpaired
images, i.e., it just requires an unannotated collection of real images. The
generated and refined images can then be used to train deep learning models for
a particular task. We also propose and evaluate extensions to the translation
method that further increase performance, such as patch-based training, which
shortens training time and increases global consistency. We evaluate our method
and demonstrate its effectiveness on two robotic datasets. We finally give
insight into the learned refinement operations.
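The refinement step builds on Contrastive Unpaired Translation (CUT), whose central ingredient is a patchwise contrastive (InfoNCE) loss that pulls features of the translated image towards features of the synthetic input at the same spatial locations while pushing them away from other locations. Below is a minimal sketch of such a loss in PyTorch; it is not the authors' implementation, and the tensor names, patch sampling, and temperature are illustrative assumptions.

```python
# Minimal sketch of a CUT-style patchwise contrastive (PatchNCE) loss.
# `feat_src` / `feat_tgt` are feature maps of the synthetic input and its
# translated output taken from the same encoder layer; names are illustrative.
import torch
import torch.nn.functional as F


def patch_nce_loss(feat_src, feat_tgt, num_patches=256, tau=0.07):
    """InfoNCE over spatial patches: the patch at the same location is the
    positive, the other sampled patches of the same image are negatives."""
    b, c, h, w = feat_src.shape
    num_patches = min(num_patches, h * w)

    # Flatten spatial dimensions and sample a common set of patch locations.
    src = feat_src.flatten(2).permute(0, 2, 1)            # (B, H*W, C)
    tgt = feat_tgt.flatten(2).permute(0, 2, 1)
    idx = torch.randperm(h * w, device=feat_src.device)[:num_patches]
    src = F.normalize(src[:, idx], dim=-1)                # (B, P, C)
    tgt = F.normalize(tgt[:, idx], dim=-1)

    # Similarity of every translated patch to every source patch.
    logits = torch.bmm(tgt, src.transpose(1, 2)) / tau    # (B, P, P)
    labels = torch.arange(num_patches, device=logits.device).expand(b, -1)
    return F.cross_entropy(logits.reshape(-1, num_patches), labels.reshape(-1))
```

In the full method this term is combined with an adversarial loss against the unpaired real images; the patch-based training extension mentioned in the abstract (presumably operating on image crops rather than full frames) is not shown here.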
Related papers
- Data Attribution for Text-to-Image Models by Unlearning Synthesized Images [71.23012718682634]
The goal of data attribution for text-to-image models is to identify the training images that most influence the generation of a new image.
We propose a new approach that efficiently identifies highly-influential images.
arXiv Detail & Related papers (2024-06-13T17:59:44Z)
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose a dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- Premonition: Using Generative Models to Preempt Future Data Changes in Continual Learning [63.850451635362425]
Continual learning requires a model to adapt to ongoing changes in the data distribution.
We show that the combination of a large language model and an image generation model can similarly provide useful premonitions.
We find that the backbone of our pre-trained networks can learn representations useful for the downstream continual learning problem.
arXiv Detail & Related papers (2024-03-12T06:29:54Z)
- Exploring Semantic Consistency in Unpaired Image Translation to Generate Data for Surgical Applications [1.8011391924021904]
This study empirically investigates unpaired image translation methods for generating suitable data in surgical applications.
We find that a simple combination of structural-similarity loss and contrastive learning yields the most promising results.
arXiv Detail & Related papers (2023-09-06T14:43:22Z)
- Joint one-sided synthetic unpaired image translation and segmentation for colorectal cancer prevention [16.356954231068077]
We produce realistic synthetic images using a combination of 3D technologies and generative adversarial networks.
We propose CUT-seg, a joint training scheme in which a segmentation model and a generative model are trained together to produce realistic images (a schematic training step is sketched after this list).
As a part of this study we release Synth-Colon, an entirely synthetic dataset that includes 20000 realistic colon images.
arXiv Detail & Related papers (2023-07-20T22:09:04Z)
- Image Captions are Natural Prompts for Text-to-Image Models [70.30915140413383]
We analyze the relationship between the training effect of synthetic data and the synthetic data distribution induced by prompts.
We propose a simple yet effective method that prompts text-to-image generative models to synthesize more informative and diverse training data.
Our method significantly improves the performance of models trained on synthetic training data.
arXiv Detail & Related papers (2023-07-17T14:38:11Z)
- Synthetic Image Data for Deep Learning [0.294944680995069]
Realistic synthetic image data rendered from 3D models can be used to augment image sets and train image classification and semantic segmentation models.
We show how high quality physically-based rendering and domain randomization can efficiently create a large synthetic dataset based on production 3D CAD models of a real vehicle.
arXiv Detail & Related papers (2022-12-12T20:28:13Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Domain Adaptation with Morphologic Segmentation [8.0698976170854]
We present a novel domain adaptation framework that uses morphologic segmentation to translate images from arbitrary input domains (real and synthetic) into a uniform output domain.
Our goal is to establish a preprocessing step that unifies data from multiple sources into a common representation.
We showcase the effectiveness of our approach by qualitatively and quantitatively evaluating it on four datasets of simulated and real urban scenes.
arXiv Detail & Related papers (2020-06-16T17:06:02Z)
- Syn2Real Transfer Learning for Image Deraining using Gaussian Processes [92.15895515035795]
CNN-based methods for image deraining have achieved excellent performance in terms of reconstruction error as well as visual quality.
Due to challenges in obtaining real world fully-labeled image deraining datasets, existing methods are trained only on synthetically generated data.
We propose a Gaussian Process-based semi-supervised learning framework that enables the network to learn deraining from a synthetic dataset.
arXiv Detail & Related papers (2020-06-10T00:33:18Z)
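As a sketch of the joint training idea mentioned in the CUT-seg entry above, the following schematic optimization step updates a translation generator and a segmentation network together, so the segmentation loss computed with the known synthetic masks also shapes the translated images. It assumes PyTorch; the module interfaces and the single collapsed realism term stand in for the full adversarial and contrastive machinery and are not the authors' implementation.

```python
# Schematic joint step in the spirit of CUT-seg: the optimizer is assumed to
# hold the parameters of both `generator` and `segmenter`; `gan_loss_fn` is a
# placeholder for the realism term (e.g. a discriminator-based loss).
import torch.nn.functional as F


def joint_step(generator, segmenter, gan_loss_fn, optimizer, synth_img, synth_mask):
    """One optimization step over a batch of synthetic images and their masks."""
    optimizer.zero_grad()
    translated = generator(synth_img)        # synthetic -> "real-looking" image
    seg_logits = segmenter(translated)       # segment the translated image
    seg_loss = F.cross_entropy(seg_logits, synth_mask)
    loss = gan_loss_fn(translated) + seg_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

Coupling the two objectives lets the segmentation loss discourage translations that destroy the structures to be segmented.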
This list is automatically generated from the titles and abstracts of the papers on this site.