Unaligned Image-to-Image Translation by Learning to Reweight
- URL: http://arxiv.org/abs/2109.11736v1
- Date: Fri, 24 Sep 2021 04:08:22 GMT
- Title: Unaligned Image-to-Image Translation by Learning to Reweight
- Authors: Shaoan Xie, Mingming Gong, Yanwu Xu, and Kun Zhang
- Abstract summary: Unsupervised image-to-image translation aims at learning the mapping from the source to target domain without using paired images for training.
An essential yet restrictive assumption for unsupervised image translation is that the two domains are aligned.
We propose to select images based on importance reweighting and develop a method to learn the weights and perform translation simultaneously and automatically.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised image-to-image translation aims at learning the mapping from the
source to target domain without using paired images for training. An essential
yet restrictive assumption for unsupervised image translation is that the two
domains are aligned, e.g., for the selfie2anime task, the anime (selfie) domain
must contain only anime (selfie) face images that can be translated to some
images in the other domain. Collecting aligned domains can be laborious and
requires careful curation. In this paper, we consider the task of image
translation between two unaligned domains, which may arise for various possible
reasons. To solve this problem, we propose to select images based on importance
reweighting and develop a method to learn the weights and perform translation
simultaneously and automatically. We compare the proposed method with
state-of-the-art image translation approaches and present qualitative and
quantitative results on different tasks with unaligned domains. Extensive
empirical evidence demonstrates the usefulness of the proposed problem
formulation and the superiority of our method.
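The importance-reweighting idea in the abstract can be illustrated with a minimal numpy sketch. This is a hypothetical illustration, not the paper's actual formulation: the paper learns the weights jointly with the translator, whereas here fixed per-sample "alignability" scores stand in for that learned signal, and the weights are normalized to mean 1 so the weighted loss keeps its overall scale.

```python
import numpy as np

def importance_weights(scores, beta=1.0):
    """Map per-sample alignability scores to importance weights with
    mean 1. A higher score means the sample looks translatable to the
    other domain and should contribute more to training.
    Hypothetical sketch: the paper learns these weights jointly with
    the translation networks rather than deriving them from fixed scores."""
    w = np.exp(beta * (scores - scores.max()))  # shift for numerical stability
    return w * len(w) / w.sum()                 # normalize so mean(w) == 1

def reweighted_loss(per_sample_losses, weights):
    """Importance-weighted objective: samples judged unalignable
    (low weight) contribute less to the translation loss."""
    return float(np.mean(weights * per_sample_losses))
```

With uniform per-sample losses the reweighted loss equals the unweighted one, since the weights average to 1; the reweighting only redistributes emphasis across samples.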
Related papers
- ACE: Zero-Shot Image to Image Translation via Pretrained Auto-Contrastive-Encoder [2.1874189959020427]
We propose a new approach to extract image features by learning the similarities and differences of samples within the same data distribution.
The design of ACE enables us to achieve zero-shot image-to-image translation with no training on image translation tasks for the first time.
Our model achieves competitive results on multimodal image translation tasks with zero-shot learning as well.
arXiv Detail & Related papers (2023-02-22T23:52:23Z)
- Multi-domain Unsupervised Image-to-Image Translation with Appearance Adaptive Convolution [62.4972011636884]
We propose a novel multi-domain unsupervised image-to-image translation (MDUIT) framework.
We exploit the decomposed content feature and appearance adaptive convolution to translate an image into a target appearance.
We show that the proposed method produces visually diverse and plausible results in multiple domains compared to the state-of-the-art methods.
arXiv Detail & Related papers (2022-02-06T14:12:34Z)
- Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2 Network [73.5062435623908]
We propose a new I2I translation method that generates a new model in the target domain via a series of model transformations.
By feeding the latent vector into the generated model, we can perform I2I translation between the source domain and target domain.
arXiv Detail & Related papers (2020-10-12T13:51:40Z)
- BalaGAN: Image Translation Between Imbalanced Domains via Cross-Modal Transfer [53.79505340315916]
We introduce BalaGAN, specifically designed to tackle the domain imbalance problem.
We leverage the latent modalities of the richer domain to turn the image-to-image translation problem into a balanced, multi-class, and conditional translation problem.
We show that BalaGAN outperforms strong baselines of both unconditioned and style-transfer-based image-to-image translation methods.
arXiv Detail & Related papers (2020-10-05T14:16:41Z)
- Crossing-Domain Generative Adversarial Networks for Unsupervised Multi-Domain Image-to-Image Translation [12.692904507625036]
We propose a general framework for unsupervised image-to-image translation across multiple domains.
Our proposed framework consists of a pair of encoders and a pair of GANs that learn high-level features across different domains to generate diverse and realistic samples.
arXiv Detail & Related papers (2020-08-27T01:54:07Z)
- Retrieval Guided Unsupervised Multi-domain Image-to-Image Translation [59.73535607392732]
Image to image translation aims to learn a mapping that transforms an image from one visual domain to another.
We propose the use of an image retrieval system to assist the image-to-image translation task.
arXiv Detail & Related papers (2020-08-11T20:11:53Z)
- Contrastive Learning for Unpaired Image-to-Image Translation [64.47477071705866]
In image-to-image translation, each patch in the output should reflect the content of the corresponding patch in the input, independent of domain.
We propose a framework based on contrastive learning to maximize mutual information between the two.
We demonstrate that our framework enables one-sided translation in the unpaired image-to-image translation setting, while improving quality and reducing training time.
arXiv Detail & Related papers (2020-07-30T17:59:58Z)
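The patchwise mutual-information objective described in the contrastive-learning entry above is commonly realized as an InfoNCE loss over matched patches. The following is a hypothetical minimal sketch of that general idea, not the implementation from any of the listed papers; function and parameter names are made up for illustration.

```python
import numpy as np

def patch_nce_loss(query, positive, negatives, tau=0.07):
    """InfoNCE over image patches: pull the output patch (query)
    toward its corresponding input patch (positive) and push it away
    from other patches from the same image (negatives).
    All vectors are assumed unit-normalized; tau is a temperature.
    Hypothetical sketch of the patchwise contrastive idea."""
    # Similarity logits: positive pair first, then all negative pairs.
    logits = np.concatenate(([query @ positive], negatives @ query)) / tau
    logits = logits - logits.max()              # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))             # cross-entropy with the positive as class 0
```

A well-trained translator makes the corresponding patch pair most similar, driving this loss toward zero, while mismatched patches yield a large loss.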
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.