Multi-domain Unsupervised Image-to-Image Translation with Appearance
Adaptive Convolution
- URL: http://arxiv.org/abs/2202.02779v1
- Date: Sun, 6 Feb 2022 14:12:34 GMT
- Title: Multi-domain Unsupervised Image-to-Image Translation with Appearance
Adaptive Convolution
- Authors: Somi Jeong, Jiyoung Lee, Kwanghoon Sohn
- Abstract summary: We propose a novel multi-domain unsupervised image-to-image translation (MDUIT) framework.
We exploit the decomposed content feature and appearance adaptive convolution to translate an image into a target appearance.
We show that the proposed method produces visually diverse and plausible results in multiple domains compared to the state-of-the-art methods.
- Score: 62.4972011636884
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the past few years, image-to-image (I2I) translation methods have been
proposed to translate a given image into diverse outputs. Despite the
impressive results, they mainly focus on I2I translation between two
domains, so multi-domain I2I translation remains a challenge. To
address this problem, we propose a novel multi-domain unsupervised
image-to-image translation (MDUIT) framework that leverages the decomposed
content feature and appearance adaptive convolution to translate an image into
a target appearance while preserving the given geometric content. We also
exploit a contrastive learning objective, which improves disentanglement
and effectively utilizes multi-domain image data during training by
pairing semantically similar images. This allows our method to
learn the diverse mappings between multiple visual domains with only a single
framework. We show that the proposed method produces visually diverse and
plausible results in multiple domains compared to the state-of-the-art methods.
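
To make the two core ideas concrete, here is a minimal PyTorch sketch of (i) an appearance adaptive convolution whose per-sample kernels are predicted from an appearance code, and (ii) an InfoNCE-style contrastive loss over paired features. All module names, shapes, and hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AppearanceAdaptiveConv(nn.Module):
    """Depthwise conv whose kernels are predicted per sample from an appearance code.

    Illustrative sketch only: the paper's actual layer may differ in structure."""

    def __init__(self, channels: int, app_dim: int, kernel_size: int = 3):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        # Maps an appearance code to one depthwise kernel per channel.
        self.kernel_net = nn.Linear(app_dim, channels * kernel_size * kernel_size)

    def forward(self, content: torch.Tensor, app_code: torch.Tensor) -> torch.Tensor:
        b, c, h, w = content.shape
        k = self.kernel_size
        kernels = self.kernel_net(app_code).view(b * c, 1, k, k)
        # Grouped conv applies each sample's predicted kernels to its own feature map.
        out = F.conv2d(content.reshape(1, b * c, h, w), kernels,
                       padding=k // 2, groups=b * c)
        return out.view(b, c, h, w)


def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Contrastive (InfoNCE) loss: row i of `positive` is the positive for
    row i of `anchor`; the other rows in the batch act as negatives."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    logits = anchor @ positive.t() / temperature      # (B, B) cosine similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)
```

In this sketch, a translator would apply AppearanceAdaptiveConv to the decomposed content feature, with the appearance code extracted from a target-domain image; the contrastive loss would pull features of semantically paired images together while pushing apart the rest of the batch.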
Related papers
- SCONE-GAN: Semantic Contrastive learning-based Generative Adversarial Network for an end-to-end image translation [18.93434486338439] (arXiv 2023-11-07T10:29:16Z)
SCONE-GAN is shown to be effective for learning to generate realistic and diverse scenery images.
For more realistic and diverse image generation, we introduce a style reference image.
We validate the proposed algorithm for image-to-image translation and stylizing outdoor images.
- Separating Content and Style for Unsupervised Image-to-Image Translation [20.44733685446886] (arXiv 2021-10-27T12:56:50Z)
Unsupervised image-to-image translation aims to learn the mapping between two visual domains with unpaired samples.
We propose to separate the content code and style code simultaneously in a unified framework.
Based on the correlation between the latent features and the high-level domain-invariant tasks, the proposed framework demonstrates superior performance.
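
A rough illustration of the simultaneous separation described in the entry above, under assumed architectural details not taken from the paper: a shared backbone emits a spatial content code while a pooled head emits a global style vector.

```python
import torch
import torch.nn as nn

class ContentStyleEncoder(nn.Module):
    """Splits an image into a spatial content code and a global style code."""

    def __init__(self, in_ch: int = 3, content_ch: int = 256, style_dim: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 64, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, content_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Style is pooled to a single vector, discarding spatial layout.
        self.style_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(content_ch, style_dim),
        )

    def forward(self, x: torch.Tensor):
        content = self.backbone(x)           # (B, content_ch, H/2, W/2)
        style = self.style_head(content)     # (B, style_dim)
        return content, style
```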
- Unaligned Image-to-Image Translation by Learning to Reweight [40.93678165567824] (arXiv 2021-09-24T04:08:22Z)
Unsupervised image-to-image translation aims at learning the mapping from the source to target domain without using paired images for training.
An essential yet restrictive assumption for unsupervised image translation is that the two domains are aligned.
We propose to select images based on importance reweighting and develop a method to learn the weights and perform translation simultaneously and automatically.
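
A minimal sketch of the reweighting idea in the entry above, with an assumed weight network and normalization (not the paper's exact formulation):

```python
import torch
import torch.nn as nn

class ImportanceWeighter(nn.Module):
    """Predicts a normalized importance weight for each sample in a batch."""

    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Softmax over the batch keeps weights positive and summing to one,
        # letting well-aligned samples dominate the translation loss.
        return torch.softmax(self.net(feats).squeeze(-1), dim=0)


def weighted_translation_loss(per_sample_loss: torch.Tensor,
                              weights: torch.Tensor) -> torch.Tensor:
    """per_sample_loss and weights are (B,) tensors; weights sum to one."""
    return (weights * per_sample_loss).sum()
```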
- Smoothing the Disentangled Latent Style Space for Unsupervised Image-to-Image Translation [56.55178339375146] (arXiv 2021-06-16T17:58:21Z)
Image-to-Image (I2I) multi-domain translation models are usually also evaluated using the quality of their semantic results.
We propose a new training protocol based on three specific losses which help a translation network to learn a smooth and disentangled latent style space.
- Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors [120.13149176992896] (arXiv 2020-11-02T18:59:03Z)
We present an effectively signed attribute vector, which enables continuous translation on diverse mapping paths across various domains.
To enhance the visual quality of continuous translation results, we generate a trajectory between two sign-symmetrical attribute vectors.
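
A tiny sketch of that trajectory idea, assuming simple linear interpolation between the sign-symmetrical vectors -v and +v; the paper's actual trajectory construction may differ.

```python
import torch

def attribute_trajectory(v: torch.Tensor, steps: int = 8) -> torch.Tensor:
    """Interpolate `steps` attribute vectors along the path from -v to +v.

    v: (attr_dim,) signed attribute vector. Returns (steps, attr_dim)."""
    alphas = torch.linspace(-1.0, 1.0, steps).unsqueeze(1)  # (steps, 1)
    return alphas * v.unsqueeze(0)                          # (steps, attr_dim)
```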
- Retrieval Guided Unsupervised Multi-domain Image-to-Image Translation [59.73535607392732] (arXiv 2020-08-11T20:11:53Z)
Image-to-image translation aims to learn a mapping that transforms an image from one visual domain to another.
We propose the use of an image retrieval system to assist the image-to-image translation task.
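
One plausible form of such retrieval assistance, sketched as cosine-similarity nearest-neighbor search in an assumed shared feature space:

```python
import torch
import torch.nn.functional as F

def retrieve_exemplar(query_feat: torch.Tensor, gallery_feats: torch.Tensor) -> int:
    """Return the index of the gallery image most similar to the query.

    query_feat: (D,) feature of the source image; gallery_feats: (N, D)
    features of candidate target-domain images."""
    q = F.normalize(query_feat, dim=0)
    g = F.normalize(gallery_feats, dim=1)
    return int(torch.argmax(g @ q))   # cosine-similarity nearest neighbor
```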
- Cross-domain Correspondence Learning for Exemplar-based Image Translation [59.35767271091425] (arXiv 2020-04-12T09:10:57Z)
We present a framework for exemplar-based image translation, which synthesizes a photo-realistic image from the input in a distinct domain.
The output has the style (e.g., color, texture) in consistency with the semantically corresponding objects in the exemplar.
We show that our method significantly outperforms state-of-the-art methods in terms of image quality.
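
A compact sketch of the exemplar-warping idea behind such correspondence learning, assuming input and exemplar features share a resolution; the attention form is illustrative, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def warp_exemplar(feat_in: torch.Tensor, feat_ex: torch.Tensor,
                  exemplar: torch.Tensor, tau: float = 0.01) -> torch.Tensor:
    """Warp `exemplar` toward the input via dense semantic correspondence.

    feat_in, feat_ex: (B, C, H, W) semantic features of input and exemplar;
    exemplar: (B, 3, H, W) image, assumed at the feature resolution."""
    b, _, h, w = feat_in.shape
    fi = F.normalize(feat_in.flatten(2), dim=1)        # (B, C, HW)
    fe = F.normalize(feat_ex.flatten(2), dim=1)        # (B, C, HW)
    corr = torch.bmm(fi.transpose(1, 2), fe) / tau     # (B, HW_in, HW_ex)
    attn = corr.softmax(dim=-1)                        # correspondence weights
    warped = torch.bmm(exemplar.flatten(2),            # (B, 3, HW_ex)
                       attn.transpose(1, 2))           # -> (B, 3, HW_in)
    return warped.view(b, 3, h, w)
```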
- GMM-UNIT: Unsupervised Multi-Domain and Multi-Modal Image-to-Image Translation via Attribute Gaussian Mixture Modeling [66.50914391679375] (arXiv 2020-03-15T10:18:56Z)
Unsupervised image-to-image translation (UNIT) aims at learning a mapping between several visual domains by using unpaired training images.
Recent studies have shown remarkable success for multiple domains but they suffer from two main limitations.
We propose a method named GMM-UNIT, which is based on a content-attribute disentangled representation where the space is fitted with a GMM.
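
A minimal sketch of drawing a domain-specific attribute code from one Gaussian component of such a mixture, with placeholder (untrained) parameters:

```python
import torch

def sample_attribute(domain: int, means: torch.Tensor,
                     stds: torch.Tensor) -> torch.Tensor:
    """Draw z ~ N(mu_k, sigma_k^2) for the Gaussian component of one domain.

    means, stds: (num_domains, attr_dim) placeholder GMM parameters."""
    return means[domain] + stds[domain] * torch.randn_like(means[domain])
```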
This list is automatically generated from the titles and abstracts of the papers in this site.