Smoothing the Disentangled Latent Style Space for Unsupervised
Image-to-Image Translation
- URL: http://arxiv.org/abs/2106.09016v1
- Date: Wed, 16 Jun 2021 17:58:21 GMT
- Authors: Yahui Liu, Enver Sangineto, Yajing Chen, Linchao Bao, Haoxian Zhang,
Nicu Sebe, Bruno Lepri, Wei Wang and Marco De Nadai
- Abstract summary: Image-to-Image (I2I) multi-domain translation models are often also evaluated using the quality of their semantic interpolation results.
We propose a new training protocol based on three specific losses that help a translation network learn a smooth and disentangled latent style space.
- Score: 56.55178339375146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-to-Image (I2I) multi-domain translation models are usually
also evaluated using the quality of their semantic interpolation results. However,
state-of-the-art models frequently show abrupt changes in the image appearance
during interpolation, and usually perform poorly in interpolations across
domains. In this paper, we propose a new training protocol based on three
specific losses that help a translation network learn a smooth and
disentangled latent style space in which: 1) Both intra- and inter-domain
interpolations correspond to gradual changes in the generated images and 2) The
content of the source image is better preserved during the translation.
Moreover, we propose a novel evaluation metric to properly measure the
smoothness of the latent style space of I2I translation models. The proposed
method can be plugged into existing translation approaches, and our extensive
experiments on different datasets show that it can significantly boost the
quality of the generated images and the smoothness of the interpolations.
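The paper's three losses and its metric are not detailed in this summary, so the following is only a minimal sketch of the two underlying ideas: interpolating linearly between two style codes in the latent style space, and quantifying interpolation smoothness as the mean perceptual (LPIPS) distance between consecutive generated frames. The generator `G(content, style)` and the style codes `s_src`/`s_tgt` are hypothetical placeholders, and LPIPS is a stand-in for the paper's actual evaluation metric.

```python
# Minimal sketch, assuming a trained style-based generator G(content, style).
# The paper's actual losses and smoothness metric are not reproduced here;
# LPIPS is used as a generic perceptual-distance proxy. A smoother style
# space should yield small, evenly sized perceptual steps along the path.
import torch
import lpips  # pip install lpips

@torch.no_grad()
def interpolate_styles(G, content, s_src, s_tgt, steps=10):
    """Generate images along the straight line between two style codes."""
    alphas = torch.linspace(0.0, 1.0, steps, device=s_src.device)
    return [G(content, (1 - a) * s_src + a * s_tgt) for a in alphas]

@torch.no_grad()
def interpolation_smoothness(frames, loss_fn=None):
    """Mean LPIPS distance between consecutive frames (lower = smoother)."""
    loss_fn = loss_fn or lpips.LPIPS(net='alex')
    dists = [loss_fn(a, b).item() for a, b in zip(frames[:-1], frames[1:])]
    return sum(dists) / len(dists)
```

For inter-domain interpolation, `s_src` and `s_tgt` would come from different domains; an abrupt appearance change then shows up as one consecutive LPIPS step that is much larger than the others.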
Related papers
- Smooth image-to-image translations with latent space interpolations [64.8170758294427]
Multi-domain image-to-image (I2I) translations can transform a source image according to the style of a target domain.
We show that our regularization techniques can improve state-of-the-art I2I translations by a large margin.
arXiv Detail & Related papers (2022-10-03T11:57:30Z)
- Marginal Contrastive Correspondence for Guided Image Generation [58.0605433671196]
Exemplar-based image translation establishes dense correspondences between a conditional input and an exemplar from two different domains.
Existing work builds the cross-domain correspondences implicitly by minimizing feature-wise distances across the two domains.
We design a Marginal Contrastive Learning Network (MCL-Net) that exploits contrastive learning to learn domain-invariant features for realistic exemplar-based image translation.
arXiv Detail & Related papers (2022-04-01T13:55:44Z)
- Multi-domain Unsupervised Image-to-Image Translation with Appearance Adaptive Convolution [62.4972011636884]
We propose a novel multi-domain unsupervised image-to-image translation (MDUIT) framework.
We exploit the decomposed content feature and appearance adaptive convolution to translate an image into a target appearance.
We show that the proposed method produces visually diverse and plausible results in multiple domains compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-02-06T14:12:34Z)
- Flow-based Deformation Guidance for Unpaired Multi-Contrast MRI Image-to-Image Translation [7.8333615755210175]
In this paper, we introduce a novel approach to unpaired image-to-image translation based on an invertible architecture.
We utilize the temporal information between consecutive slices to provide additional constraints on the optimization for transforming one domain to another in unpaired medical images.
arXiv Detail & Related papers (2020-12-03T09:10:22Z)
- Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors [120.13149176992896]
We present an effective signed attribute vector, which enables continuous translation on diverse mapping paths across various domains.
To enhance the visual quality of continuous translation results, we generate a trajectory between two sign-symmetrical attribute vectors.
arXiv Detail & Related papers (2020-11-02T18:59:03Z)
- Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2 Network [73.5062435623908]
We propose a new I2I translation method that generates a new model in the target domain via a series of model transformations.
By feeding the latent vector into the generated model, we can perform I2I translation between the source domain and the target domain.
arXiv Detail & Related papers (2020-10-12T13:51:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.