Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors
- URL: http://arxiv.org/abs/2011.01215v4
- Date: Sun, 18 Apr 2021 05:48:17 GMT
- Title: Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors
- Authors: Qi Mao, Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang, Siwei Ma, Ming-Hsuan Yang
- Abstract summary: We present an effective signed attribute vector, which enables continuous translation along diverse mapping paths across various domains.
To enhance the visual quality of continuous translation results, we generate a trajectory between two sign-symmetrical attribute vectors.
- Score: 120.13149176992896
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent image-to-image (I2I) translation algorithms focus on learning the
mapping from a source to a target domain. However, the continuous translation
problem that synthesizes intermediate results between two domains has not been
well-studied in the literature. Generating a smooth sequence of intermediate
results bridges the gap of two different domains, facilitating the morphing
effect across domains. Existing I2I approaches are limited to either
intra-domain or deterministic inter-domain continuous translation. In this
work, we present an effective signed attribute vector, which enables
continuous translation along diverse mapping paths across various domains. In
particular, we introduce a unified attribute space shared by all domains,
which uses the sign operation to encode domain information, thereby allowing
interpolation between attribute vectors of different domains. To enhance the
visual quality of continuous translation results, we generate a trajectory
between two sign-symmetrical attribute vectors and leverage the domain
information of the interpolated results along the trajectory for adversarial
training. We evaluate the proposed method on a wide range of I2I translation
tasks. Both qualitative and quantitative results demonstrate that the proposed
framework generates higher-quality continuous translation results than
state-of-the-art methods.
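To make the mechanism concrete, below is a minimal NumPy sketch of the sign-encoded attribute interpolation described above. The helper names, the attribute dimensionality, and the purely linear trajectory are illustrative assumptions; the paper's actual encoder, generator, and adversarial training procedure are omitted.

```python
# A minimal sketch of sign-encoded attribute interpolation, assuming a
# simple linear trajectory; names and dimensions here are illustrative,
# not the paper's implementation.
import numpy as np

def signed_attribute(magnitude: np.ndarray, domain_sign: int) -> np.ndarray:
    """Encode domain membership in the sign of a shared attribute vector."""
    assert domain_sign in (-1, +1)
    return domain_sign * np.abs(magnitude)

def interpolation_trajectory(v_src: np.ndarray, v_tgt: np.ndarray, steps: int):
    """Linearly interpolate between two signed attribute vectors.

    For a sign-symmetrical pair (v_tgt == -v_src), the trajectory crosses
    the domain boundary, yielding a continuous inter-domain morphing path.
    """
    for t in np.linspace(0.0, 1.0, steps):
        yield (1.0 - t) * v_src + t * v_tgt

# Example: an 8-dimensional attribute vector; domain A is +, domain B is -.
rng = np.random.default_rng(0)
a = rng.standard_normal(8)
v_a = signed_attribute(a, +1)   # source-domain attribute vector
v_b = -v_a                      # its sign-symmetrical counterpart
frames = list(interpolation_trajectory(v_a, v_b, steps=5))
# Per the abstract, each interpolated vector would be fed to the generator
# (together with the source content feature) to render one intermediate
# frame, and its domain information would supply the adversarial target.
```

Under these assumptions, sampling different magnitude vectors yields different endpoints and hence diverse mapping paths between the same pair of domains, matching the diversity claim in the abstract.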
Related papers
- Smooth image-to-image translations with latent space interpolations [64.8170758294427] (2022-10-03)
  Multi-domain image-to-image (I2I) translations can transform a source image according to the style of a target domain. We show that our regularization techniques can improve state-of-the-art I2I translations by a large margin.
- Joint Attention-Driven Domain Fusion and Noise-Tolerant Learning for Multi-Source Domain Adaptation [2.734665397040629] (2022-08-05)
  Multi-source Unsupervised Domain Adaptation transfers knowledge from multiple source domains with labeled data to an unlabeled target domain. The distribution discrepancy between domains and the noisy pseudo-labels in the target domain both lead to performance bottlenecks. We propose an approach that integrates Attention-driven Domain fusion and Noise-Tolerant learning (ADNT) to address these two issues.
- Multi-domain Unsupervised Image-to-Image Translation with Appearance Adaptive Convolution [62.4972011636884] (2022-02-06)
  We propose a novel multi-domain unsupervised image-to-image translation (MDUIT) framework that exploits the decomposed content feature and appearance adaptive convolution to translate an image into a target appearance. The proposed method produces visually diverse and plausible results in multiple domains compared to state-of-the-art methods.
- Disentangled Unsupervised Image Translation via Restricted Information Flow [61.44666983942965] (2021-11-26)
  Many state-of-the-art methods hard-code the desired shared-vs-specific split into their architecture. We propose a new method that does not rely on inductive architectural biases and achieves consistently high manipulation accuracy across two synthetic datasets and one natural dataset.
- Smoothing the Disentangled Latent Style Space for Unsupervised Image-to-Image Translation [56.55178339375146] (2021-06-16)
  Image-to-Image (I2I) multi-domain translation models are usually also evaluated on the quality of their semantic results. We propose a new training protocol based on three specific losses that help a translation network learn a smooth and disentangled latent style space.
- Learning Unsupervised Cross-domain Image-to-Image Translation Using a Shared Discriminator [2.1377923666134118] (2021-02-09)
  Unsupervised image-to-image translation transforms images from a source domain into images in a target domain without using source-target image pairs. We propose a new method that uses a single discriminator shared between the two GANs, improving overall efficacy. Even without attention mechanisms, our method performs on par with attention-based methods and generates images of comparable quality.
- CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583] (2020-01-09)
  Unsupervised domain adaptation algorithms aim to transfer knowledge learned in one domain to another. We present a novel pixel-wise adversarial domain adaptation algorithm.
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the accuracy of the information and is not responsible for any consequences arising from its use.