Generative Transition Mechanism to Image-to-Image Translation via
Encoded Transformation
- URL: http://arxiv.org/abs/2103.05193v1
- Date: Tue, 9 Mar 2021 02:56:03 GMT
- Title: Generative Transition Mechanism to Image-to-Image Translation via
Encoded Transformation
- Authors: Yaxin Shi, Xiaowei Zhou, Ping Liu, Ivor Tsang
- Abstract summary: We revisit the Image-to-Image (I2I) translation problem with transition consistency.
Existing I2I translation models mainly focus on maintaining consistency on results.
We propose to enforce both result consistency and transition consistency for I2I translation.
- Score: 40.11493448767101
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we revisit the Image-to-Image (I2I) translation problem with
transition consistency, namely the consistency defined on the conditional data
mapping between each data pair. Explicitly parameterizing each data mapping
with a transition variable $t$, i.e., $x \overset{t(x,y)}{\mapsto}y$, we
discover that existing I2I translation models mainly focus on maintaining
consistency on results, e.g., image reconstruction or attribute prediction,
which we call result consistency in this paper. This restricts their ability to
generalize to unseen transitions in the test phase. Consequently, we propose to
enforce both result consistency and transition consistency for I2I translation,
so that the problem benefits from a closer consistency between the input and
output. To improve the generalization ability of the translation model, we
propose transition encoding to facilitate explicit regularization of these two
kinds of consistencies on unseen transitions. We further generalize such
explicitly regularized consistencies to the distribution level, thus
facilitating a generalized overall consistency for I2I translation problems.
With the above design, our proposed model, named Transition Encoding GAN
(TEGAN), possesses a strong generalization ability and generates realistic and
semantically consistent translation results under unseen transitions in the
test phase. It also provides a unified understanding of existing GAN-based I2I
translation models through our explicit modeling of the data mapping, i.e., the
transition. Experiments on four different I2I translation tasks demonstrate the
efficacy and generality of TEGAN.
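To make the two consistencies concrete, here is a minimal, hypothetical sketch of how they could be instantiated; it is not the authors' TEGAN implementation. It assumes a transition encoder E(x, y) producing a code t, a conditional generator G(x, t), and plain L1 penalties: result consistency asks G(x, t) to match y, while transition consistency asks that re-encoding the pair (x, G(x, t)) recovers the same t. All architectures, dimensions, and loss weights below are illustrative placeholders.

```python
# Minimal sketch of result consistency vs. transition consistency, assuming a
# transition encoder E(x, y) -> t and a generator G(x, t) -> y_hat with simple
# L1 penalties. Illustrative reading of the abstract, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransitionEncoder(nn.Module):
    """Encodes an (input, target) pair into a transition code t(x, y)."""
    def __init__(self, channels=3, t_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, t_dim),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

class Generator(nn.Module):
    """Translates x conditioned on a transition code t."""
    def __init__(self, channels=3, t_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + t_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, t):
        # Tile the transition code over the spatial grid and concatenate with x.
        t_map = t[:, :, None, None].expand(-1, -1, x.shape[2], x.shape[3])
        return self.net(torch.cat([x, t_map], dim=1))

def consistency_losses(E, G, x, y):
    t = E(x, y)                                   # transition code for the pair
    y_hat = G(x, t)                               # x --t--> y_hat
    result_loss = F.l1_loss(y_hat, y)             # result consistency
    transition_loss = F.l1_loss(E(x, y_hat), t)   # transition consistency
    return result_loss, transition_loss

# Toy usage with random tensors standing in for an aligned image pair.
E, G = TransitionEncoder(), Generator()
x, y = torch.randn(2, 3, 32, 32), torch.randn(2, 3, 32, 32)
res, trans = consistency_losses(E, G, x, y)
(res + trans).backward()
```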
Related papers
- Hierarchy Flow For High-Fidelity Image-to-Image Translation [38.87847690777645]
We propose a novel flow-based model to achieve better content preservation during translation.
Our approach achieves state-of-the-art performance, with convincing advantages in both strong- and normal-fidelity tasks.
arXiv Detail & Related papers (2023-08-14T03:11:17Z)
- UTSGAN: Unseen Transition Suss GAN for Transition-Aware Image-to-image Translation [57.99923293611923]
We introduce a transition-aware approach to I2I translation, where the data translation mapping is explicitly parameterized with a transition variable.
We propose the use of transition consistency, defined on the transition variable, to enable regularization of consistency on unobserved translations.
Based on these insights, we present Unseen Transition Suss GAN (UTSGAN), a generative framework that constructs a manifold for the transition with a transition encoder.
arXiv Detail & Related papers (2023-04-24T09:47:34Z)
- Guided Image-to-Image Translation by Discriminator-Generator Communication [71.86347329356244]
The goal of Image-to-image (I2I) translation is to transfer an image from a source domain to a target domain.
One major branch of this research formulates I2I translation based on Generative Adversarial Networks (GANs).
arXiv Detail & Related papers (2023-03-07T02:29:36Z)
- ParGAN: Learning Real Parametrizable Transformations [50.51405390150066]
We propose ParGAN, a generalization of the cycle-consistent GAN framework to learn image transformations.
The proposed generator takes as input both an image and a parametrization of the transformation.
We show how, with disjoint image domains and no annotated parametrization, our framework can create smooth interpolations as well as learn multiple transformations simultaneously.
arXiv Detail & Related papers (2022-11-09T16:16:06Z)
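As a toy illustration of the idea in the ParGAN entry above (a generator conditioned on an explicit transformation parametrization), the hypothetical sketch below tiles a scalar parameter over the image and sweeps it to produce an interpolation sequence; it is not ParGAN's actual architecture.

```python
# Hypothetical sketch of a generator conditioned on a transformation parameter;
# sweeping the parameter yields an interpolation. Not ParGAN's actual model.
import torch
import torch.nn as nn

class ParametrizedGenerator(nn.Module):
    def __init__(self, channels=3, param_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + param_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, params):
        # Tile the transformation parameters over the spatial grid.
        p = params[:, :, None, None].expand(-1, -1, *image.shape[2:])
        return self.net(torch.cat([image, p], dim=1))

g = ParametrizedGenerator()
x = torch.randn(1, 3, 64, 64)
# Sweep the parameter from 0 to 1 to obtain a sequence of translated images.
frames = [g(x, a.view(1, 1)) for a in torch.linspace(0.0, 1.0, 5)]
```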
- Beyond Deterministic Translation for Unsupervised Domain Adaptation [19.358300726820943]
In this work we challenge the common approach of using a one-to-one mapping ("translation") between the source and target domains in unsupervised domain adaptation (UDA).
Instead, we rely on translation to capture inherent ambiguities between the source and target domains.
We report improvements over strong recent baselines, leading to state-of-the-art UDA results on two challenging semantic segmentation benchmarks.
arXiv Detail & Related papers (2022-02-15T23:03:33Z)
- Unbalanced Feature Transport for Exemplar-based Image Translation [51.54421432912801]
This paper presents a general image translation framework that incorporates optimal transport for feature alignment between conditional inputs and style exemplars in image translation.
We show that our method achieves superior image translation qualitatively and quantitatively as compared with the state-of-the-art.
arXiv Detail & Related papers (2021-06-19T12:07:48Z)
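The entry above mentions optimal transport for aligning conditional-input features with style-exemplar features. As a generic, hypothetical illustration (not that paper's actual formulation), the sketch below computes an entropic-OT (Sinkhorn) plan between two small feature sets and uses it to align exemplar features.

```python
# Generic entropic optimal-transport (Sinkhorn) alignment between two feature
# sets; an illustrative sketch only, not the cited paper's formulation.
import torch

def sinkhorn_plan(f_src, f_tgt, eps=0.1, iters=200):
    """Transport plan between (n, d) and (m, d) feature sets, uniform marginals."""
    cost = torch.cdist(f_src, f_tgt) ** 2
    cost = cost / cost.mean()                       # scale-free cost for stability
    k = torch.exp(-cost / eps)                      # Gibbs kernel
    a = torch.full((f_src.shape[0],), 1.0 / f_src.shape[0])
    b = torch.full((f_tgt.shape[0],), 1.0 / f_tgt.shape[0])
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(iters):                          # alternating scaling updates
        u = a / (k @ v)
        v = b / (k.T @ u)
    return u[:, None] * k * v[None, :]              # (n, m) transport plan

src = torch.randn(16, 64)                           # e.g. conditional-input features
tgt = torch.randn(24, 64)                           # e.g. style-exemplar features
plan = sinkhorn_plan(src, tgt)
aligned = (plan / plan.sum(dim=1, keepdim=True)) @ tgt   # barycentric alignment
```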
- Smoothing the Disentangled Latent Style Space for Unsupervised Image-to-Image Translation [56.55178339375146]
Image-to-Image (I2I) multi-domain translation models are usually also evaluated on the quality of their semantic results.
We propose a new training protocol based on three specific losses which help a translation network to learn a smooth and disentangled latent style space.
arXiv Detail & Related papers (2021-06-16T17:58:21Z)
- Augmented Cyclic Consistency Regularization for Unpaired Image-to-Image Translation [22.51574923085135]
Augmented Cyclic Consistency Regularization (ACCR) is a novel regularization method for unpaired I2I translation.
Our method outperforms the consistency regularized GAN (CR-GAN) in real-world translations.
arXiv Detail & Related papers (2020-02-29T06:20:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.