Marginal Contrastive Correspondence for Guided Image Generation
- URL: http://arxiv.org/abs/2204.00442v1
- Date: Fri, 1 Apr 2022 13:55:44 GMT
- Title: Marginal Contrastive Correspondence for Guided Image Generation
- Authors: Fangneng Zhan, Yingchen Yu, Rongliang Wu, Jiahui Zhang, Shijian Lu,
Changgong Zhang
- Abstract summary: Exemplar-based image translation establishes dense correspondences between a conditional input and an exemplar from two different domains.
Existing work builds the cross-domain correspondences implicitly by minimizing feature-wise distances across the two domains.
We design a Marginal Contrastive Learning Network (MCL-Net) that explores contrastive learning to learn domain-invariant features for realistic exemplar-based image translation.
- Score: 58.0605433671196
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Exemplar-based image translation establishes dense correspondences between a
conditional input and an exemplar (from two different domains) for leveraging
detailed exemplar styles to achieve realistic image translation. Existing work
builds the cross-domain correspondences implicitly by minimizing feature-wise
distances across the two domains. Without explicit exploitation of
domain-invariant features, this approach may not reduce the domain gap
effectively, which often leads to sub-optimal correspondences and image
translation. We design a Marginal Contrastive Learning Network (MCL-Net) that
explores contrastive learning to learn domain-invariant features for realistic
exemplar-based image translation. Specifically, we design an innovative
marginal contrastive loss that guides the network to establish dense correspondences
explicitly. Nevertheless, building correspondence with domain-invariant
semantics alone may impair the texture patterns and lead to degraded texture
generation. We thus design a Self-Correlation Map (SCM) that incorporates scene
structures as auxiliary information, which improves the built correspondences
substantially. Quantitative and qualitative experiments on multifarious image
translation tasks show that the proposed method outperforms the
state-of-the-art consistently.
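
The two components named in the abstract lend themselves to a short illustration. Below is a minimal, hedged PyTorch sketch of (a) a margin-augmented InfoNCE-style contrastive loss over paired cross-domain features and (b) a naive self-correlation descriptor; the function names, the `margin` and `tau` hyperparameters, and the row-wise positive-pairing convention are assumptions made for this sketch and are not taken from the authors' released implementation.

```python
# Illustrative sketch (assumptions, not the authors' code): a margin-augmented
# InfoNCE-style loss that pulls matched cross-domain positions together, and a
# naive self-correlation map encoding scene structure independently of appearance.
import torch
import torch.nn.functional as F


def marginal_contrastive_loss(feat_a: torch.Tensor,
                              feat_b: torch.Tensor,
                              margin: float = 0.5,
                              tau: float = 0.07) -> torch.Tensor:
    """feat_a, feat_b: (N, C) features from the conditional input and the
    exemplar; row i of feat_a is assumed to match row i of feat_b (positive
    pair), all other rows act as negatives. The margin is subtracted from the
    positive logits so matched pairs must beat negatives by at least `margin`
    in cosine similarity."""
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    logits = a @ b.t() / tau                         # (N, N) scaled cosine similarities
    eye = torch.eye(a.size(0), device=a.device)
    logits = logits - eye * (margin / tau)           # apply the margin to positive pairs only
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)


def self_correlation_map(feat: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W). Returns a (B, H*W, H, W) map where channel j holds
    the cosine similarity of every spatial position to position j, i.e. a
    structural descriptor that can accompany semantic features before matching."""
    b, c, h, w = feat.shape
    f = F.normalize(feat.flatten(2), dim=1)          # (B, C, H*W), unit-norm per position
    corr = torch.einsum('bci,bcj->bji', f, f)        # (B, H*W, H*W) pairwise similarities
    return corr.view(b, h * w, h, w)


if __name__ == "__main__":
    # Toy usage: 8x8 feature maps from two domains, flattened to per-position rows.
    fa = torch.randn(4, 64, 8, 8)
    fb = torch.randn(4, 64, 8, 8)
    loss = marginal_contrastive_loss(
        fa.flatten(2).permute(0, 2, 1).reshape(-1, 64),
        fb.flatten(2).permute(0, 2, 1).reshape(-1, 64))
    scm = self_correlation_map(fa)
    print(loss.item(), scm.shape)                    # scalar loss, (4, 64, 8, 8)
```

The margin term is one simple way to force matched cross-domain features to be separated from negatives by a fixed gap; the paper's exact loss formulation and how the SCM is fused with semantic features may differ.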
Related papers
- Diffusion-based Image Translation with Label Guidance for Domain
Adaptive Semantic Segmentation [35.44771460784343]
Translating images from a source domain to a target domain for learning target models is one of the most common strategies in domain adaptive semantic segmentation (DASS).
Existing methods still struggle to preserve semantically-consistent local details between the original and translated images.
We present an innovative approach that addresses this challenge by using source-domain labels as explicit guidance during image translation.
arXiv Detail & Related papers (2023-08-23T18:01:01Z)
- Unsupervised Domain Adaptation for Semantic Segmentation using One-shot Image-to-Image Translation via Latent Representation Mixing [9.118706387430883]
We propose a new unsupervised domain adaptation method for the semantic segmentation of very high resolution images.
An image-to-image translation paradigm is proposed, based on an encoder-decoder principle where latent content representations are mixed across domains.
Cross-city comparative experiments have shown that the proposed method outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2022-12-07T18:16:17Z)
- Smooth image-to-image translations with latent space interpolations [64.8170758294427]
Multi-domain image-to-image (I2I) translations can transform a source image according to the style of a target domain.
We show that our regularization techniques can improve the state-of-the-art I2I translations by a large margin.
arXiv Detail & Related papers (2022-10-03T11:57:30Z)
- Unbalanced Feature Transport for Exemplar-based Image Translation [51.54421432912801]
This paper presents a general image translation framework that incorporates optimal transport for feature alignment between conditional inputs and style exemplars in image translation.
We show that our method achieves superior image translation qualitatively and quantitatively as compared with the state-of-the-art.
arXiv Detail & Related papers (2021-06-19T12:07:48Z)
- Smoothing the Disentangled Latent Style Space for Unsupervised Image-to-Image Translation [56.55178339375146]
Image-to-Image (I2I) multi-domain translation models are usually also evaluated on the quality of their semantic results.
We propose a new training protocol based on three specific losses which help a translation network to learn a smooth and disentangled latent style space.
arXiv Detail & Related papers (2021-06-16T17:58:21Z)
- Semantically Adaptive Image-to-image Translation for Domain Adaptation of Semantic Segmentation [1.8275108630751844]
We address the problem of domain adaptation for semantic segmentation of street scenes.
Many state-of-the-art approaches focus on translating the source image while imposing that the result should be semantically consistent with the input.
We advocate that the image semantics can also be exploited to guide the translation algorithm.
arXiv Detail & Related papers (2020-09-02T16:16:50Z)
- Cross-domain Correspondence Learning for Exemplar-based Image Translation [59.35767271091425]
We present a framework for exemplar-based image translation, which synthesizes a photo-realistic image from the input in a distinct domain.
The style of the output (e.g., color, texture) is consistent with the semantically corresponding objects in the exemplar.
We show that our method significantly outperforms state-of-the-art methods in terms of image quality.
arXiv Detail & Related papers (2020-04-12T09:10:57Z)
- CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)