Unbalanced Feature Transport for Exemplar-based Image Translation
- URL: http://arxiv.org/abs/2106.10482v1
- Date: Sat, 19 Jun 2021 12:07:48 GMT
- Title: Unbalanced Feature Transport for Exemplar-based Image Translation
- Authors: Fangneng Zhan, Yingchen Yu, Kaiwen Cui, Gongjie Zhang, Shijian Lu,
Jianxiong Pan, Changgong Zhang, Feiying Ma, Xuansong Xie, Chunyan Miao
- Abstract summary: This paper presents a general image translation framework that incorporates optimal transport for feature alignment between conditional inputs and style exemplars in image translation.
We show that our method achieves superior image translation qualitatively and quantitatively as compared with the state-of-the-art.
- Score: 51.54421432912801
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the great success of GANs in image translation with different
conditioned inputs such as semantic segmentation and edge maps, generating
high-fidelity realistic images with reference styles remains a grand challenge
in conditional image-to-image translation. This paper presents a general image
translation framework that incorporates optimal transport for feature alignment
between conditional inputs and style exemplars in image translation. The
introduction of optimal transport mitigates the constraint of many-to-one
feature matching significantly while building up accurate semantic
correspondences between conditional inputs and exemplars. We design a novel
unbalanced optimal transport to address the transport between features with
deviational distributions, which exist widely between conditional inputs and
exemplars. In addition, we design a semantic-activation normalization scheme
that injects style features of exemplars into the image translation process
successfully. Extensive experiments over multiple image translation tasks show
that our method achieves superior image translation qualitatively and
quantitatively as compared with the state-of-the-art.
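As a rough illustration of the alignment mechanism described above (not the authors' exact formulation, cost design, or network), the sketch below runs a generic entropic unbalanced Sinkhorn iteration, in which the hard marginal constraints of balanced optimal transport are relaxed by a KL penalty, and then warps exemplar features to the conditional-input positions barycentrically. The cosine cost, uniform marginals, feature shapes, and the function name are illustrative assumptions.

```python
import numpy as np

def unbalanced_sinkhorn(cost, a, b, eps=0.1, rho=1.0, n_iters=200):
    """Entropic unbalanced OT: Sinkhorn iterations with KL-relaxed marginals.

    cost : (n, m) pairwise cost between conditional-input and exemplar features.
    a, b : (n,), (m,) source / target masses (need not match exactly).
    eps  : entropic regularisation; rho : marginal relaxation weight
           (rho -> infinity recovers balanced OT).
    """
    K = np.exp(-cost / eps)                      # Gibbs kernel
    fi = rho / (rho + eps)                       # scaling exponent from the KL relaxation
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):
        u = (a / (K @ v + 1e-16)) ** fi
        v = (b / (K.T @ u + 1e-16)) ** fi
    return u[:, None] * K * v[None, :]           # transport plan P

# Toy example: align exemplar style features to conditional-input positions.
rng = np.random.default_rng(0)
cond = rng.normal(size=(6, 8))                   # hypothetical conditional-input features
exem = rng.normal(size=(10, 8))                  # hypothetical exemplar features
cn = cond / np.linalg.norm(cond, axis=1, keepdims=True)
en = exem / np.linalg.norm(exem, axis=1, keepdims=True)
cost = 1.0 - cn @ en.T                           # cosine distance as transport cost
P = unbalanced_sinkhorn(cost, np.full(6, 1 / 6), np.full(10, 1 / 10))
warped = (P / (P.sum(axis=1, keepdims=True) + 1e-16)) @ exem  # barycentric warp
print(warped.shape)                              # (6, 8): one warped style vector per position
```

Because the marginals are only softly enforced, mass can be created or discarded where the two feature sets disagree, which is the general property the abstract attributes to the unbalanced formulation.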
Related papers
- Optimal Image Transport on Sparse Dictionaries [2.7855886538423182]
We derive a novel optimal image transport algorithm over sparse dictionaries by taking advantage of Sparse Representation (SR) and Optimal Transport (OT).
We demonstrate its versatility and many benefits to different image-to-image translation tasks, in particular image color transform and artistic style transfer, and show plausible results with photo-realistic transfer effects.
arXiv Detail & Related papers (2023-11-03T15:37:01Z) - Improving Diffusion-based Image Translation using Asymmetric Gradient Guidance [51.188396199083336]
We present an approach that guides the reverse process of diffusion sampling by applying asymmetric gradient guidance.
Our model's adaptability allows it to be implemented with both image-fusion and latent-diffusion models.
Experiments show that our method outperforms various state-of-the-art models in image translation tasks.
arXiv Detail & Related papers (2023-06-07T12:56:56Z) - Vector Quantized Image-to-Image Translation [31.65282783830092]
We propose introducing the vector quantization technique into the image-to-image translation framework.
Our framework achieves comparable performance to the state-of-the-art image-to-image translation and image extension methods (a generic vector-quantization sketch follows this list).
arXiv Detail & Related papers (2022-07-27T04:22:29Z) - Pretraining is All You Need for Image-to-Image Translation [59.43151345732397]
We propose to use pretraining to boost general image-to-image translation.
We show that the proposed pretraining-based image-to-image translation (PITI) is capable of synthesizing images of unprecedented realism and faithfulness.
arXiv Detail & Related papers (2022-05-25T17:58:26Z) - Unsupervised Image-to-Image Translation with Generative Prior [103.54337984566877]
Unsupervised image-to-image translation aims to learn the translation between two visual domains without paired data.
We present a novel framework, Generative Prior-guided UNsupervised Image-to-image Translation (GP-UNIT), to improve the overall quality and applicability of the translation algorithm.
arXiv Detail & Related papers (2022-04-07T17:59:23Z) - Marginal Contrastive Correspondence for Guided Image Generation [58.0605433671196]
Exemplar-based image translation establishes dense correspondences between a conditional input and an exemplar from two different domains.
Existing work builds the cross-domain correspondences implicitly by minimizing feature-wise distances across the two domains.
We design a Marginal Contrastive Learning Network (MCL-Net) that explores contrastive learning to learn domain-invariant features for realistic exemplar-based image translation (a generic contrastive-loss sketch follows this list).
arXiv Detail & Related papers (2022-04-01T13:55:44Z) - Semi-Supervised Image-to-Image Translation using Latent Space Mapping [37.232496213047845]
We introduce a general framework for semi-supervised image translation.
Our main idea is to learn the translation over the latent feature space instead of the image space.
Thanks to the low-dimensional feature space, it is easier to find the desired mapping function.
arXiv Detail & Related papers (2022-03-29T05:14:26Z) - Smoothing the Disentangled Latent Style Space for Unsupervised Image-to-Image Translation [56.55178339375146]
Image-to-Image (I2I) multi-domain translation models are usually also evaluated using the quality of their semantic results.
We propose a new training protocol based on three specific losses which help a translation network to learn a smooth and disentangled latent style space.
arXiv Detail & Related papers (2021-06-16T17:58:21Z)
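The Vector Quantized Image-to-Image Translation entry above hinges on mapping continuous encoder features onto a discrete codebook. As a minimal, generic refresher of that operation only (not that paper's architecture), the sketch below performs a nearest-neighbour codebook lookup; the codebook size, feature dimension, and the name `vector_quantize` are illustrative assumptions.

```python
import numpy as np

def vector_quantize(z, codebook):
    """Replace each continuous feature vector with its nearest codebook entry."""
    # Squared Euclidean distance between every feature (n, d) and every code (k, d).
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)                # discrete code index per feature
    return codebook[idx], idx

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))        # hypothetical 16-entry, 4-dim codebook
z = rng.normal(size=(5, 4))                # hypothetical encoder outputs
z_q, idx = vector_quantize(z, codebook)
print(idx, z_q.shape)                      # codes a decoder would consume, (5, 4)
```

In a trained model the codebook entries are learned and a straight-through estimator passes gradients around the non-differentiable argmin; both are omitted here.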
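The Marginal Contrastive Correspondence entry above relies on contrastive learning to obtain domain-invariant features. The snippet below is a plain InfoNCE-style loss over a batch of paired cross-domain features, shown only to make that general mechanism concrete; it is not MCL-Net's actual "marginal" objective, and the temperature, batch shapes, and function name are assumptions.

```python
import numpy as np

def info_nce(feat_a, feat_b, temperature=0.07):
    """Generic InfoNCE loss: row i of feat_a and feat_b form a positive pair,
    every other pairing in the batch acts as a negative."""
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature               # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # pull matched pairs together

rng = np.random.default_rng(0)
loss = info_nce(rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
print(loss)                                      # ~log(8) for random, unaligned features
```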
This list is automatically generated from the titles and abstracts of the papers in this site.