Improving the Latent Space of Image Style Transfer
- URL: http://arxiv.org/abs/2205.12135v1
- Date: Tue, 24 May 2022 15:13:01 GMT
- Title: Improving the Latent Space of Image Style Transfer
- Authors: Yunpeng Bai, Cairong Wang, Chun Yuan, Yanbo Fan, Jue Wang
- Abstract summary: In some cases, the feature statistics from the pre-trained encoder may not be consistent with the visual style we perceive.
In such an inappropriate latent space, the objective function of the existing methods will be optimized in the wrong direction.
We propose two contrastive training schemes to get a refined encoder that is more suitable for this task.
- Score: 24.37383949267162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing neural style transfer research has studied matching statistical information between the deep features of content and style images, extracted by a pre-trained VGG, and has achieved significant improvements in synthesizing artistic images. However, in some cases, the feature statistics from the pre-trained encoder may not be consistent with the visual style we perceive. For example, the style distance between images of different styles can be smaller than that between images of the same style. In such an inappropriate latent space, the objective functions of existing methods are optimized in the wrong direction, resulting in poor stylization results. In addition, the lack of content details in the features extracted by the pre-trained encoder also leads to the content leak problem. To solve these issues in the latent space used for style transfer, we propose two contrastive training schemes to obtain a refined encoder that is better suited to this task. The style contrastive loss pulls the stylized result closer to images of the same visual style and pushes it away from the content image. The content contrastive loss enables the encoder to retain more usable details. Our training scheme can be added directly to existing style transfer methods and significantly improves their results. Extensive experiments demonstrate the effectiveness and superiority of our methods.
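As a rough illustration of the two objectives described in the abstract, the sketch below pairs the stylized output with a same-style image (style positive) and with its own content image (content positive), pushing against the respective negatives with an InfoNCE-style loss. The encoder call (assumed here to return pooled (B, D) feature vectors), the temperature tau, and the function names are assumptions made for this sketch; it is not the authors' released training code.

```python
# Illustrative sketch only (assumed PyTorch setup; not the paper's released code).
import torch
import torch.nn.functional as F


def info_nce(anchor, positive, negatives, tau=0.1):
    """Pull `anchor` toward `positive`, push it away from `negatives` (InfoNCE-style)."""
    anchor = F.normalize(anchor, dim=-1)          # (B, D)
    positive = F.normalize(positive, dim=-1)      # (B, D)
    negatives = F.normalize(negatives, dim=-1)    # (N, D)
    pos_sim = (anchor * positive).sum(dim=-1, keepdim=True) / tau   # (B, 1)
    neg_sim = anchor @ negatives.t() / tau                          # (B, N)
    logits = torch.cat([pos_sim, neg_sim], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)        # the positive sits at index 0


def style_contrastive_loss(encoder, stylized, same_style_imgs, content_imgs, tau=0.1):
    # Style term: the stylized result should move toward images of the same
    # visual style and away from its content image in the encoder's space.
    return info_nce(encoder(stylized), encoder(same_style_imgs), encoder(content_imgs), tau)


def content_contrastive_loss(encoder, stylized, content_imgs, other_contents, tau=0.1):
    # Content term: the stylized result should stay closest to its own content
    # image, which pushes the encoder to keep more usable content detail.
    return info_nce(encoder(stylized), encoder(content_imgs), encoder(other_contents), tau)
```

Since the paper's goal is a refined encoder, in a training loop these losses would backpropagate into the encoder parameters, not only into the stylization network.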
Related papers
- DiffuseST: Unleashing the Capability of the Diffusion Model for Style Transfer [13.588643982359413]
Style transfer aims to fuse the artistic representation of a style image with the structural information of a content image.
Existing methods train specific networks or utilize pre-trained models to learn content and style features.
We propose a novel and training-free approach for style transfer, combining textual embedding with spatial features.
arXiv Detail & Related papers (2024-10-19T06:42:43Z)
- ZePo: Zero-Shot Portrait Stylization with Faster Sampling [61.14140480095604]
This paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps.
We propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control.
arXiv Detail & Related papers (2024-08-10T08:53:41Z)
- D2Styler: Advancing Arbitrary Style Transfer with Discrete Diffusion Methods [2.468658581089448]
We propose a novel framework called D2Styler (Discrete Diffusion Styler).
Our method uses Adaptive Instance Normalization (AdaIN) features as a context guide for the reverse diffusion process (a generic AdaIN sketch follows this entry).
Experimental results demonstrate that D2Styler produces high-quality style-transferred images.
arXiv Detail & Related papers (2024-08-07T05:47:06Z)
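For reference, AdaIN (Huang and Belongie, 2017) re-normalizes content features with the channel-wise statistics of the style features. The snippet below is a generic AdaIN sketch, not D2Styler's implementation; the function name adain and the PyTorch framing are illustrative assumptions.

```python
# Generic AdaIN (Huang & Belongie, 2017): shift the channel-wise statistics of
# the content features to those of the style features. Illustrative only.
import torch


def adain(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """content_feat, style_feat: (B, C, H, W) encoder feature maps."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean
```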
- Rethink Arbitrary Style Transfer with Transformer and Contrastive Learning [11.900404048019594]
In this paper, we introduce an innovative technique to improve the quality of stylized images.
Firstly, we propose Style Consistency Instance Normalization (SCIN), a method to refine the alignment between content and style features.
In addition, we have developed an Instance-based Contrastive Learning (ICL) approach designed to understand relationships among various styles.
arXiv Detail & Related papers (2024-04-21T08:52:22Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization [66.42741426640633]
DiffStyler is a dual diffusion processing architecture to control the balance between the content and style of diffused results.
We propose a content image-based learnable noise on which the reverse denoising process is based, enabling the stylization results to better preserve the structural information of the content image.
arXiv Detail & Related papers (2022-11-19T12:30:44Z)
- Learning Diverse Tone Styles for Image Retouching [73.60013618215328]
We propose to learn diverse image retouching with normalizing flow-based architectures.
A joint-training pipeline is composed of a style encoder, a conditional RetouchNet, and the image tone style normalizing flow (TSFlow) module.
Our proposed method performs favorably against state-of-the-art methods and is effective in generating diverse results.
arXiv Detail & Related papers (2022-07-12T09:49:21Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Saliency Constrained Arbitrary Image Style Transfer using SIFT and DCNN [22.57205921266602]
When common neural style transfer methods are used, the textures and colors in the style image are usually transferred imperfectly to the content image.
This paper proposes a novel saliency constrained method to reduce or avoid such effects.
The experiments show that the saliency maps of source images can help find the correct matching and avoid artifacts.
arXiv Detail & Related papers (2022-01-14T09:00:55Z)
- Language-Driven Image Style Transfer [72.36790598245096]
We introduce a new task, language-driven image style transfer (LDIST), which manipulates the style of a content image guided by text.
The discriminator considers the correlation between language and patches of style images or transferred results to jointly embed style instructions.
Experiments show that our CLVA is effective and achieves superb transferred results on LDIST.
arXiv Detail & Related papers (2021-06-01T01:58:50Z)