Arbitrary Style Transfer via Multi-Adaptation Network
- URL: http://arxiv.org/abs/2005.13219v2
- Date: Sun, 16 Aug 2020 05:28:46 GMT
- Title: Arbitrary Style Transfer via Multi-Adaptation Network
- Authors: Yingying Deng, Fan Tang, Weiming Dong, Wen Sun, Feiyue Huang,
Changsheng Xu
- Abstract summary: A desired style transfer, given a content image and a reference style painting, would render the content image with the color tone and vivid stroke patterns of the style painting.
A new disentanglement loss function enables our network to extract main style patterns and exact content structures to adapt to various input images.
- Score: 109.6765099732799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Arbitrary style transfer is a significant topic with research value and
application prospects. A desired style transfer, given a content image and a
reference style painting, would render the content image with the color tone
and vivid stroke patterns of the style painting while simultaneously maintaining
the detailed content structure information. Style transfer approaches would
initially learn content and style representations of the content and style
references and then generate the stylized images guided by these
representations. In this paper, we propose the multi-adaptation network which
involves two self-adaptation (SA) modules and one co-adaptation (CA) module:
the SA modules adaptively disentangle the content and style representations:
the content SA module uses position-wise self-attention to enhance the content
representation, and the style SA module uses channel-wise self-attention to
enhance the style representation; the CA module rearranges the distribution of
the style representation according to the distribution of the content
representation by calculating the
local similarity between the disentangled content and style features in a
non-local fashion. Moreover, a new disentanglement loss function enables our
network to extract the main style patterns and the exact content structures so
as to adapt to various style and content inputs, respectively. Qualitative and
quantitative
experiments demonstrate that the proposed multi-adaptation network leads to
better results than the state-of-the-art style transfer methods.
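To make the module interplay concrete, the sketch below is a minimal, PyTorch-style illustration of how position-wise content self-attention, channel-wise style self-attention, and non-local co-adaptation could be wired together as described in the abstract. It is a sketch under assumptions: the class names, 1x1-convolution projections, instance normalization inside the CA module, residual connections, and the VGG encoder/decoder in the usage comment are illustrative choices, not the authors' released implementation, and the disentanglement loss is not sketched here.

```python
# Minimal sketch of the multi-adaptation idea (assumed shapes and projections;
# not the authors' official implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentSA(nn.Module):
    """Position-wise self-attention: each spatial position of the content
    feature map attends to all other positions."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, fc):                                   # fc: (B, C, H, W)
        b, c, h, w = fc.shape
        q = self.query(fc).flatten(2).transpose(1, 2)        # (B, HW, C//8)
        k = self.key(fc).flatten(2)                          # (B, C//8, HW)
        v = self.value(fc).flatten(2)                        # (B, C, HW)
        attn = F.softmax(torch.bmm(q, k), dim=-1)            # (B, HW, HW)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return fc + out                                      # residual enhancement


class StyleSA(nn.Module):
    """Channel-wise self-attention: channels of the style feature map attend
    to each other, emphasizing the dominant style patterns."""
    def forward(self, fs):                                   # fs: (B, C, H, W)
        b, c, h, w = fs.shape
        flat = fs.flatten(2)                                 # (B, C, HW)
        attn = F.softmax(torch.bmm(flat, flat.transpose(1, 2)), dim=-1)  # (B, C, C)
        out = torch.bmm(attn, flat).view(b, c, h, w)
        return fs + out


class CoAdaptation(nn.Module):
    """Non-local co-adaptation: rearrange style features according to their
    local similarity with the (normalized) content features."""
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Conv2d(channels, channels, 1)            # projects content
        self.g = nn.Conv2d(channels, channels, 1)            # projects style
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, fc, fs):
        b, c, h, w = fc.shape
        q = self.f(F.instance_norm(fc)).flatten(2).transpose(1, 2)  # (B, HWc, C)
        k = self.g(F.instance_norm(fs)).flatten(2)                  # (B, C, HWs)
        v = fs.flatten(2)                                           # (B, C, HWs)
        attn = F.softmax(torch.bmm(q, k), dim=-1)                   # (B, HWc, HWs)
        fcs = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.out(fcs) + fc        # residual with content is an illustrative choice


# Hypothetical usage on encoder features (e.g., relu4_1 of a VGG encoder):
# fc, fs = encoder(content), encoder(style)                  # (B, 512, H, W)
# fcs = CoAdaptation(512)(ContentSA(512)(fc), StyleSA()(fs))
# stylized = decoder(fcs)
```

In this reading, the CA module acts as a cross-attention step: content positions serve as queries and style positions as keys and values, so style statistics are redistributed to follow the content layout before decoding.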
Related papers
- DiffuseST: Unleashing the Capability of the Diffusion Model for Style Transfer [13.588643982359413]
Style transfer aims to fuse the artistic representation of a style image with the structural information of a content image.
Existing methods train specific networks or utilize pre-trained models to learn content and style features.
We propose a novel and training-free approach for style transfer, combining textual embedding with spatial features.
arXiv Detail & Related papers (2024-10-19T06:42:43Z) - AEANet: Affinity Enhanced Attentional Networks for Arbitrary Style Transfer [4.639424509503966]
Arbitrary style transfer is a research area that combines rational academic study with emotive artistic creation.
It aims to create a new image from a content image according to a target artistic style while maintaining the textural and structural information of the content.
Existing style transfer methods often significantly damage the texture lines of the content image during the style transformation.
We propose an affinity-enhanced attentional network, which includes the content affinity-enhanced attention (CAEA) module, the style affinity-enhanced attention (SAEA) module, and the hybrid attention (HA) module.
arXiv Detail & Related papers (2024-09-23T01:39:11Z) - StyleAdapter: A Unified Stylized Image Generation Model [97.24936247688824]
StyleAdapter is a unified stylized image generation model capable of producing a variety of stylized images.
It can be integrated with existing controllable synthesis methods, such as T2I-adapter and ControlNet.
arXiv Detail & Related papers (2023-09-04T19:16:46Z) - InfoStyler: Disentanglement Information Bottleneck for Artistic Style
Transfer [22.29381866838179]
Artistic style transfer aims to transfer the style of an artwork to a photograph while maintaining the photograph's original overall content.
We propose a novel information disentanglement method, named InfoStyler, to capture the minimal sufficient information for both content and style representations.
arXiv Detail & Related papers (2023-07-30T13:38:56Z) - Learning Dynamic Style Kernels for Artistic Style Transfer [26.19086645743083]
We propose a new scheme that learns spatially adaptive kernels for per-pixel stylization.
Our proposed method outperforms state-of-the-art methods in terms of visual quality and efficiency.
arXiv Detail & Related papers (2023-04-02T00:26:43Z) - A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive
Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z) - Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z) - StyleMeUp: Towards Style-Agnostic Sketch-Based Image Retrieval [119.03470556503942]
The cross-modal matching problem is typically solved by learning a joint embedding space in which the semantic content shared between the photo and sketch modalities is preserved.
An effective model needs to explicitly account for this style diversity and, crucially, generalize to unseen user styles.
Our model can not only disentangle the cross-modal shared semantic content but also adapt the disentanglement to any unseen user style, making the model truly style-agnostic.
arXiv Detail & Related papers (2021-03-29T15:44:19Z) - Arbitrary Video Style Transfer via Multi-Channel Correlation [84.75377967652753]
We propose the Multi-Channel Correlation network (MCCNet) to fuse exemplar style features and input content features for efficient style transfer.
MCCNet works directly in the feature space of the style and content domains, where it learns to rearrange and fuse style features based on their similarity with content features.
The outputs of MCCNet are features containing the desired style patterns, which can further be decoded into images with vivid style textures.
arXiv Detail & Related papers (2020-09-17T01:30:46Z)