AEANet: Affinity Enhanced Attentional Networks for Arbitrary Style Transfer
- URL: http://arxiv.org/abs/2409.14652v2
- Date: Tue, 24 Sep 2024 10:46:00 GMT
- Title: AEANet: Affinity Enhanced Attentional Networks for Arbitrary Style Transfer
- Authors: Gen Li, Xianqiu Zheng, Yujian Li
- Abstract summary: Arbitrary artistic style transfer is a research area that combines rational academic study with emotive artistic creation.
It aims to create a new image from a content image according to a target artistic style, maintaining the content's textural and structural information.
Existing style transfer methods often significantly damage the texture lines of the content image during the style transformation.
We propose the affinity-enhanced attentional network (AEANet), which includes the content affinity-enhanced attention (CAEA) module, the style affinity-enhanced attention (SAEA) module, and the hybrid attention (HA) module.
- Score: 4.639424509503966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Arbitrary artistic style transfer is a research area that combines rational academic study with emotive artistic creation. It aims to create a new image from a content image according to a target artistic style, maintaining the content's textural and structural information while incorporating the artistic characteristics of the style image. However, existing style transfer methods often significantly damage the texture lines of the content image during the style transformation. To address these issues, we propose an affinity-enhanced attentional network (AEANet), which includes the content affinity-enhanced attention (CAEA) module, the style affinity-enhanced attention (SAEA) module, and the hybrid attention (HA) module. The CAEA and SAEA modules first use attention to enhance content and style representations, followed by a detail-enhanced (DE) module to reinforce detail features. The hybrid attention module adjusts the style feature distribution based on the content feature distribution. We also introduce a local dissimilarity loss based on affinity attention, which better preserves the affinity with the content and style images. Experiments demonstrate that our work achieves better results in arbitrary style transfer than other state-of-the-art methods.
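No code accompanies this listing. Purely as an illustration of the kind of modules the abstract names, here is a minimal PyTorch sketch of an affinity-enhanced attention block with a residual detail-refinement step, plus an AdaIN-style stand-in for the hybrid attention; every design choice below (layer shapes, instance normalization, the convolutional DE refinement) is an assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AffinityEnhancedAttention(nn.Module):
    """Illustrative sketch of an affinity-enhanced attention block.

    Computes an affinity (attention) map over normalized features,
    re-weights the input by that affinity, then refines the result with
    a small convolutional "detail enhancement" step. All design choices
    here are assumptions, not the authors' implementation.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        # Hypothetical detail-enhancement (DE) refinement.
        self.detail = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        norm = F.instance_norm(feat)  # normalize before measuring affinity
        q = self.query(norm).flatten(2).transpose(1, 2)   # (b, hw, c)
        k = self.key(norm).flatten(2)                     # (b, c, hw)
        v = self.value(feat).flatten(2).transpose(1, 2)   # (b, hw, c)
        affinity = torch.softmax(q @ k / c**0.5, dim=-1)  # (b, hw, hw)
        attended = (affinity @ v).transpose(1, 2).reshape(b, c, h, w)
        return feat + self.detail(attended)  # residual detail refinement


def hybrid_attention(content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
    """AdaIN-like stand-in for the HA module: re-normalize the style
    feature distribution to the content feature statistics."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + 1e-5
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + 1e-5
    return (style - s_mean) / s_std * c_std + c_mean
```

In this reading, a CAEA-style block would run the attention on content features and an SAEA-style block on style features, with the hybrid step then pulling the style statistics toward the content statistics.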
Related papers
- DiffuseST: Unleashing the Capability of the Diffusion Model for Style Transfer [13.588643982359413]
Style transfer aims to fuse the artistic representation of a style image with the structural information of a content image.
Existing methods train specific networks or utilize pre-trained models to learn content and style features.
We propose a novel and training-free approach for style transfer, combining textual embedding with spatial features.
arXiv Detail & Related papers (2024-10-19T06:42:43Z)
- InstantStyle-Plus: Style Transfer with Content-Preserving in Text-to-Image Generation [4.1177497612346]
Style transfer is an inventive process designed to create an image that maintains the essence of the original while embracing the visual style of another.
We introduce InstantStyle-Plus, an approach that prioritizes the integrity of the original content while seamlessly integrating the target style.
arXiv Detail & Related papers (2024-06-30T18:05:33Z)
- ArtWeaver: Advanced Dynamic Style Integration via Diffusion Model [73.95608242322949]
Stylized Text-to-Image Generation (STIG) aims to generate images from text prompts and style reference images.
We present ArtWeaver, a novel framework that leverages pretrained Stable Diffusion to address challenges such as misinterpreted styles and inconsistent semantics.
arXiv Detail & Related papers (2024-05-24T07:19:40Z)
- ALADIN-NST: Self-supervised disentangled representation learning of artistic style through Neural Style Transfer [60.6863849241972]
We learn a representation of visual artistic style more strongly disentangled from the semantic content depicted in an image.
We show that strongly addressing the disentanglement of style and content leads to large gains in style-specific metrics.
arXiv Detail & Related papers (2023-04-12T10:33:18Z)
- Arbitrary Style Transfer with Structure Enhancement by Combining the Global and Local Loss [51.309905690367835]
We introduce a novel arbitrary style transfer method with structure enhancement by combining the global and local loss.
Experimental results demonstrate that our method can generate higher-quality images with impressive visual effects.
arXiv Detail & Related papers (2022-07-23T07:02:57Z)
- Style Transfer with Target Feature Palette and Attention Coloring [15.775618544581885]
A novel artistic stylization method with target feature palettes is proposed, which can transfer key features accurately.
Our stylized images exhibit state-of-the-art performance, with strength in preserving core structures and details of the content image.
arXiv Detail & Related papers (2021-11-07T08:09:20Z)
- SAFIN: Arbitrary Style Transfer With Self-Attentive Factorized Instance Normalization [71.85169368997738]
Artistic style transfer aims to transfer the style characteristics of one image onto another image while retaining its content.
Self-Attention-based approaches have tackled this issue with partial success but suffer from unwanted artifacts.
This paper aims to combine the best of both worlds: self-attention and normalization.
arXiv Detail & Related papers (2021-05-13T08:01:01Z)
- Arbitrary Video Style Transfer via Multi-Channel Correlation [84.75377967652753]
We propose the Multi-Channel Correlation network (MCCNet) to fuse exemplar style features and input content features for efficient style transfer.
MCCNet works directly on the feature space of style and content domain where it learns to rearrange and fuse style features based on similarity with content features.
The outputs generated by MCCNet are features containing the desired style patterns, which can further be decoded into images with vivid style textures (a rough illustration of correlation-based fusion follows this entry).
arXiv Detail & Related papers (2020-09-17T01:30:46Z)
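For illustration only, here is a hypothetical sketch of correlation-based fusion of the kind the MCCNet summary describes; the channel-affinity formulation below is an assumption, not the paper's actual architecture.

```python
import torch
import torch.nn.functional as F


def correlation_fuse(content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: rearrange style features by their similarity
    to content features. Illustrates correlation-based fusion only; it
    is not MCCNet's actual architecture."""
    b, c, h, w = content.shape
    # Match spatial sizes so per-channel responses can be compared.
    style = F.interpolate(style, size=(h, w), mode="bilinear", align_corners=False)
    cf = F.normalize(content.flatten(2), dim=-1)  # (b, c, hw)
    sf = F.normalize(style.flatten(2), dim=-1)    # (b, c, hw)
    # Channel-to-channel affinity between content and style features.
    corr = torch.softmax(cf @ sf.transpose(1, 2), dim=-1)  # (b, c, c)
    fused = corr @ style.flatten(2)  # mix style channels per content channel
    return fused.reshape(b, c, h, w)
```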
- Arbitrary Style Transfer via Multi-Adaptation Network [109.6765099732799]
A desired style transfer, given a content image and referenced style painting, would render the content image with the color tone and vivid stroke patterns of the style painting.
A new disentanglement loss function enables our network to extract main style patterns and exact content structures to adapt to various input images.
arXiv Detail & Related papers (2020-05-27T08:00:22Z)
- A Content Transformation Block For Image Style Transfer [16.25958537802466]
This paper explicitly focuses on a content- and style-aware stylization of a content image.
We utilize similar content appearing in photographs and style samples to learn how style alters content details.
The robustness and speed of our model enables a video stylization in real-time and high definition.
arXiv Detail & Related papers (2020-03-18T18:00:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.