TSSAT: Two-Stage Statistics-Aware Transformation for Artistic Style Transfer
- URL: http://arxiv.org/abs/2309.06004v1
- Date: Tue, 12 Sep 2023 07:02:13 GMT
- Title: TSSAT: Two-Stage Statistics-Aware Transformation for Artistic Style Transfer
- Authors: Haibo Chen, Lei Zhao, Jun Li, and Jian Yang
- Abstract summary: Artistic style transfer aims to create new artistic images by rendering a given photograph with the target artistic style.
Existing methods learn styles simply based on global statistics or local patches, lacking careful consideration of the drawing process in practice.
We propose a Two-Stage Statistics-Aware Transformation (TSSAT) module, which first builds the global style foundation by aligning the global statistics of content and style features, and then enriches local style details by swapping local statistics in a patch-wise manner.
To further enhance both content and style representations, we introduce two novel losses: an attention-based content loss and a patch-based style loss.
- Score: 22.16475032434281
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artistic style transfer aims to create new artistic images by rendering a
given photograph with the target artistic style. Existing methods learn styles
simply based on global statistics or local patches, lacking careful
consideration of the drawing process in practice. Consequently, the stylization
results either fail to capture abundant and diversified local style patterns,
or contain undesired semantic information of the style image and deviate from
the global style distribution. To address this issue, we imitate the drawing
process of humans and propose a Two-Stage Statistics-Aware Transformation
(TSSAT) module, which first builds the global style foundation by aligning the
global statistics of content and style features and then further enriches local
style details by swapping the local statistics (instead of local features) in a
patch-wise manner, significantly improving the stylization effects. Moreover,
to further enhance both content and style representations, we introduce two
novel losses: an attention-based content loss and a patch-based style loss,
where the former enables better content preservation by enforcing the semantic
relation in the content image to be retained during stylization, and the latter
focuses on increasing the local style similarity between the style and stylized
images. Extensive qualitative and quantitative experiments verify the
effectiveness of our method.
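The abstract describes the two stages concretely enough to sketch: a global channel-wise statistics alignment (AdaIN-style) followed by a patch-wise swap of local statistics rather than raw features. The sketch below is a minimal illustration, not the authors' implementation; the patch size, non-overlapping unfolding, cosine matching, and per-patch scalar mean/std are all assumptions.

```python
import torch
import torch.nn.functional as F


def mean_std(feat, eps=1e-5):
    # Per-channel mean and standard deviation of a (B, C, H, W) feature map.
    b, c = feat.shape[:2]
    flat = feat.view(b, c, -1)
    mean = flat.mean(dim=2).view(b, c, 1, 1)
    std = (flat.var(dim=2) + eps).sqrt().view(b, c, 1, 1)
    return mean, std


def global_stat_align(content, style):
    # Stage 1: build the global style foundation by aligning the
    # channel-wise mean/std of the content features to the style features.
    c_mean, c_std = mean_std(content)
    s_mean, s_std = mean_std(style)
    return (content - c_mean) / c_std * s_std + s_mean


def local_stat_swap(coarse, style, patch=3):
    # Stage 2: enrich local style details by swapping local *statistics*
    # (not raw features) patch-wise. Assumes spatial sizes divisible by
    # `patch` and non-overlapping patches, for simplicity.
    p_c = F.unfold(coarse, patch, stride=patch)   # (B, C*p*p, Lc)
    p_s = F.unfold(style, patch, stride=patch)    # (B, C*p*p, Ls)
    # Match each coarse patch to its most similar style patch (cosine).
    sim = F.normalize(p_c, dim=1).transpose(1, 2) @ F.normalize(p_s, dim=1)
    idx = sim.argmax(dim=2)                       # (B, Lc)
    matched = torch.gather(p_s, 2, idx.unsqueeze(1).expand(-1, p_s.size(1), -1))
    # Re-normalize each coarse patch to the matched patch's mean/std.
    eps = 1e-5
    c_mu, c_sig = p_c.mean(dim=1, keepdim=True), p_c.std(dim=1, keepdim=True) + eps
    m_mu, m_sig = matched.mean(dim=1, keepdim=True), matched.std(dim=1, keepdim=True) + eps
    swapped = (p_c - c_mu) / c_sig * m_sig + m_mu
    return F.fold(swapped, coarse.shape[2:], patch, stride=patch)


def tssat(content_feat, style_feat):
    # Global alignment first, then patch-wise local statistic swapping,
    # mirroring the coarse-to-fine drawing process the paper imitates.
    return local_stat_swap(global_stat_align(content_feat, style_feat), style_feat)
```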
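The abstract names the two losses but does not give their formulas, so the following is only one plausible reading: a content loss that matches self-similarity (attention) maps so semantic relations in the content image survive stylization, and a style loss that pulls each stylized patch toward its nearest style patch. Both function names and formulations are assumptions for illustration.

```python
import torch
import torch.nn.functional as F


def attention_content_loss(f_content, f_stylized):
    # Compare self-attention (self-similarity) maps of the content and
    # stylized features, enforcing that semantic relations in the content
    # image are retained during stylization.
    def self_attention(f):
        b, c, h, w = f.shape
        x = F.normalize(f.view(b, c, h * w), dim=1)
        return torch.softmax(x.transpose(1, 2) @ x, dim=-1)  # (B, HW, HW)
    return F.l1_loss(self_attention(f_stylized), self_attention(f_content))


def patch_style_loss(f_style, f_stylized, patch=3):
    # Pull each stylized patch toward its most similar style patch,
    # increasing local style similarity between style and stylized images.
    p_s = F.unfold(f_style, patch, stride=patch)      # (B, C*p*p, Ls)
    p_cs = F.unfold(f_stylized, patch, stride=patch)  # (B, C*p*p, Lcs)
    sim = F.normalize(p_cs, dim=1).transpose(1, 2) @ F.normalize(p_s, dim=1)
    idx = sim.argmax(dim=2)
    nearest = torch.gather(p_s, 2, idx.unsqueeze(1).expand(-1, p_s.size(1), -1))
    # Treat the matched style patches as fixed targets.
    return F.mse_loss(p_cs, nearest.detach())
```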
Related papers
- DiffuseST: Unleashing the Capability of the Diffusion Model for Style Transfer [13.588643982359413]
Style transfer aims to fuse the artistic representation of a style image with the structural information of a content image.
Existing methods train specific networks or utilize pre-trained models to learn content and style features.
We propose a novel and training-free approach for style transfer, combining textual embedding with spatial features.
arXiv Detail & Related papers (2024-10-19T06:42:43Z)
- InstantStyle-Plus: Style Transfer with Content-Preserving in Text-to-Image Generation [4.1177497612346]
Style transfer is an inventive process designed to create an image that maintains the essence of the original while embracing the visual style of another.
We introduce InstantStyle-Plus, an approach that prioritizes the integrity of the original content while seamlessly integrating the target style.
arXiv Detail & Related papers (2024-06-30T18:05:33Z)
- InfoStyler: Disentanglement Information Bottleneck for Artistic Style Transfer [22.29381866838179]
Artistic style transfer aims to transfer the style of an artwork to a photograph while maintaining its original overall content.
We propose a novel information disentanglement method, named InfoStyler, to capture the minimal sufficient information for both content and style representations.
arXiv Detail & Related papers (2023-07-30T13:38:56Z)
- ALADIN-NST: Self-supervised disentangled representation learning of artistic style through Neural Style Transfer [60.6863849241972]
We learn a representation of visual artistic style more strongly disentangled from the semantic content depicted in an image.
We show that strongly addressing the disentanglement of style and content leads to large gains in style-specific metrics.
arXiv Detail & Related papers (2023-04-12T10:33:18Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Arbitrary Style Transfer with Structure Enhancement by Combining the Global and Local Loss [51.309905690367835]
We introduce a novel arbitrary style transfer method with structure enhancement by combining the global and local loss.
Experimental results demonstrate that our method can generate higher-quality images with impressive visual effects.
arXiv Detail & Related papers (2022-07-23T07:02:57Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Arbitrary Style Transfer via Multi-Adaptation Network [109.6765099732799]
A desired style transfer, given a content image and referenced style painting, would render the content image with the color tone and vivid stroke patterns of the style painting.
A new disentanglement loss function enables our network to extract main style patterns and exact content structures to adapt to various input images.
arXiv Detail & Related papers (2020-05-27T08:00:22Z)
- Parameter-Free Style Projection for Arbitrary Style Transfer [64.06126075460722]
This paper proposes a new feature-level style transformation technique, named Style Projection, for parameter-free, fast, and effective content-style transformation.
This paper further presents a real-time feed-forward model to leverage Style Projection for arbitrary image style transfer.
arXiv Detail & Related papers (2020-03-17T13:07:41Z)