Arbitrary Style Transfer with Structure Enhancement by Combining the
Global and Local Loss
- URL: http://arxiv.org/abs/2207.11438v1
- Date: Sat, 23 Jul 2022 07:02:57 GMT
- Title: Arbitrary Style Transfer with Structure Enhancement by Combining the
Global and Local Loss
- Authors: Lizhen Long and Chi-Man Pun
- Abstract summary: We introduce a novel arbitrary style transfer method with structure enhancement by combining the global and local loss.
Experimental results demonstrate that our method can generate higher-quality images with impressive visual effects.
- Score: 51.309905690367835
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Arbitrary style transfer generates an artistic image that combines the
structure of a content image with the artistic style of an artwork using
only one trained network. The image representation used in this method contains
a content structure representation and a style pattern representation, which
are usually high-level feature representations from pre-trained
classification networks. However, such networks were designed for
classification, so they focus on high-level features and
ignore other features. As a result, the stylized images distribute style
elements evenly throughout the image and make the overall image structure
unrecognizable. To solve this problem, we introduce a novel arbitrary style
transfer method with structure enhancement by combining a global and a local
loss. The local structure details are represented by Lapstyle and the global
structure is controlled by the image depth. Experimental results demonstrate
that our method generates higher-quality images with impressive visual
effects on several common datasets, compared with other state-of-the-art
methods.
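Because the core idea is a weighted sum of a depth-based global term and a Laplacian-based local term, it can be summarized in a few lines. Below is a minimal, illustrative PyTorch sketch of such a combined structure loss, not the authors' released code; depth_net stands for an assumed differentiable monocular depth estimator, and the loss weights are hypothetical hyperparameters.

    import torch.nn.functional as F

    def laplacian(x):
        # One Laplacian pyramid level: the detail an image loses when it
        # is downsampled and upsampled again (a local structure signal).
        down = F.avg_pool2d(x, kernel_size=2)
        up = F.interpolate(down, size=x.shape[-2:], mode='bilinear',
                           align_corners=False)
        return x - up

    def structure_loss(stylized, content, depth_net, w_local=1.0, w_global=1.0):
        # Local term: preserve fine-scale (Laplacian) detail of the content.
        local = F.mse_loss(laplacian(stylized), laplacian(content))
        # Global term: preserve the predicted depth map, a proxy for the
        # coarse scene layout that high-level VGG losses tend to wash out.
        global_term = F.mse_loss(depth_net(stylized), depth_net(content))
        return w_local * local + w_global * global_term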
Related papers
- StyleBrush: Style Extraction and Transfer from a Single Image [19.652575295703485]
Stylization for visual content aims to add specific style patterns at the pixel level while preserving the original structural features.
We propose StyleBrush, a method that accurately captures styles from a reference image and "brushes" the extracted style onto other input visual content.
arXiv Detail & Related papers (2024-08-18T14:27:20Z)
- Generative AI Model for Artistic Style Transfer Using Convolutional Neural Networks [0.0]
Artistic style transfer involves fusing the content of one image with the artistic style of another to create unique visual compositions.
This paper presents a comprehensive overview of a novel technique for style transfer using Convolutional Neural Networks (CNNs).
arXiv Detail & Related papers (2023-10-27T16:21:17Z)
- TSSAT: Two-Stage Statistics-Aware Transformation for Artistic Style Transfer [22.16475032434281]
Artistic style transfer aims to create new artistic images by rendering a given photograph with the target artistic style.
Existing methods learn styles simply based on global statistics or local patches, lacking careful consideration of the drawing process in practice.
We propose a Two-Stage Statistics-Aware Transformation (TSSAT) module, which first builds the global style foundation by aligning the global statistics of content and style features, and then enriches local style details in a patch-wise manner.
To further enhance both content and style representations, we introduce two novel losses: an attention-based content loss and a patch-based style loss.
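The first stage's global statistics alignment is in the spirit of AdaIN, which transfers the channel-wise mean and standard deviation of the style features onto the content features. A minimal PyTorch sketch of that kind of alignment, illustrative rather than the released TSSAT code:

    import torch

    def align_global_statistics(content_feat, style_feat, eps=1e-5):
        # Normalize the content features, then rescale and shift them to
        # the channel-wise statistics of the style features.
        # Shapes: (batch, channels, height, width).
        c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
        c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
        s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
        s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
        return (content_feat - c_mean) / c_std * s_std + s_mean

    # Usage: the output keeps the content layout with the style statistics.
    out = align_global_statistics(torch.randn(2, 64, 32, 32),
                                  torch.randn(2, 64, 32, 32))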
arXiv Detail & Related papers (2023-09-12T07:02:13Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
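One plausible way to realize an input-dependent temperature is to predict it per sample from the anchor's style code and use it to scale an InfoNCE-style contrastive loss. The mechanism and all names below are assumptions for illustration, not the UCAST implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AdaptiveTemperatureInfoNCE(nn.Module):
        # InfoNCE-style loss whose temperature is predicted per sample
        # from the anchor embedding instead of being a fixed constant.
        def __init__(self, dim):
            super().__init__()
            self.temp_head = nn.Sequential(nn.Linear(dim, 1), nn.Softplus())

        def forward(self, anchor, positive, negatives):
            # anchor, positive: (batch, dim); negatives: (batch, n_neg, dim)
            tau = self.temp_head(anchor) + 1e-3               # (batch, 1), > 0
            pos = F.cosine_similarity(anchor, positive, dim=-1).unsqueeze(1)
            neg = F.cosine_similarity(anchor.unsqueeze(1), negatives, dim=-1)
            logits = torch.cat([pos, neg], dim=1) / tau       # positive is class 0
            labels = torch.zeros(anchor.size(0), dtype=torch.long,
                                 device=anchor.device)
            return F.cross_entropy(logits, labels)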
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Local and Global GANs with Semantic-Aware Upsampling for Image Generation [201.39323496042527]
We consider generating images using local context.
We propose a class-specific generative network using semantic maps as guidance.
Lastly, we propose a novel semantic-aware upsampling method.
arXiv Detail & Related papers (2022-02-28T19:24:25Z)
- UMFA: A photorealistic style transfer method based on U-Net and multi-layer feature aggregation [0.0]
We propose a photorealistic style transfer network to emphasize the natural effect of photorealistic image stylization.
In particular, an encoder based on dense blocks and a decoder forming a symmetrical U-Net structure are jointly stacked to realize effective feature extraction and image reconstruction.
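A minimal PyTorch sketch of this kind of dense-block encoder with a mirrored, skip-connected decoder; it illustrates the general U-Net structure described above, not the actual UMFA network (one level only, input height and width assumed even):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DenseBlock(nn.Module):
        # Each conv sees the concatenation of all earlier feature maps
        # (DenseNet-style), as in the dense-block encoder described above.
        def __init__(self, in_ch, growth=16, n_layers=3):
            super().__init__()
            self.convs = nn.ModuleList(
                nn.Conv2d(in_ch + i * growth, growth, 3, padding=1)
                for i in range(n_layers))
            self.out_ch = in_ch + n_layers * growth

        def forward(self, x):
            feats = [x]
            for conv in self.convs:
                feats.append(F.relu(conv(torch.cat(feats, 1))))
            return torch.cat(feats, 1)

    class TinyUNet(nn.Module):
        # One encoder level and its mirrored decoder level, joined by a
        # U-Net skip connection that aggregates multi-layer features.
        def __init__(self):
            super().__init__()
            self.enc = DenseBlock(3)
            self.down = nn.Conv2d(self.enc.out_ch, 64, 3, stride=2, padding=1)
            self.up = nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1)
            self.dec = nn.Conv2d(64 + self.enc.out_ch, 3, 3, padding=1)

        def forward(self, x):
            skip = self.enc(x)                        # multi-layer features
            y = self.up(F.relu(self.down(skip)))      # bottleneck
            return self.dec(torch.cat([y, skip], 1))  # aggregate via skip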
arXiv Detail & Related papers (2021-08-13T08:06:29Z)
- Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer [115.13853805292679]
Artistic style transfer aims at migrating the style from an example image to a content image.
Inspired by the common painting process of drawing a draft and then revising the details, we introduce a novel feed-forward method named Laplacian Pyramid Network (LapStyle).
Our method can synthesize high quality stylized images in real time, where holistic style patterns are properly transferred.
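The decomposition underlying this drafting-and-revision scheme is a standard Laplacian pyramid: a coarse, low-resolution draft plus per-level high-frequency residuals. A minimal PyTorch sketch of the decomposition and its inverse (the drafting and revision networks themselves are omitted):

    import torch.nn.functional as F

    def laplacian_pyramid(img, levels=3):
        # Split an image into a coarse low-frequency base and a stack of
        # high-frequency residuals, one per pyramid level.
        residuals, current = [], img
        for _ in range(levels):
            down = F.avg_pool2d(current, 2)
            up = F.interpolate(down, size=current.shape[-2:], mode='bilinear',
                               align_corners=False)
            residuals.append(current - up)   # detail lost by downsampling
            current = down
        return current, residuals            # coarse draft + details

    def reconstruct(coarse, residuals):
        # Invert the pyramid: upsample the base and add back each residual.
        img = coarse
        for res in reversed(residuals):
            img = F.interpolate(img, size=res.shape[-2:], mode='bilinear',
                                align_corners=False) + res
        return img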
arXiv Detail & Related papers (2021-04-12T11:53:53Z)
- Learning Portrait Style Representations [34.59633886057044]
We study style representations learned by neural network architectures incorporating higher level characteristics.
We find that learned style features vary when triplets annotated by art historians are incorporated as supervision for style similarity.
We also present the first large-scale dataset of portraits prepared for computational analysis.
arXiv Detail & Related papers (2020-12-08T01:36:45Z)
- Learning to Compose Hypercolumns for Visual Correspondence [57.93635236871264]
We introduce a novel approach to visual correspondence that dynamically composes effective features by leveraging relevant layers conditioned on the images to match.
The proposed method, dubbed Dynamic Hyperpixel Flow, learns to compose hypercolumn features on the fly by selecting a small number of relevant layers from a deep convolutional neural network.
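With a fixed layer selection, hypercolumn composition reduces to resizing selected intermediate feature maps to a common resolution and concatenating them along the channel axis; what Dynamic Hyperpixel Flow adds is learning which small set of layers to select per image pair. A sketch with a fixed, hand-picked selection (illustrative only):

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg19

    def hypercolumn(img, backbone, layer_indices, size=(64, 64)):
        # Collect the feature maps of the selected layers, resize them to
        # a common spatial size, and stack them along the channel axis.
        cols, x = [], img
        with torch.no_grad():
            for i, layer in enumerate(backbone):
                x = layer(x)
                if i in layer_indices:
                    cols.append(F.interpolate(x, size=size, mode='bilinear',
                                              align_corners=False))
        return torch.cat(cols, dim=1)   # (batch, total channels, *size)

    # Example: VGG-19 convolutional trunk, hand-picked layer indices.
    features = vgg19(weights='IMAGENET1K_V1').features.eval()
    col = hypercolumn(torch.randn(1, 3, 224, 224), features, {3, 8, 17, 26})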
arXiv Detail & Related papers (2020-07-21T04:03:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.