Parameter-Free Style Projection for Arbitrary Style Transfer
- URL: http://arxiv.org/abs/2003.07694v2
- Date: Tue, 8 Feb 2022 16:54:20 GMT
- Title: Parameter-Free Style Projection for Arbitrary Style Transfer
- Authors: Siyu Huang, Haoyi Xiong, Tianyang Wang, Bihan Wen, Qingzhong Wang,
Zeyu Chen, Jun Huan, Dejing Dou
- Abstract summary: This paper proposes a new feature-level style transformation technique, named Style Projection, for parameter-free, fast, and effective content-style transformation.
This paper further presents a real-time feed-forward model to leverage Style Projection for arbitrary image style transfer.
- Score: 64.06126075460722
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Arbitrary image style transfer is a challenging task that aims to stylize a
content image conditioned on arbitrary style images. In this task, the
feature-level content-style transformation plays a vital role in the proper
fusion of features. Existing feature transformation algorithms often suffer from loss
of content or style details, non-natural stroke patterns, and unstable
training. To mitigate these issues, this paper proposes a new feature-level
style transformation technique, named Style Projection, for parameter-free,
fast, and effective content-style transformation. This paper further presents a
real-time feed-forward model to leverage Style Projection for arbitrary image
style transfer, which includes a regularization term for matching the semantics
between input contents and stylized outputs. Extensive qualitative analysis,
quantitative evaluation, and user study have demonstrated the effectiveness and
efficiency of the proposed methods.
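The abstract does not spell out the Style Projection operation itself. As a hedged illustration of what a parameter-free feature-level content-style transformation looks like, the sketch below implements the classic AdaIN baseline (channel-wise mean/std matching), which involves no learned parameters; the function name, shapes, and the choice of AdaIN are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Channel-wise mean/std matching (AdaIN): a classic parameter-free
    content-style feature transformation. Features have shape (C, H, W)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    # Whiten the content statistics, then re-color with the style statistics.
    return s_std * (content - c_mean) / c_std + s_mean

rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, size=(4, 8, 8))
style = rng.normal(2.0, 3.0, size=(4, 8, 8))
out = adain(content, style)
# The output inherits the style's channel-wise statistics.
print(np.allclose(out.mean(axis=(1, 2)), style.mean(axis=(1, 2)), atol=1e-6))
```

Because the transformation is purely statistical, it adds no trainable weights to the model, which is what "parameter-free" refers to in this context.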
Related papers
- ZePo: Zero-Shot Portrait Stylization with Faster Sampling [61.14140480095604]
This paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps.
We propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control.
arXiv Detail & Related papers (2024-08-10T08:53:41Z)
- Rethink Arbitrary Style Transfer with Transformer and Contrastive Learning [11.900404048019594]
In this paper, we introduce an innovative technique to improve the quality of stylized images.
Firstly, we propose Style Consistency Instance Normalization (SCIN), a method to refine the alignment between content and style features.
In addition, we have developed an Instance-based Contrastive Learning (ICL) approach designed to understand relationships among various styles.
arXiv Detail & Related papers (2024-04-21T08:52:22Z) - Style Aligned Image Generation via Shared Attention [61.121465570763085]
We introduce StyleAligned, a technique designed to establish style alignment among a series of generated images.
By employing minimal 'attention sharing' during the diffusion process, our method maintains style consistency across images within T2I models.
Evaluation of our method across diverse styles and text prompts demonstrates high quality and fidelity.
arXiv Detail & Related papers (2023-12-04T18:55:35Z) - A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive
Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
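The summary does not specify how UCAST computes its input-dependent temperature. As a hedged sketch, the standard InfoNCE contrastive loss below shows where the temperature enters: it is exactly the knob such an adaptive scheme would tune per input. All names and values here are illustrative, not from the paper.

```python
import numpy as np

def info_nce_loss(anchor, candidates, pos_index, temperature):
    """InfoNCE contrastive loss over unit-norm embeddings. A lower
    temperature sharpens the similarity distribution; UCAST's idea is to
    make this value input-dependent rather than a fixed hyperparameter."""
    logits = candidates @ anchor / temperature  # cosine similarities / tau
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[pos_index])

def unit(v):
    return v / np.linalg.norm(v)

anchor = unit(np.array([1.0, 0.5, 0.0]))
candidates = np.stack([
    unit(np.array([1.0, 0.4, 0.1])),    # positive (similar to anchor)
    unit(np.array([-1.0, 0.2, 0.9])),   # negative
    unit(np.array([0.0, -1.0, 0.3])),   # negative
])
# When the positive is already the most similar candidate, a sharper
# (lower) temperature yields a lower loss.
loss_sharp = info_nce_loss(anchor, candidates, pos_index=0, temperature=0.1)
loss_soft = info_nce_loss(anchor, candidates, pos_index=0, temperature=1.0)
print(loss_sharp < loss_soft)
```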
arXiv Detail & Related papers (2023-03-09T04:35:00Z) - Line Search-Based Feature Transformation for Fast, Stable, and Tunable
Content-Style Control in Photorealistic Style Transfer [26.657485176782934]
Photorealistic style transfer is the task of synthesizing a realistic-looking image when adapting the content from one image to appear in the style of another image.
Modern models embed a transformation that fuses features describing the content image and style image and then decodes the resulting feature into a stylized image.
We introduce a general-purpose transformation that enables controlling the balance between how much content is preserved and the strength of the infused style.
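The line search-based transformation itself is not described in this summary. As a hedged sketch of the underlying idea of balancing preserved content against infused style, the snippet below shows the common linear feature interpolation; the function name and the simple linear blend are assumptions for illustration, not the cited paper's method.

```python
import numpy as np

def interpolate_stylization(content_feat, stylized_feat, alpha):
    """Blend the decoder input between the original content features
    (alpha=0) and the fully transformed features (alpha=1). This is the
    simplest content-style control knob; a line-search method would pick
    the operating point in a more principled, tunable way."""
    return (1.0 - alpha) * content_feat + alpha * stylized_feat

content = np.zeros((2, 4, 4))   # toy content features
stylized = np.ones((2, 4, 4))   # toy fully-stylized features
half = interpolate_stylization(content, stylized, alpha=0.5)
print(half.mean())  # 0.5
```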
arXiv Detail & Related papers (2022-10-12T08:05:49Z) - Learning Graph Neural Networks for Image Style Transfer [131.73237185888215]
State-of-the-art parametric and non-parametric style transfer approaches are prone to either distorted local style patterns due to global statistics alignment, or unpleasing artifacts resulting from patch mismatching.
In this paper, we study a novel semi-parametric neural style transfer framework that alleviates the deficiency of both parametric and non-parametric stylization.
arXiv Detail & Related papers (2022-07-24T07:41:31Z) - Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z) - STALP: Style Transfer with Auxiliary Limited Pairing [36.23393954839379]
We present an approach to example-based stylization of images that uses a single pair consisting of a source image and its stylized counterpart.
We demonstrate how to train an image translation network that can perform real-time semantically meaningful style transfer to a set of target images.
arXiv Detail & Related papers (2021-10-20T11:38:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality or accuracy of this information and is not responsible for any consequences of its use.