Line Search-Based Feature Transformation for Fast, Stable, and Tunable
Content-Style Control in Photorealistic Style Transfer
- URL: http://arxiv.org/abs/2210.05996v1
- Date: Wed, 12 Oct 2022 08:05:49 GMT
- Title: Line Search-Based Feature Transformation for Fast, Stable, and Tunable
Content-Style Control in Photorealistic Style Transfer
- Authors: Tai-Yin Chiu, Danna Gurari
- Abstract summary: Photorealistic style transfer is the task of synthesizing a realistic-looking image when adapting the content from one image to appear in the style of another image.
Modern models embed a transformation that fuses features describing the content image and style image and then decodes the resulting feature into a stylized image.
We introduce a general-purpose transformation that enables controlling the balance between how much content is preserved and the strength of the infused style.
- Score: 26.657485176782934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photorealistic style transfer is the task of synthesizing a realistic-looking
image when adapting the content from one image to appear in the style of
another image. Modern models commonly embed a transformation that fuses
features describing the content image and style image and then decodes the
resulting feature into a stylized image. We introduce a general-purpose
transformation that enables controlling the balance between how much content is
preserved and the strength of the infused style. We offer the first experiments
that evaluate the performance of existing transformations across different
style transfer models, and demonstrate that our transformation is better able
to simultaneously run fast, produce consistently reasonable results, and
control the balance between content and style in different models.
To support reproducing our method and models, we share the code at
https://github.com/chiutaiyin/LS-FT.
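To make the fusion step concrete, below is a minimal sketch of the general pattern such transformations follow: re-normalize the content features toward the style feature statistics, then blend with a tunable weight. The AdaIN-style statistics matching, the function name, and the `alpha` parameter are illustrative assumptions, not the paper's line search-based transform (LS-FT); see the repository above for the actual method.

```python
import torch

def adain_blend(content_feat, style_feat, alpha=0.6, eps=1e-5):
    """Illustrative content-style feature transformation (AdaIN-style sketch).

    Re-normalizes content features to match the style feature statistics,
    then blends with the original content features; `alpha` tunes the
    content-style balance (0 = pure content, 1 = full style). This is a
    generic sketch, not the paper's LS-FT transform.
    """
    # Per-channel statistics over the spatial dimensions of (N, C, H, W) features.
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps

    stylized = (content_feat - c_mean) / c_std * s_std + s_mean
    return alpha * stylized + (1.0 - alpha) * content_feat
```

A decoder would then map the blended feature back to image space; sweeping `alpha` exposes the content-style trade-off that such transformations are designed to control.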
Related papers
- Puff-Net: Efficient Style Transfer with Pure Content and Style Feature Fusion Network [32.12413686394824]
Style transfer aims to render an image with the artistic features of a style image, while maintaining the original structure.
It is difficult for CNN-based methods to handle global information and long-range dependencies between input images.
We propose a novel network termed Puff-Net, i.e., a pure content and style feature fusion network.
arXiv Detail & Related papers (2024-05-30T07:41:07Z)
- Master: Meta Style Transformer for Controllable Zero-Shot and Few-Shot Artistic Style Transfer [83.1333306079676]
In this paper, we devise a novel Transformer model, termed Master, specifically for style transfer.
In the proposed model, different Transformer layers share a common group of parameters, which (1) reduces the total number of parameters, (2) leads to more robust training convergence, and (3) makes it easy to control the degree of stylization.
Experiments demonstrate the superiority of Master under both zero-shot and few-shot style transfer settings.
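The parameter-sharing idea admits a short sketch: reusing one Transformer layer across depth keeps the parameter count constant no matter how many times it is applied. The class below is a hypothetical illustration of that mechanism, not Master's actual architecture or its stylization control.

```python
import torch
import torch.nn as nn

class SharedLayerTransformer(nn.Module):
    """Encoder whose "layers" all reuse a single set of weights (sketch)."""

    def __init__(self, d_model=512, nhead=8, depth=4):
        super().__init__()
        # One layer's parameters, applied `depth` times.
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True
        )
        self.depth = depth

    def forward(self, x):
        # Parameter count is independent of the effective depth.
        for _ in range(self.depth):
            x = self.shared_layer(x)
        return x

# Example: a batch of 2 sequences of 16 tokens with 512-dim features.
out = SharedLayerTransformer()(torch.randn(2, 16, 512))
```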
arXiv Detail & Related papers (2023-04-24T04:46:39Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
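As a rough illustration of an input-dependent temperature, the sketch below predicts a per-sample temperature for an InfoNCE-style contrastive loss from the anchor feature. The class name, the small predictor network, and the softplus floor are assumptions for clarity; UCAST's exact formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveTemperatureContrast(nn.Module):
    """InfoNCE-style loss with a temperature predicted from the input (sketch)."""

    def __init__(self, dim=256):
        super().__init__()
        # Hypothetical predictor mapping an anchor feature to a temperature.
        self.temp_net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, anchor, positive, negatives):
        # anchor, positive: (N, D); negatives: (N, K, D)
        tau = F.softplus(self.temp_net(anchor)) + 1e-2  # (N, 1), input-dependent
        anchor = F.normalize(anchor, dim=-1)
        pos_sim = (anchor * F.normalize(positive, dim=-1)).sum(-1, keepdim=True)
        neg_sim = torch.einsum("nd,nkd->nk", anchor, F.normalize(negatives, dim=-1))
        logits = torch.cat([pos_sim, neg_sim], dim=1) / tau
        # The positive sits at index 0 of every row of logits.
        labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
        return F.cross_entropy(logits, labels)
```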
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization [66.42741426640633]
DiffStyler is a dual diffusion processing architecture that controls the balance between the content and style of the diffused results.
We propose learnable noise based on the content image, on which the reverse denoising process operates, enabling the stylization results to better preserve the structural information of the content image.
arXiv Detail & Related papers (2022-11-19T12:30:44Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- CLIPstyler: Image Style Transfer with a Single Text Condition [34.24876359759408]
Existing neural style transfer methods require reference style images to transfer texture information of style images to content images.
We propose a new framework that enables style transfer without a style image, using only a text description of the desired style.
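The text-conditioned idea can be sketched with a CLIP-based objective that pulls stylized image embeddings toward a prompt embedding. This is a toy loss assuming the OpenAI CLIP package; CLIPstyler's actual training uses richer losses (e.g., patch-wise and directional terms) not reproduced here.

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def text_style_loss(stylized_images, style_text="a watercolor painting"):
    """Toy CLIP-guided objective pushing images toward a text prompt (sketch).

    `stylized_images` are assumed already CLIP-preprocessed (N, 3, 224, 224).
    """
    text_tokens = clip.tokenize([style_text]).to(device)
    img_emb = model.encode_image(stylized_images)
    txt_emb = model.encode_text(text_tokens)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    # Maximizing cosine similarity == minimizing (1 - similarity).
    return (1.0 - img_emb @ txt_emb.t()).mean()
```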
arXiv Detail & Related papers (2021-12-01T09:48:53Z)
- STALP: Style Transfer with Auxiliary Limited Pairing [36.23393954839379]
We present an approach to example-based stylization of images that uses a single pair of a source image and its stylized counterpart.
We demonstrate how to train an image translation network that can perform real-time semantically meaningful style transfer to a set of target images.
arXiv Detail & Related papers (2021-10-20T11:38:41Z)
- StyTr^2: Unbiased Image Style Transfer with Transformers [59.34108877969477]
The goal of image style transfer is to render an image with artistic features guided by a style reference while maintaining the original content.
Traditional neural style transfer methods are usually biased, and content leakage can be observed when running the style transfer process several times with the same reference image.
We propose a transformer-based approach, namely StyTr^2, to address this critical issue.
arXiv Detail & Related papers (2021-05-30T15:57:09Z)
- Parameter-Free Style Projection for Arbitrary Style Transfer [64.06126075460722]
This paper proposes a new feature-level style transformation technique, named Style Projection, for parameter-free, fast, and effective content-style transformation.
This paper further presents a real-time feed-forward model to leverage Style Projection for arbitrary image style transfer.
arXiv Detail & Related papers (2020-03-17T13:07:41Z)