Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate
- URL: http://arxiv.org/abs/2304.09728v2
- Date: Thu, 20 Apr 2023 04:17:31 GMT
- Title: Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate
- Authors: Songhua Liu, Jingwen Ye, Xinchao Wang
- Abstract summary: Style transfer aims to render the style of a given reference image onto another image that provides the content.
Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way.
We propose Any-to-Any Style Transfer, which enables users to interactively select styles of regions in the style image and apply them to the prescribed content regions.
- Score: 58.83278629019384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Style transfer aims to render the style of a given reference image onto
another image that provides the content, and has been widely adopted in artistic
generation and image editing. Existing approaches either apply the holistic style
of the style image in a global manner, or migrate local colors and textures of the
style image to the content counterparts in a pre-defined way. In either case, only
one result can be generated for a specific pair of content and style images, which
lacks flexibility and makes it hard to satisfy different users with different
preferences. We propose a novel strategy, termed Any-to-Any Style Transfer, to
address this drawback: it enables users to interactively select styles of regions
in the style image and apply them to prescribed content regions. In this way,
personalizable style transfer is achieved through human-computer interaction. At
the heart of our approach lie (1) a region segmentation module based on Segment
Anything, which supports region selection with only a few clicks or drawings on
images and thus takes user inputs conveniently and flexibly; and (2) an attention
fusion module, which converts user inputs into controlling signals for the style
transfer model. Experiments demonstrate the effectiveness of our approach for
personalizable style transfer. Notably, our approach works in a plug-and-play
manner, is portable to any style transfer method, and enhances controllability.
Our code is available at https://github.com/Huage001/Transfer-Any-Style.
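To make the attention fusion idea concrete, here is a minimal, illustrative sketch: a user-selected content region attends only to a user-selected style region, while the remaining content features are left untouched. The function name region_masked_attention, the feature shapes, and the blending rule are assumptions for illustration; in the actual pipeline the binary masks would come from Segment Anything given the user's clicks or strokes, and the masking would plug into an existing attention-based style transfer model rather than this standalone function.

```python
import torch


def region_masked_attention(content_feat, style_feat, content_mask, style_mask):
    """Fuse style into a user-selected content region via masked attention.

    content_feat: (C, Hc, Wc) content features from a style-transfer encoder
    style_feat:   (C, Hs, Ws) style features
    content_mask: (Hc, Wc) bool mask of the user-selected content region
    style_mask:   (Hs, Ws) bool mask of the user-selected style region
    Inside the content mask, each position attends only to style positions
    inside the style mask; outside it, the original content features are kept.
    """
    C, Hc, Wc = content_feat.shape
    q = content_feat.reshape(C, -1).t()              # (Nc, C) queries
    k = style_feat.reshape(C, -1).t()                # (Ns, C) keys = values

    attn = (q @ k.t()) / C ** 0.5                    # (Nc, Ns) similarities
    # Restrict attention to the user-selected style region.
    attn = attn.masked_fill(~style_mask.reshape(1, -1), float("-inf"))
    attn = attn.softmax(dim=-1)

    stylized = (attn @ k).t().reshape(C, Hc, Wc)     # re-rendered features

    # Keep the original content features outside the selected content region.
    m = content_mask.unsqueeze(0).float()
    return m * stylized + (1.0 - m) * content_feat


if __name__ == "__main__":
    # Toy shapes; real masks would come from Segment Anything, given the
    # user's clicks or strokes, and be resized to the feature resolution.
    content = torch.randn(64, 32, 32)
    style = torch.randn(64, 32, 32)
    cmask = torch.zeros(32, 32, dtype=torch.bool)
    cmask[8:24, 8:24] = True                          # selected content region
    smask = torch.zeros(32, 32, dtype=torch.bool)
    smask[:, :16] = True                              # selected style region
    out = region_masked_attention(content, style, cmask, smask)
    print(out.shape)                                  # torch.Size([64, 32, 32])
```

Because the masking only constrains where attention may look, the same trick can be dropped into any attention-based transfer backbone, which is consistent with the plug-and-play claim above.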
Related papers
- FAGStyle: Feature Augmentation on Geodesic Surface for Zero-shot Text-guided Diffusion Image Style Transfer [2.3293561091456283]
The goal of image style transfer is to render an image guided by a style reference while maintaining the original content.
We introduce FAGStyle, a zero-shot text-guided diffusion image style transfer method.
Our approach enhances inter-patch information interaction by incorporating the Sliding Window Crop technique.
arXiv Detail & Related papers (2024-08-20T04:20:11Z)
- LEAST: "Local" text-conditioned image style transfer [2.47996065798589]
Text-conditioned style transfer enables users to communicate their desired artistic styles through text descriptions.
We evaluate the text-conditioned image editing and style transfer techniques on their fine-grained understanding of user prompts for precise "local" style transfer.
arXiv Detail & Related papers (2024-05-25T19:06:17Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature (see the illustrative sketch after this list).
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Interactive Style Transfer: All is Your Palette [74.06681967115594]
We propose a drawing-like interactive style transfer (IST) method, by which users can interactively create a harmonious-style image.
Our IST method can serve as a brush that dips style from anywhere and then paints it onto any region of the target content image.
arXiv Detail & Related papers (2022-03-25T06:38:46Z)
- CAMS: Color-Aware Multi-Style Transfer [46.550390398057985]
Style transfer aims to manipulate the appearance of a source image, or "content" image, to share similar texture and colors of a target "style" image.
A commonly used approach to assist in transferring styles is based on Gram matrix optimization.
We propose a color-aware multi-style transfer method that generates aesthetically pleasing results while preserving the style-color correlation between style and generated images.
arXiv Detail & Related papers (2021-06-26T01:15:09Z)
- Manifold Alignment for Semantically Aligned Style Transfer [61.1274057338588]
We make a new assumption that image features from the same semantic region form a manifold and an image with multiple semantic regions follows a multi-manifold distribution.
Based on this assumption, the style transfer problem is formulated as aligning two multi-manifold distributions.
The proposed framework allows semantically similar regions between the output and the style image to share similar style patterns.
arXiv Detail & Related papers (2020-05-21T16:52:37Z)
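As a rough illustration of the input-dependent temperature mentioned in the UCAST entry above, the sketch below contrasts style codes with an InfoNCE-style loss whose temperature is predicted from the anchor. The module name AdaptiveTemperatureContrast, the temperature head, and the [0.05, 0.5] range are assumptions for illustration only, not the published design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveTemperatureContrast(nn.Module):
    """InfoNCE-style contrast over style codes with an input-dependent
    temperature; the head and bounds here are illustrative assumptions."""

    def __init__(self, dim, t_min=0.05, t_max=0.5):
        super().__init__()
        self.t_min, self.t_max = t_min, t_max
        # Predict a per-sample temperature from the anchor's style code.
        self.temp_head = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, anchor, positive, negatives):
        # anchor, positive: (B, D); negatives: (B, K, D) style codes.
        anchor = F.normalize(anchor, dim=-1)
        positive = F.normalize(positive, dim=-1)
        negatives = F.normalize(negatives, dim=-1)

        # Temperature depends on the input, bounded to [t_min, t_max].
        tau = self.t_min + (self.t_max - self.t_min) * self.temp_head(anchor)

        pos = (anchor * positive).sum(-1, keepdim=True)       # (B, 1)
        neg = torch.einsum("bd,bkd->bk", anchor, negatives)   # (B, K)
        logits = torch.cat([pos, neg], dim=1) / tau            # (B, 1+K)
        labels = torch.zeros(logits.size(0), dtype=torch.long,
                             device=logits.device)             # positive = class 0
        return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    loss_fn = AdaptiveTemperatureContrast(dim=128)
    a, p = torch.randn(8, 128), torch.randn(8, 128)
    n = torch.randn(8, 16, 128)
    print(loss_fn(a, p, n).item())
```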