Region-controlled Style Transfer
- URL: http://arxiv.org/abs/2310.15658v1
- Date: Tue, 24 Oct 2023 09:11:34 GMT
- Title: Region-controlled Style Transfer
- Authors: Junjie Kang, Jinsong Wu, Shiqi Jiang
- Abstract summary: We propose a training method that uses a loss function to constrain the style intensity in different regions.
This method guides the transfer strength of style features in different regions based on the gradient relationship between style and content images.
We also introduce a novel feature fusion method that linearly transforms content features to resemble style features while preserving their semantic relationships.
- Score: 3.588126599266807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image style transfer is a challenging task in computational vision. Existing
algorithms transfer the color and texture of style images by controlling the
neural network's feature layers. However, they fail to control the strength of
textures in different regions of the content image. To address this issue, we
propose a training method that uses a loss function to constrain the style
intensity in different regions. This method guides the transfer strength of
style features in different regions based on the gradient relationship between
style and content images. Additionally, we introduce a novel feature fusion
method that linearly transforms content features to resemble style features
while preserving their semantic relationships. Extensive experiments have
demonstrated the effectiveness of our proposed approach.
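The abstract does not spell out the loss or the fusion operator, so the following is a minimal, hedged PyTorch sketch of how such a scheme could look: image gradients are taken with a Sobel filter, the per-region weight is one plausible reading of the "gradient relationship" between style and content images, and the linear fusion is a simple AdaIN-like channel-wise affine map. All function names and formulas here are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch of a gradient-guided, region-weighted style loss plus a simple
# linear feature fusion. The Sobel weighting, the weight formula, and the
# AdaIN-like fusion are illustrative assumptions, NOT the paper's exact method.
import torch
import torch.nn.functional as F


def sobel_magnitude(img: torch.Tensor) -> torch.Tensor:
    """Per-pixel gradient magnitude of a (B, 3, H, W) image in [0, 1]."""
    gray = img.mean(dim=1, keepdim=True)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)


def region_weights(content_img, style_img, feat_hw):
    """Hypothetical per-region style strength: strongly structured content
    regions receive less style, flat regions receive more (one possible reading
    of the 'gradient relationship' between style and content images)."""
    g_c = sobel_magnitude(content_img)
    g_s = sobel_magnitude(F.interpolate(style_img, size=content_img.shape[-2:]))
    w = g_s / (g_c + g_s + 1e-6)
    w = F.interpolate(w, size=feat_hw, mode='bilinear', align_corners=False)
    return w / (w.mean(dim=(2, 3), keepdim=True) + 1e-6)  # mean weight ~ 1


def weighted_style_loss(f_out, f_style, w):
    """Match spatially weighted channel statistics of output vs. style features."""
    mu_o = (f_out * w).sum(dim=(2, 3)) / w.sum(dim=(2, 3))
    mu_s = f_style.mean(dim=(2, 3))
    var_o = ((f_out - mu_o[..., None, None]) ** 2 * w).sum(dim=(2, 3)) / w.sum(dim=(2, 3))
    var_s = f_style.var(dim=(2, 3), unbiased=False)
    return F.mse_loss(mu_o, mu_s) + F.mse_loss(var_o.sqrt(), var_s.sqrt())


def linear_fusion(f_content, f_style):
    """Channel-wise linear map of content features toward style statistics; the
    spatial layout (and hence the semantics) of the content is left untouched."""
    mu_c, std_c = f_content.mean((2, 3), keepdim=True), f_content.std((2, 3), keepdim=True)
    mu_s, std_s = f_style.mean((2, 3), keepdim=True), f_style.std((2, 3), keepdim=True)
    return (f_content - mu_c) / (std_c + 1e-6) * std_s + mu_s
```

In a training loop, weighted_style_loss would be added to the usual content loss at one or more encoder layers (with feat_hw set to that layer's spatial size), and linear_fusion would sit between encoder and decoder.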
Related papers
- Locally Stylized Neural Radiance Fields [30.037649804991315]
We propose a stylization framework for neural radiance fields (NeRF) based on local style transfer.
In particular, we use a hash-grid encoding to learn the embedding of the appearance and geometry components.
We show that our method yields plausible stylization results with novel view synthesis.
arXiv Detail & Related papers (2023-09-19T15:08:10Z)
- Retinex-guided Channel-grouping based Patch Swap for Arbitrary Style Transfer [54.25418866649519]
The basic principle of patch-matching based style transfer is to substitute the patches of the content image feature maps with the closest patches from the style image feature maps.
Existing techniques treat the full-channel style feature patches as simple signal tensors and create new style feature patches via signal-level fusion.
We propose a Retinex theory guided, channel-grouping based patch swap technique to address these limitations (a generic sketch of the plain patch-swap operation appears after this list).
arXiv Detail & Related papers (2023-09-19T11:13:56Z)
- Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate [58.83278629019384]
Style transfer aims to render the style of a given style-reference image onto another given content-reference image.
Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way.
We propose Any-to-Any Style Transfer, which enables users to interactively select styles of regions in the style image and apply them to the prescribed content regions.
arXiv Detail & Related papers (2023-04-19T15:15:36Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Saliency Constrained Arbitrary Image Style Transfer using SIFT and DCNN [22.57205921266602]
When common neural style transfer methods are used, the textures and colors in the style image are usually transferred imperfectly to the content image.
This paper proposes a novel saliency constrained method to reduce or avoid such effects.
The experiments show that the saliency maps of source images can help find the correct matching and avoid artifacts.
arXiv Detail & Related papers (2022-01-14T09:00:55Z)
- Manifold Alignment for Semantically Aligned Style Transfer [61.1274057338588]
We make a new assumption that image features from the same semantic region form a manifold and an image with multiple semantic regions follows a multi-manifold distribution.
Based on this assumption, the style transfer problem is formulated as aligning two multi-manifold distributions.
The proposed framework allows semantically similar regions between the output and the style image to share similar style patterns.
arXiv Detail & Related papers (2020-05-21T16:52:37Z)
- Parameter-Free Style Projection for Arbitrary Style Transfer [64.06126075460722]
This paper proposes a new feature-level style transformation technique, named Style Projection, for parameter-free, fast, and effective content-style transformation.
This paper further presents a real-time feed-forward model to leverage Style Projection for arbitrary image style transfer.
arXiv Detail & Related papers (2020-03-17T13:07:41Z)
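The Retinex-guided entry above states the basic patch-matching principle: replace each content feature patch with its closest style feature patch. For reference, here is a minimal PyTorch sketch of that generic operation, often called style swap; the 3x3 patch size, batch of one, and hard argmax assignment are assumptions, and the Retinex guidance and channel grouping of the cited paper are not reproduced.

```python
# Minimal sketch of the generic patch-swap ("style swap") operation; 3x3 patches
# and a batch of one are assumptions, and the Retinex / channel-grouping
# guidance of the cited paper is NOT implemented here.
import torch
import torch.nn.functional as F


def patch_swap(f_content: torch.Tensor, f_style: torch.Tensor, k: int = 3) -> torch.Tensor:
    """f_content, f_style: (1, C, H, W) feature maps from the same encoder layer."""
    # Extract all k x k style patches as convolution filters: (N, C, k, k).
    patches = F.unfold(f_style, kernel_size=k, stride=1)           # (1, C*k*k, N)
    n = patches.shape[-1]
    patches = patches.transpose(1, 2).reshape(n, f_style.shape[1], k, k)

    # Correlate content features with unit-normalised style patches; the content
    # patch norm is constant per location, so the argmax matches cosine similarity.
    norms = patches.flatten(1).norm(dim=1).clamp_min(1e-8).view(n, 1, 1, 1)
    scores = F.conv2d(f_content, patches / norms, padding=k // 2)  # (1, N, H, W)

    # Hard assignment: one-hot over the best-matching style patch per location.
    one_hot = F.one_hot(scores.argmax(dim=1), num_classes=n)       # (1, H, W, N)
    one_hot = one_hot.permute(0, 3, 1, 2).float()

    # Paste the selected (un-normalised) patches back and average overlaps.
    out = F.conv_transpose2d(one_hot, patches, padding=k // 2)
    overlap = F.conv_transpose2d(one_hot, torch.ones_like(patches), padding=k // 2)
    return out / overlap.clamp_min(1e-8)
```

The swapped feature map would then be decoded back to an image; overlapping pastes are averaged by the overlap count so the reconstruction stays smooth.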
This list is automatically generated from the titles and abstracts of the papers in this site.