CCPL: Contrastive Coherence Preserving Loss for Versatile Style Transfer
- URL: http://arxiv.org/abs/2207.04808v2
- Date: Wed, 13 Jul 2022 14:07:03 GMT
- Title: CCPL: Contrastive Coherence Preserving Loss for Versatile Style Transfer
- Authors: Zijie Wu, Zhen Zhu, Junping Du and Xiang Bai
- Abstract summary: We devise a universally versatile style transfer method capable of performing artistic, photo-realistic, and video style transfer jointly.
We make a mild and reasonable assumption that global inconsistency is dominated by local inconsistencies and devise a generic Contrastive Coherence Preserving Loss (CCPL) applied to local patches.
CCPL can preserve the coherence of the content source during style transfer without degrading stylization.
- Score: 58.020470877242865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we aim to devise a universally versatile style transfer method
capable of performing artistic, photo-realistic, and video style transfer
jointly, without seeing videos during training. Previous single-frame methods
assume a strong constraint on the whole image to maintain temporal consistency,
which could be violated in many cases. Instead, we make a mild and reasonable
assumption that global inconsistency is dominated by local inconsistencies and
devise a generic Contrastive Coherence Preserving Loss (CCPL) applied to local
patches. CCPL can preserve the coherence of the content source during style
transfer without degrading stylization. Moreover, it has a neighbor-regulating
mechanism, which greatly reduces local distortions and considerably improves
visual quality. Aside from its superior performance on versatile
style transfer, it can be easily extended to other tasks, such as
image-to-image translation. In addition, to better fuse content and style features,
we propose Simple Covariance Transformation (SCT) to effectively align
second-order statistics of the content feature with the style feature.
Experiments demonstrate the effectiveness of the resulting model for versatile
style transfer, when armed with CCPL.
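To make the core idea concrete, the following is a minimal, hypothetical PyTorch sketch of a contrastive coherence-preserving loss on local patches, not the authors' released implementation: for randomly sampled locations, the difference vector between a point and one of its neighbors in the stylized feature map is pulled toward the corresponding difference vector in the content feature map (positive pair) and pushed away from the differences computed at other locations (negatives). The function name `ccpl_sketch`, the anchor count, and the temperature `tau` are illustrative assumptions.
```python
import torch
import torch.nn.functional as F

def ccpl_sketch(feat_content: torch.Tensor,
                feat_stylized: torch.Tensor,
                num_anchors: int = 64,
                tau: float = 0.07) -> torch.Tensor:
    """feat_*: (B, C, H, W) feature maps from the same encoder layer."""
    B, C, H, W = feat_content.shape
    device = feat_content.device

    # Sample random anchor locations and one random 8-neighbour offset per anchor.
    ys = torch.randint(1, H - 1, (num_anchors,), device=device)
    xs = torch.randint(1, W - 1, (num_anchors,), device=device)
    offsets = torch.tensor([[-1, -1], [-1, 0], [-1, 1], [0, -1],
                            [0, 1], [1, -1], [1, 0], [1, 1]], device=device)
    nbr = offsets[torch.randint(0, 8, (num_anchors,), device=device)]
    ny, nx = ys + nbr[:, 0], xs + nbr[:, 1]

    # Difference vectors between each anchor and its neighbour, per image.
    d_c = feat_content[:, :, ys, xs] - feat_content[:, :, ny, nx]    # (B, C, N)
    d_s = feat_stylized[:, :, ys, xs] - feat_stylized[:, :, ny, nx]  # (B, C, N)
    d_c = F.normalize(d_c, dim=1).permute(0, 2, 1)  # (B, N, C)
    d_s = F.normalize(d_s, dim=1).permute(0, 2, 1)  # (B, N, C)

    # InfoNCE: the matching content difference is the positive, all others negatives.
    logits = torch.bmm(d_s, d_c.transpose(1, 2)) / tau  # (B, N, N)
    labels = torch.arange(num_anchors, device=device).expand(B, -1)
    return F.cross_entropy(logits.reshape(-1, num_anchors), labels.reshape(-1))
```
This loss would typically be added, with some weight, to the usual content and style losses of a feed-forward style-transfer network.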
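The abstract also mentions Simple Covariance Transformation (SCT) for aligning second-order statistics of the content feature with the style feature. As a rough illustration only (the paper's SCT module relies on learned layers rather than an explicit eigendecomposition), the sketch below shows the classical whitening-and-coloring formulation of such an alignment; all names are assumptions for illustration.
```python
import torch

def align_second_order(f_c: torch.Tensor, f_s: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """f_c, f_s: (C, N) matrices of flattened content / style features."""
    device = f_c.device
    mu_c, mu_s = f_c.mean(dim=1, keepdim=True), f_s.mean(dim=1, keepdim=True)
    c, s = f_c - mu_c, f_s - mu_s

    # Covariance matrices (C x C), regularized for numerical stability.
    cov_c = c @ c.t() / (c.shape[1] - 1) + eps * torch.eye(c.shape[0], device=device)
    cov_s = s @ s.t() / (s.shape[1] - 1) + eps * torch.eye(s.shape[0], device=device)

    # Whitening: map the content features to identity covariance.
    e_c, v_c = torch.linalg.eigh(cov_c)
    whiten = v_c @ torch.diag(e_c.clamp(min=eps).rsqrt()) @ v_c.t()

    # Coloring: impose the style covariance and re-center on the style mean.
    e_s, v_s = torch.linalg.eigh(cov_s)
    color = v_s @ torch.diag(e_s.clamp(min=eps).sqrt()) @ v_s.t()

    return color @ (whiten @ c) + mu_s
```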
Related papers
- UniVST: A Unified Framework for Training-free Localized Video Style Transfer [66.69471376934034]
This paper presents UniVST, a unified framework for localized video style transfer.
It operates without the need for training, offering a distinct advantage over existing methods that transfer style across entire videos.
arXiv Detail & Related papers (2024-10-26T05:28:02Z)
- Beyond Entropy: Style Transfer Guided Single Image Continual Test-Time Adaptation [1.6497679785422956]
We present BESTTA, a novel single image continual test-time adaptation method guided by style transfer.
We demonstrate that BESTTA effectively adapts to the continually changing target environment, leveraging only a single image.
Remarkably, despite training only two parameters in a BeIN layer, which consumes the least memory, BESTTA outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2023-11-30T06:14:24Z)
- Master: Meta Style Transformer for Controllable Zero-Shot and Few-Shot Artistic Style Transfer [83.1333306079676]
In this paper, we devise a novel Transformer model, termed Master, specifically for style transfer.
In the proposed model, different Transformer layers share a common group of parameters, which (1) reduces the total number of parameters, (2) leads to more robust training convergence, and (3) makes it easy to control the degree of stylization.
Experiments demonstrate the superiority of Master under both zero-shot and few-shot style transfer settings.
arXiv Detail & Related papers (2023-04-24T04:46:39Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- ColoristaNet for Photorealistic Video Style Transfer [15.38024996795316]
Photorealistic style transfer aims to transfer the artistic style of an image onto an input image or video while keeping photorealism.
We propose a self-supervised style transfer framework, which contains a style removal part and a style restoration part.
Experiments demonstrate that ColoristaNet can achieve better stylization effects when compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2022-12-19T04:49:26Z)
- Bi-level Feature Alignment for Versatile Image Translation and Manipulation [88.5915443957795]
Generative adversarial networks (GANs) have achieved great success in image translation and manipulation.
High-fidelity image generation with faithful style control remains a grand challenge in computer vision.
This paper presents a versatile image translation and manipulation framework that achieves accurate semantic and style guidance.
arXiv Detail & Related papers (2021-07-07T05:26:29Z)
- Arbitrary Video Style Transfer via Multi-Channel Correlation [84.75377967652753]
We propose Multi-Channel Correlation network (MCCNet) to fuse exemplar style features and input content features for efficient style transfer.
MCCNet works directly on the feature space of style and content domain where it learns to rearrange and fuse style features based on similarity with content features.
The outputs generated by MCCNet are features containing the desired style patterns, which can further be decoded into images with vivid style textures.
arXiv Detail & Related papers (2020-09-17T01:30:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.