Arbitrary Video Style Transfer via Multi-Channel Correlation
- URL: http://arxiv.org/abs/2009.08003v2
- Date: Wed, 20 Jan 2021 03:22:05 GMT
- Title: Arbitrary Video Style Transfer via Multi-Channel Correlation
- Authors: Yingying Deng, Fan Tang, Weiming Dong, Haibin Huang, Chongyang Ma,
Changsheng Xu
- Abstract summary: We propose the Multi-Channel Correlation network (MCCNet) to fuse exemplar style features and input content features for efficient style transfer.
MCCNet works directly in the feature space of the style and content domains, where it learns to rearrange and fuse style features based on their similarity with content features.
The outputs generated by the MCC module are features that contain the desired style patterns and can be further decoded into images with vivid style textures.
- Score: 84.75377967652753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video style transfer is attracting increasing attention in the AI
community for its numerous applications, such as augmented reality and
animation production.
Compared with traditional image style transfer, performing this task on video
presents new challenges: how to effectively generate satisfactory stylized
results for any specified style, and maintain temporal coherence across frames
at the same time. Towards this end, we propose the Multi-Channel Correlation
network (MCCNet), which can be trained to fuse the exemplar style features and input
content features for efficient style transfer while naturally maintaining the
coherence of input videos. Specifically, MCCNet works directly on the feature
space of style and content domain where it learns to rearrange and fuse style
features based on their similarity with content features. The outputs generated
by the MCC module are features that contain the desired style patterns and can
be further decoded into images with vivid style textures. Moreover, MCCNet is
designed to explicitly align the features to the input, which ensures that the
output maintains the content structures as well as the temporal continuity. To
further improve the performance of MCCNet under complex lighting conditions, we
also introduce an illumination loss during training. Qualitative and quantitative
evaluations demonstrate that MCCNet performs well in both arbitrary video and
image style transfer tasks.
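The abstract names two concrete mechanisms: a multi-channel correlation that rearranges and fuses style features according to their similarity with content features, and an illumination loss that stabilizes results under lighting changes. The paper's exact formulation is not reproduced in this listing, so the PyTorch sketch below only illustrates the general idea under stated assumptions: mcc_fuse mixes per-channel style statistics with correlation-derived weights while keeping the content's spatial layout, and illumination_loss penalizes output changes under a random brightness perturbation. All function names and design details here are illustrative assumptions, not MCCNet's actual implementation.

```python
# Illustrative sketch only: a correlation-weighted feature fusion and a
# perturbation-based illumination loss in the spirit of the abstract.
# Names and details are assumptions, not the authors' MCCNet code.
import torch
import torch.nn.functional as F

def mcc_fuse(content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
    """Fuse style into content features via cross-channel correlation.

    content: (B, C, Hc, Wc), style: (B, C, Hs, Ws) encoder features.
    Style statistics are mixed across channels by how strongly each
    style channel correlates with each content channel; the content's
    spatial layout is preserved, which is what keeps frames coherent.
    """
    cf = content.flatten(2)                        # (B, C, Nc)
    sf = style.flatten(2)                          # (B, C, Ns)
    c_desc = F.normalize(cf.mean(dim=2), dim=1)    # per-channel descriptors
    s_desc = F.normalize(sf.mean(dim=2), dim=1)
    # Cross-channel correlation weights between content and style.
    corr = torch.softmax(c_desc.unsqueeze(2) * s_desc.unsqueeze(1), dim=2)
    # Correlation-mixed style statistics (mean and std per channel).
    s_mean = torch.bmm(corr, sf.mean(dim=2).unsqueeze(2))         # (B, C, 1)
    s_std = torch.bmm(corr, (sf.std(dim=2) + 1e-6).unsqueeze(2))  # (B, C, 1)
    # Re-stylize the normalized content features.
    c_mean = cf.mean(dim=2, keepdim=True)
    c_std = cf.std(dim=2, keepdim=True) + 1e-6
    out = (cf - c_mean) / c_std * s_std + s_mean
    return out.view_as(content)

def illumination_loss(stylizer, frame: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
    """One plausible reading of an illumination loss: the stylized result
    of a brightness-perturbed frame should match the stylized result of
    the original frame. `stylizer` is any model mapping (frame, style)
    to a stylized image in [0, 1]; its signature is an assumption.
    """
    gain = 1.0 + 0.2 * (torch.rand(frame.size(0), 1, 1, 1, device=frame.device) - 0.5)
    lit = (frame * gain).clamp(0.0, 1.0)           # globally re-lit frame
    return F.l1_loss(stylizer(lit, style), stylizer(frame, style))
```

Because the fused features inherit the content's spatial arrangement, consecutive frames with similar content receive similar statistics, which is the intuition behind the claimed temporal coherence.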
Related papers
- UniVST: A Unified Framework for Training-free Localized Video Style Transfer [66.69471376934034]
This paper presents UniVST, a unified framework for localized video style transfer.
It operates without the need for training, offering a distinct advantage over existing methods that transfer style across entire videos.
arXiv Detail & Related papers (2024-10-26T05:28:02Z)
- Puff-Net: Efficient Style Transfer with Pure Content and Style Feature Fusion Network [32.12413686394824]
Style transfer aims to render an image with the artistic features of a style image, while maintaining the original structure.
It is difficult for CNN-based methods to handle global information and long-range dependencies between input images.
We propose a novel network, termed Puff-Net, i.e., a pure content and style feature fusion network.
arXiv Detail & Related papers (2024-05-30T07:41:07Z)
- Rethink Arbitrary Style Transfer with Transformer and Contrastive Learning [11.900404048019594]
In this paper, we introduce an innovative technique to improve the quality of stylized images.
Firstly, we propose Style Consistency Instance Normalization (SCIN), a method to refine the alignment between content and style features.
In addition, we have developed an Instance-based Contrastive Learning (ICL) approach designed to understand relationships among various styles.
arXiv Detail & Related papers (2024-04-21T08:52:22Z)
- Line Search-Based Feature Transformation for Fast, Stable, and Tunable Content-Style Control in Photorealistic Style Transfer [26.657485176782934]
Photorealistic style transfer is the task of synthesizing a realistic-looking image when adapting the content from one image to appear in the style of another image.
Modern models embed a transformation that fuses features describing the content image and style image and then decodes the resulting feature into a stylized image.
We introduce a general-purpose transformation that enables controlling the balance between how much content is preserved and the strength of the infused style; a toy interpolation illustrating this trade-off is sketched below.
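The entry highlights a single control over how much content is preserved versus how strongly style is infused. The paper's line-search transformation is not described in this listing; as a toy illustration of the dial being controlled, a scalar interpolation between content features and stylized features already behaves this way (the name blend and the default alpha are assumptions):

```python
import torch

def blend(content_feat: torch.Tensor, stylized_feat: torch.Tensor,
          alpha: float = 0.6) -> torch.Tensor:
    """Toy content-style trade-off: alpha=0 returns the content features
    untouched, alpha=1 returns the fully stylized features. This only
    illustrates the knob; the paper instead derives its transformation
    via a line search, which this sketch does not attempt.
    """
    return (1.0 - alpha) * content_feat + alpha * stylized_feat
```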
arXiv Detail & Related papers (2022-10-12T08:05:49Z)
- CCPL: Contrastive Coherence Preserving Loss for Versatile Style Transfer [58.020470877242865]
We devise a universally versatile style transfer method capable of performing artistic, photo-realistic, and video style transfer jointly.
We make a mild and reasonable assumption that global inconsistency is dominated by local inconsistencies and devise a generic Contrastive Coherence Preserving Loss (CCPL) applied to local patches.
CCPL can preserve the coherence of the content source during style transfer without degrading stylization (see the simplified sketch below).
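Based only on the description above (local patches plus a contrastive objective), the following sketch illustrates the idea: difference vectors between neighboring feature locations in the content source serve as positives for the corresponding difference vectors in the stylized output, with other locations as negatives. The sample count, neighbor direction, and names are assumptions; the actual CCPL samples neighbors in multiple directions.

```python
import torch
import torch.nn.functional as F

def ccpl_sketch(content_feat: torch.Tensor, output_feat: torch.Tensor,
                tau: float = 0.07, n_samples: int = 256) -> torch.Tensor:
    """Simplified patch-coherence contrastive loss in the spirit of CCPL.

    The change between neighboring locations should be consistent
    between the content source and the stylized output, which penalizes
    local flicker without constraining the style itself. This sketch
    uses horizontal neighbors only.
    """
    b, c = content_feat.shape[:2]
    dc = content_feat[..., 1:] - content_feat[..., :-1]   # (B, C, H, W-1)
    do = output_feat[..., 1:] - output_feat[..., :-1]
    dc = F.normalize(dc.flatten(2).transpose(1, 2).reshape(-1, c), dim=1)
    do = F.normalize(do.flatten(2).transpose(1, 2).reshape(-1, c), dim=1)
    # Subsample locations; each output difference must match the content
    # difference at the same location (positive) against the others.
    idx = torch.randperm(dc.size(0), device=dc.device)[:n_samples]
    logits = do[idx] @ dc[idx].t() / tau
    labels = torch.arange(idx.numel(), device=dc.device)
    return F.cross_entropy(logits, labels)
```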
arXiv Detail & Related papers (2022-07-11T12:09:41Z)
- StyTr^2: Unbiased Image Style Transfer with Transformers [59.34108877969477]
The goal of image style transfer is to render an image with artistic features guided by a style reference while maintaining the original content.
Traditional neural style transfer methods are usually biased, and content leakage can be observed by running the style transfer process several times with the same reference image.
We propose a transformer-based approach, namely StyTr2, to address this critical issue.
arXiv Detail & Related papers (2021-05-30T15:57:09Z)
- Arbitrary Style Transfer via Multi-Adaptation Network [109.6765099732799]
A desired style transfer, given a content image and referenced style painting, would render the content image with the color tone and vivid stroke patterns of the style painting.
A new disentanglement loss function enables our network to extract main style patterns and exact content structures to adapt to various input images.
arXiv Detail & Related papers (2020-05-27T08:00:22Z)
- Parameter-Free Style Projection for Arbitrary Style Transfer [64.06126075460722]
This paper proposes a new feature-level style transformation technique, named Style Projection, for parameter-free, fast, and effective content-style transformation.
This paper further presents a real-time feed-forward model to leverage Style Projection for arbitrary image style transfer; a generic transform in the same spirit is sketched after this entry.
arXiv Detail & Related papers (2020-03-17T13:07:41Z)
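The blurb calls Style Projection parameter-free and feature-level but does not spell out the operator, so the sketch below shows a generic transform from the same family rather than the paper's verified algorithm: per-channel rank matching, which re-distributes style activation values onto the content's spatial arrangement with no learned parameters.

```python
import torch
import torch.nn.functional as F

def rank_match(content_feat: torch.Tensor, style_feat: torch.Tensor) -> torch.Tensor:
    """Parameter-free per-channel rank matching (histogram transfer).

    For each channel, every content activation is replaced by the style
    activation of the same rank, so the output carries the style's value
    distribution while keeping the content's spatial structure. This is
    a generic member of the family Style Projection belongs to, not
    necessarily the paper's exact operator.
    """
    b, c, h, w = content_feat.shape
    cf = content_feat.flatten(2)                   # (B, C, N)
    sf = style_feat.flatten(2)                     # (B, C, M)
    # Resample style values so each channel has N entries, then sort.
    sf = F.interpolate(sf, size=cf.size(2), mode='nearest')
    s_sorted, _ = sf.sort(dim=2)
    # Rank of each content activation within its channel.
    ranks = cf.argsort(dim=2).argsort(dim=2)
    out = torch.gather(s_sorted, 2, ranks)
    return out.view(b, c, h, w)
```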