In the light of feature distributions: moment matching for Neural Style Transfer
- URL: http://arxiv.org/abs/2103.07208v1
- Date: Fri, 12 Mar 2021 11:00:44 GMT
- Title: In the light of feature distributions: moment matching for Neural Style Transfer
- Authors: Nikolai Kalischek, Jan Dirk Wegner, Konrad Schindler
- Abstract summary: Style transfer aims to render the content of a given image in the graphical/artistic style of another image.
We show that most current implementations of that concept have important theoretical and practical limitations.
We propose a novel approach that matches the desired style more precisely, while still being computationally efficient.
- Score: 27.25600860698314
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Style transfer aims to render the content of a given image in the
graphical/artistic style of another image. The fundamental concept underlying
Neural Style Transfer (NST) is to interpret style as a distribution in the
feature space of a Convolutional Neural Network, such that a desired style can
be achieved by matching its feature distribution. We show that most current
implementations of that concept have important theoretical and practical
limitations, as they only partially align the feature distributions. We propose
a novel approach that matches the distributions more precisely, thus
reproducing the desired style more faithfully, while still being
computationally efficient. Specifically, we adapt the dual form of Central
Moment Discrepancy (CMD), as recently proposed for domain adaptation, to
minimize the difference between the target style and the feature distribution
of the output image. The dual interpretation of this metric explicitly matches
all higher-order centralized moments and is therefore a natural extension of
existing NST methods that only take into account the first and second moments.
Our experiments confirm that the strong theoretical properties also translate
to visually better style transfer, and better disentangle style from semantic
image content.
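To make this concrete, below is a minimal sketch of a CMD-style moment-matching loss in PyTorch. It is not the authors' reference implementation: the interface, the moment order K, and the choice of norm are illustrative assumptions. Features are taken as (C, N) tensors (C channels sampled at N spatial positions), and activations are assumed bounded, e.g. rescaled to [0, 1], so that the higher-order moments remain well scaled.

```python
import torch

def cmd_style_loss(feat_x: torch.Tensor, feat_y: torch.Tensor, K: int = 5) -> torch.Tensor:
    """Central Moment Discrepancy between two feature distributions.

    feat_x, feat_y: (C, N) tensors -- C channels observed at N spatial
    positions, interpreted as N samples of a C-dimensional distribution.
    Assumes activations rescaled to [0, 1], so the usual per-order
    normalization factors of CMD reduce to 1. Illustrative sketch only.
    """
    mean_x = feat_x.mean(dim=1)              # first moment, one entry per channel
    mean_y = feat_y.mean(dim=1)
    loss = torch.norm(mean_x - mean_y)       # match the means

    cent_x = feat_x - mean_x.unsqueeze(1)    # centralize the samples
    cent_y = feat_y - mean_y.unsqueeze(1)
    for k in range(2, K + 1):
        # k-th order centralized moment vector, one entry per channel
        mom_x = cent_x.pow(k).mean(dim=1)
        mom_y = cent_y.pow(k).mean(dim=1)
        loss = loss + torch.norm(mom_x - mom_y)
    return loss
```

Truncating the sum recovers existing practice: matching only the mean, or the mean plus the second-order moments, corresponds to the first- and second-moment methods the abstract refers to, while increasing K aligns progressively finer aspects of the feature distribution.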
Related papers
- UniVST: A Unified Framework for Training-free Localized Video Style Transfer [66.69471376934034]
This paper presents UniVST, a unified framework for localized video style transfer.
It operates without the need for training, offering a distinct advantage over existing methods that transfer style across entire videos.
arXiv Detail & Related papers (2024-10-26T05:28:02Z)
- DiffuseST: Unleashing the Capability of the Diffusion Model for Style Transfer [13.588643982359413]
Style transfer aims to fuse the artistic representation of a style image with the structural information of a content image.
Existing methods train specific networks or utilize pre-trained models to learn content and style features.
We propose a novel and training-free approach for style transfer, combining textual embedding with spatial features.
arXiv Detail & Related papers (2024-10-19T06:42:43Z)
- ZePo: Zero-Shot Portrait Stylization with Faster Sampling [61.14140480095604]
This paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps.
We propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control.
arXiv Detail & Related papers (2024-08-10T08:53:41Z)
- Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer [19.355744690301403]
We introduce a novel artistic style transfer method based on a pre-trained large-scale diffusion model without any optimization.
Our experimental results demonstrate that our proposed method surpasses state-of-the-art methods in both conventional and diffusion-based style transfer baselines.
arXiv Detail & Related papers (2023-12-11T09:53:12Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature (see the adaptive-temperature sketch after this list).
Our framework consists of three key components: a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of the style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Diffusion-based Image Translation using Disentangled Style and Content Representation [51.188396199083336]
Diffusion-based image translation guided by semantic texts or a single target image has enabled flexible style transfer.
However, it is often difficult to maintain the original content of the image during reverse diffusion.
We present a novel diffusion-based unsupervised image translation method using disentangled style and content representation.
Our experimental results show that the proposed method outperforms state-of-the-art baseline models in both text-guided and image-guided translation tasks.
arXiv Detail & Related papers (2022-09-30T06:44:37Z)
- Learning Graph Neural Networks for Image Style Transfer [131.73237185888215]
State-of-the-art parametric and non-parametric style transfer approaches are prone to either distorted local style patterns due to global statistics alignment, or unpleasing artifacts resulting from patch mismatching.
In this paper, we study a novel semi-parametric neural style transfer framework that alleviates the deficiency of both parametric and non-parametric stylization.
arXiv Detail & Related papers (2022-07-24T07:41:31Z)
- Style is a Distribution of Features [2.398608007786179]
Neural style transfer is an image generation technique that uses a convolutional neural network (CNN) to merge the content of one image with the style of another.
We present a new algorithm for style transfer that fully extracts the style from the features by redefining the style loss as the Wasserstein distance between feature distributions (see the sliced-Wasserstein sketch after this list).
arXiv Detail & Related papers (2020-07-25T21:17:51Z)
- Manifold Alignment for Semantically Aligned Style Transfer [61.1274057338588]
We make a new assumption that image features from the same semantic region form a manifold and an image with multiple semantic regions follows a multi-manifold distribution.
Based on this assumption, the style transfer problem is formulated as aligning two multi-manifold distributions.
The proposed framework allows semantically similar regions of the output and the style image to share similar style patterns.
arXiv Detail & Related papers (2020-05-21T16:52:37Z)
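For the UCAST entry above, the following is a hypothetical sketch of what an input-dependent temperature in a contrastive (InfoNCE-style) loss can look like; temp_head, the softplus floor, and the tensor shapes are illustrative assumptions, not details taken from that paper.

```python
import torch
import torch.nn.functional as F

def adaptive_infonce(anchor, positive, negatives, temp_head):
    """InfoNCE with a per-sample temperature predicted from the anchor.

    anchor, positive: (B, D); negatives: (B, M, D).
    temp_head: a small module mapping (B, D) -> (B, 1); hypothetical.
    """
    # input-dependent temperature, kept strictly positive via softplus
    tau = F.softplus(temp_head(anchor)) + 1e-2                # (B, 1)
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)
    pos_logit = (a * p).sum(dim=-1, keepdim=True) / tau       # (B, 1)
    neg_logits = torch.einsum('bd,bmd->bm', a, n) / tau       # (B, M)
    logits = torch.cat([pos_logit, neg_logits], dim=1)        # (B, 1+M)
    # the positive sits at index 0 of each row
    target = torch.zeros(a.shape[0], dtype=torch.long, device=a.device)
    return F.cross_entropy(logits, target)
```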
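For the "Style is a Distribution of Features" entry, a common way to make a Wasserstein style loss tractable is the sliced 1-D estimator; the sketch below is a generic illustration under that assumption, not that paper's exact estimator. Both feature sets are assumed to be (C, N) with the same sample count N.

```python
import torch

def sliced_wasserstein_loss(feat_x: torch.Tensor, feat_y: torch.Tensor,
                            n_proj: int = 64) -> torch.Tensor:
    """Sliced Wasserstein-2 distance between two (C, N) feature sets.

    Projects both sample sets onto random unit directions and compares
    the sorted 1-D projections. Assumes equal sample counts N.
    """
    c = feat_x.shape[0]
    dirs = torch.randn(n_proj, c, device=feat_x.device)
    dirs = dirs / dirs.norm(dim=1, keepdim=True)   # unit projection directions
    proj_x, _ = torch.sort(dirs @ feat_x, dim=1)   # (n_proj, N), sorted slices
    proj_y, _ = torch.sort(dirs @ feat_y, dim=1)
    # quantile-matched squared differences approximate W_2^2 per slice
    return (proj_x - proj_y).pow(2).mean()
```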