Multiple Style Transfer via Variational AutoEncoder
- URL: http://arxiv.org/abs/2110.07375v1
- Date: Wed, 13 Oct 2021 06:47:13 GMT
- Title: Multiple Style Transfer via Variational AutoEncoder
- Authors: Zhi-Song Liu and Vicky Kalogeiton and Marie-Paule Cani
- Abstract summary: We propose ST-VAE, a Variational AutoEncoder for latent space-based style transfer.
It performs multiple style transfer by projecting nonlinear styles to a linear latent space, enabling styles to be merged via linear interpolation before the new style is transferred to the content image.
- Score: 16.797476504327665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern works on style transfer focus on transferring style from a single
image. Recently, some approaches study multiple style transfer; these, however,
are either too slow or fail to mix multiple styles. We propose ST-VAE, a
Variational AutoEncoder for latent space-based style transfer. It performs
multiple style transfer by projecting nonlinear styles to a linear latent
space, enabling styles to be merged via linear interpolation before transferring
the new style to the content image. To evaluate ST-VAE, we experiment on COCO
for single and multiple style transfer. We also present a case study revealing
that ST-VAE outperforms other methods while being faster and more flexible, setting
a new path for multiple style transfer.
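As a rough illustration of the mechanism described in the abstract, the sketch below encodes style features with a small VAE and merges two style codes by linear interpolation in the latent space before decoding. It is a minimal PyTorch sketch under assumed shapes and module names (StyleVAE, merge_styles, the 512/128 dimensions are all illustrative), not the authors' implementation.

```python
import torch
import torch.nn as nn

class StyleVAE(nn.Module):
    """Illustrative VAE over style feature vectors (not the authors' code)."""
    def __init__(self, style_dim=512, latent_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(style_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, style_dim))

    def encode(self, s):
        h = self.enc(s)
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z):
        return self.dec(z)

def merge_styles(vae, style_feats, weights):
    """Linearly interpolate style codes in the VAE latent space."""
    zs = [vae.encode(s)[0] for s in style_feats]      # use the latent means
    z = sum(w * zi for w, zi in zip(weights, zs))     # convex combination
    return vae.decode(z)                              # merged style code

# usage: two style feature vectors mixed 70/30
vae = StyleVAE()
s1, s2 = torch.randn(1, 512), torch.randn(1, 512)
merged = merge_styles(vae, [s1, s2], [0.7, 0.3])
```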
Related papers
- Inversion-Free Style Transfer with Dual Rectified Flows [57.02757226679549]
We propose a novel inversion-free style transfer framework based on dual rectified flows. Our approach predicts content and style trajectories in parallel, then fuses them through a dynamic midpoint. Experiments demonstrate generalization across diverse styles and content, providing an effective and efficient pipeline for style transfer.
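A schematic reading of this pipeline, assuming simple Euler integration and a scalar fusion weight in place of the paper's dynamic midpoint rule (euler_integrate and dual_flow_transfer are hypothetical names, not the paper's code):

```python
import torch

def euler_integrate(v, x, t0, t1, steps=25):
    """Euler steps along a rectified-flow velocity field v(x, t)."""
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        x = x + dt * v(x, torch.tensor(t))
        t += dt
    return x

def dual_flow_transfer(v_content, v_style, x_c, x_s, alpha=0.5):
    """Run content and style trajectories in parallel to a midpoint,
    fuse them, then continue along the content flow (a simplification);
    `alpha` stands in for the paper's dynamic fusion weight."""
    mid_c = euler_integrate(v_content, x_c, 0.0, 0.5)
    mid_s = euler_integrate(v_style, x_s, 0.0, 0.5)
    fused = alpha * mid_c + (1 - alpha) * mid_s   # placeholder fusion rule
    return euler_integrate(v_content, fused, 0.5, 1.0)

# usage with toy velocity fields
v_c = lambda x, t: -x
v_s = lambda x, t: torch.ones_like(x)
out = dual_flow_transfer(v_c, v_s, torch.randn(1, 3, 8, 8), torch.randn(1, 3, 8, 8))
```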
arXiv Detail & Related papers (2025-11-26T02:28:51Z)
- Pluggable Style Representation Learning for Multi-Style Transfer [41.09041735653436]
We develop a style transfer framework by decoupling the style modeling and transferring.
For style modeling, we propose a style representation learning scheme to encode the style information into a compact representation.
For style transferring, we develop a style-aware multi-style transfer network (SaMST) to adapt to diverse styles using pluggable style representations.
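A minimal sketch of this decoupling, assuming the compact style code modulates the transfer network through predicted per-channel scales and shifts (StyleEncoder and ConditionedTransfer are illustrative, not the paper's SaMST architecture):

```python
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Encodes a style image into a compact, pluggable style code."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, code_dim))

    def forward(self, style_img):
        return self.net(style_img)

class ConditionedTransfer(nn.Module):
    """Transfer backbone modulated by whichever style code is plugged in."""
    def __init__(self, code_dim=64, feat_ch=64):
        super().__init__()
        self.backbone = nn.Conv2d(3, feat_ch, 3, padding=1)
        self.to_scale_shift = nn.Linear(code_dim, 2 * feat_ch)
        self.head = nn.Conv2d(feat_ch, 3, 3, padding=1)

    def forward(self, content_img, style_code):
        f = self.backbone(content_img)
        scale, shift = self.to_scale_shift(style_code).chunk(2, dim=1)
        f = f * scale[..., None, None] + shift[..., None, None]
        return self.head(f)

# usage: swap in any precomputed style code without retraining the backbone
enc, net = StyleEncoder(), ConditionedTransfer()
code = enc(torch.randn(1, 3, 64, 64))
stylized = net(torch.randn(1, 3, 64, 64), code)
```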
arXiv Detail & Related papers (2025-03-26T09:44:40Z)
- Multimodality-guided Image Style Transfer using Cross-modal GAN Inversion [42.345533741985626]
We present a novel method to achieve much improved style transfer based on text guidance.
Our method allows style inputs from multiple sources and modalities, enabling MultiModality-guided Image Style Transfer (MMIST).
Specifically, we realize MMIST with a novel cross-modal GAN inversion method, which generates style representations consistent with specified styles.
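One common way such cross-modal inversion is set up, sketched under the assumption of a pretrained generator G and a shared image/text embedding model embed (both placeholders; the paper's actual objective may differ):

```python
import torch

def invert_style_latent(G, embed, style_targets, steps=200, lr=0.05):
    """Optimize a generator latent so the generated image's embedding
    matches the specified (text or image) style targets in a shared
    cross-modal space. G, embed, and the loss are placeholders."""
    z = torch.randn(1, 512, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    targets = [t / t.norm(dim=-1, keepdim=True) for t in style_targets]
    for _ in range(steps):
        img = G(z)
        e = embed(img)
        e = e / e.norm(dim=-1, keepdim=True)
        # pull the image embedding toward every specified style target
        loss = sum(1 - (e * t).sum(dim=-1).mean() for t in targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```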
arXiv Detail & Related papers (2023-12-04T06:38:23Z)
- STEER: Unified Style Transfer with Expert Reinforcement [71.3995732115262]
STEER (Unified Style Transfer with Expert Reinforcement) is a unified framework developed to overcome the challenge of limited parallel data for style transfer.
We show STEER is robust, maintaining its style transfer capabilities on out-of-domain data, and surpassing nearly all baselines across various styles.
arXiv Detail & Related papers (2023-11-13T09:02:30Z)
- Master: Meta Style Transformer for Controllable Zero-Shot and Few-Shot Artistic Style Transfer [83.1333306079676]
In this paper, we devise a novel Transformer model, termed Master, specifically for style transfer.
In the proposed model, different Transformer layers share a common group of parameters, which (1) reduces the total number of parameters, (2) leads to more robust training convergence, and (3) readily controls the degree of stylization.
Experiments demonstrate the superiority of Master under both zero-shot and few-shot style transfer settings.
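A minimal sketch of the parameter-sharing idea, assuming one Transformer layer reused across depth and a scalar knob for stylization strength (SharedLayerTransformer and degree are illustrative, not the paper's Master model):

```python
import torch
import torch.nn as nn

class SharedLayerTransformer(nn.Module):
    """One Transformer layer reused at every depth, so all 'layers' share
    parameters; `degree` is an illustrative stylization-strength knob."""
    def __init__(self, d_model=256, nhead=8, depth=4):
        super().__init__()
        self.shared = nn.TransformerEncoderLayer(d_model, nhead,
                                                 batch_first=True)
        self.depth = depth

    def forward(self, tokens, degree=1.0):
        x = tokens
        for _ in range(self.depth):            # same weights at every depth
            x = self.shared(x)
        return tokens + degree * (x - tokens)  # scale the stylized update

# usage: half-strength stylization on a toy token sequence
model = SharedLayerTransformer()
out = model(torch.randn(2, 196, 256), degree=0.5)
```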
arXiv Detail & Related papers (2023-04-24T04:46:39Z)
- Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate [58.83278629019384]
Style transfer aims to render the style of a given reference image onto another image that supplies the content.
Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way.
We propose Any-to-Any Style Transfer, which enables users to interactively select styles of regions in the style image and apply them to the prescribed content regions.
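A simplified sketch of region-to-region transfer, with the paper's interactive matching replaced by per-region AdaIN statistics under user-supplied binary masks (region_stats and regional_style_transfer are hypothetical helpers):

```python
import torch

def region_stats(feat, mask, eps=1e-5):
    """Mean/std of features inside a binary region mask of shape (B,1,H,W)."""
    w = mask / (mask.sum(dim=(2, 3), keepdim=True) + eps)
    mu = (feat * w).sum(dim=(2, 3), keepdim=True)
    var = ((feat - mu) ** 2 * w).sum(dim=(2, 3), keepdim=True)
    return mu, (var + eps).sqrt()

def regional_style_transfer(content_feat, style_feat, c_mask, s_mask):
    """Apply the statistics of a selected style region to a selected
    content region (AdaIN-style re-normalization; illustrative only)."""
    mu_c, std_c = region_stats(content_feat, c_mask)
    mu_s, std_s = region_stats(style_feat, s_mask)
    restyled = (content_feat - mu_c) / std_c * std_s + mu_s
    # only overwrite the chosen content region
    return content_feat * (1 - c_mask) + restyled * c_mask
```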
arXiv Detail & Related papers (2023-04-19T15:15:36Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
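A minimal sketch of a contrastive loss with an input-dependent temperature, the component highlighted above (AdaptiveTauHead, the softplus floor, and the InfoNCE form are assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveTauHead(nn.Module):
    """Predicts a per-sample temperature from the style embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.fc = nn.Linear(dim, 1)

    def forward(self, feat):
        # softplus keeps tau positive; the 0.05 floor is an assumption
        return F.softplus(self.fc(feat)) + 0.05

def adaptive_contrastive_loss(anchor, positive, negatives, tau_head):
    """InfoNCE where the temperature depends on the input (illustrative).
    anchor, positive: (B, D); negatives: (N, D)."""
    tau = tau_head(anchor)                                   # (B, 1)
    a = F.normalize(anchor, dim=-1)
    pos = (a * F.normalize(positive, dim=-1)).sum(-1, keepdim=True)
    neg = a @ F.normalize(negatives, dim=-1).t()             # (B, N)
    logits = torch.cat([pos, neg], dim=1) / tau
    labels = torch.zeros(a.size(0), dtype=torch.long)        # positive is index 0
    return F.cross_entropy(logits, labels)
```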
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer [103.54337984566877]
Recent studies on StyleGAN show high performance on artistic portrait generation by transfer learning with limited data.
We introduce a novel DualStyleGAN with flexible control of dual styles of the original face domain and the extended artistic portrait domain.
Experiments demonstrate the superiority of DualStyleGAN over state-of-the-art methods in high-quality portrait style transfer and flexible style control.
arXiv Detail & Related papers (2022-03-24T17:57:11Z)
- Deep Feature Rotation for Multimodal Image Style Transfer [0.0]
We propose a simple method, called Deep Feature Rotation (DFR), for representing style features in many ways.
Our approach is representative of many augmentation strategies for intermediate feature embeddings without incurring much computational expense.
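One plausible instantiation of "rotating" deep features, sketched as a 2D rotation applied to paired feature channels (the paper's exact transform may differ; rotate_paired_channels is a hypothetical helper):

```python
import math
import torch

def rotate_paired_channels(feat, theta):
    """Rotate consecutive channel pairs of a feature map by angle theta.
    One illustrative reading of feature rotation, not the paper's code."""
    b, c, h, w = feat.shape
    assert c % 2 == 0, "requires an even channel count"
    x = feat.view(b, c // 2, 2, h, w)
    cos, sin = math.cos(theta), math.sin(theta)
    x0, x1 = x[:, :, 0], x[:, :, 1]
    rot = torch.stack([cos * x0 - sin * x1, sin * x0 + cos * x1], dim=2)
    return rot.view(b, c, h, w)

# usage: rotations of the same style features yield distinct stylizations
style_feat = torch.randn(1, 64, 32, 32)
variants = [rotate_paired_channels(style_feat, k * math.pi / 4) for k in range(4)]
```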
arXiv Detail & Related papers (2022-02-09T12:36:24Z)
- Anisotropic Stroke Control for Multiple Artists Style Transfer [36.92721585146738]
A Stroke Control Multi-Artist Style Transfer framework is developed.
An Anisotropic Stroke Module (ASM) endows the network with adaptive semantic consistency across various styles.
In contrast to a single-scale conditional discriminator, our discriminator is able to capture multi-scale texture clues.
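A minimal sketch of a multi-scale discriminator of the kind referenced above, applying PatchGAN-style critics to progressively downsampled inputs (PatchDiscriminator and MultiScaleDiscriminator are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """A small PatchGAN-style critic (illustrative)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1))

    def forward(self, x):
        return self.net(x)

class MultiScaleDiscriminator(nn.Module):
    """Runs one critic per scale on progressively downsampled inputs,
    so both coarse and fine texture clues are scored."""
    def __init__(self, n_scales=3):
        super().__init__()
        self.critics = nn.ModuleList(PatchDiscriminator() for _ in range(n_scales))

    def forward(self, img):
        outs = []
        for d in self.critics:
            outs.append(d(img))
            img = F.avg_pool2d(img, 2)   # next, coarser scale
        return outs
```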
arXiv Detail & Related papers (2020-10-16T05:32:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.