Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer
- URL: http://arxiv.org/abs/2203.13248v1
- Date: Thu, 24 Mar 2022 17:57:11 GMT
- Title: Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer
- Authors: Shuai Yang, Liming Jiang, Ziwei Liu, Chen Change Loy
- Abstract summary: Recent studies on StyleGAN show high performance on artistic portrait generation by transfer learning with limited data.
We introduce a novel DualStyleGAN with flexible control of dual styles of the original face domain and the extended artistic portrait domain.
Experiments demonstrate the superiority of DualStyleGAN over state-of-the-art methods in high-quality portrait style transfer and flexible style control.
- Score: 103.54337984566877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies on StyleGAN show high performance on artistic portrait
generation by transfer learning with limited data. In this paper, we explore
more challenging exemplar-based high-resolution portrait style transfer by
introducing a novel DualStyleGAN with flexible control of dual styles of the
original face domain and the extended artistic portrait domain. Different from
StyleGAN, DualStyleGAN provides a natural way of style transfer by
characterizing the content and style of a portrait with an intrinsic style path
and a new extrinsic style path, respectively. The delicately designed extrinsic
style path enables our model to modulate both the color and complex structural
styles hierarchically to precisely pastiche the style example. Furthermore, a
novel progressive fine-tuning scheme is introduced to smoothly transform the
generative space of the model to the target domain, even with the above
modifications on the network architecture. Experiments demonstrate the
superiority of DualStyleGAN over state-of-the-art methods in high-quality
portrait style transfer and flexible style control.
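The abstract describes an extrinsic style path that modulates color and structural styles hierarchically. The paper's actual architecture is not reproduced here, but this kind of color-style modulation is commonly realized with adaptive instance normalization (AdaIN), where content features are re-normalized to carry the style's per-channel statistics. A minimal NumPy sketch of that mechanism (an illustration, not the authors' implementation):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: re-style `content` features
    of shape (C, H, W) with the per-channel statistics of `style`."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Normalize content per channel, then re-scale/shift with style stats.
    return s_std * (content - c_mean) / (c_std + eps) + s_mean

rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, size=(3, 64, 64))  # content feature map
style = rng.normal(2.0, 0.5, size=(3, 64, 64))    # style feature map
out = adain(content, style)
# `out` keeps the content's spatial structure but adopts the style's
# per-channel mean and standard deviation.
```

Applying such modulation at multiple generator resolutions is one way to obtain the hierarchical color/structure control the abstract refers to.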
Related papers
- PS-StyleGAN: Illustrative Portrait Sketching using Attention-Based Style Adaptation [0.0]
Portrait sketching involves capturing identity specific attributes of a real face with abstract lines and shades.
This paper introduces Portrait Sketching StyleGAN (PS-StyleGAN), a style transfer approach tailored for portrait sketch synthesis.
We leverage the semantic $W+$ latent space of StyleGAN to generate portrait sketches, allowing us to make meaningful edits, like pose and expression alterations, without compromising identity.
arXiv Detail & Related papers (2024-08-31T04:22:45Z)
- Style Aligned Image Generation via Shared Attention [61.121465570763085]
We introduce StyleAligned, a technique designed to establish style alignment among a series of generated images.
By employing minimal 'attention sharing' during the diffusion process, our method maintains style consistency across images within text-to-image (T2I) models.
Evaluation of our method across diverse styles and text prompts demonstrates high quality and fidelity.
arXiv Detail & Related papers (2023-12-04T18:55:35Z)
- DIFF-NST: Diffusion Interleaving For deFormable Neural Style Transfer [27.39248034592382]
We propose using a new class of models to perform style transfer while enabling deformable style transfer.
We show how leveraging the priors of these models can expose new artistic controls at inference time.
arXiv Detail & Related papers (2023-07-09T12:13:43Z)
- Master: Meta Style Transformer for Controllable Zero-Shot and Few-Shot Artistic Style Transfer [83.1333306079676]
In this paper, we devise a novel Transformer model, termed Master, specifically for style transfer.
In the proposed model, different Transformer layers share a common group of parameters, which (1) reduces the total number of parameters, (2) leads to more robust training convergence, and (3) makes it easy to control the degree of stylization.
Experiments demonstrate the superiority of Master under both zero-shot and few-shot style transfer settings.
arXiv Detail & Related papers (2023-04-24T04:46:39Z)
- StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields [52.19291190355375]
StyleRF (Style Radiance Fields) is an innovative 3D style transfer technique.
It employs an explicit grid of high-level features to represent 3D scenes, with which high-fidelity geometry can be reliably restored via volume rendering.
It transforms the grid features according to the reference style which directly leads to high-quality zero-shot style transfer.
arXiv Detail & Related papers (2023-03-19T08:26:06Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
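The adaptive contrastive scheme above relies on an input-dependent temperature. UCAST's exact formulation is not given here, but the general idea of a contrastive (InfoNCE-style) loss in which each sample carries its own temperature can be sketched as follows (a generic illustration; all names and shapes are assumptions, not UCAST's code):

```python
import numpy as np

def contrastive_loss(anchors, positives, negatives, temperatures):
    """InfoNCE-style loss where each anchor i uses its own temperature.
    anchors, positives: (N, D); negatives: (N, K, D); temperatures: (N,)."""
    def unit(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p, n = unit(anchors), unit(positives), unit(negatives)
    pos = np.einsum('nd,nd->n', a, p)       # similarity to the positive, (N,)
    neg = np.einsum('nd,nkd->nk', a, n)     # similarities to negatives, (N, K)
    # Per-sample temperature scales how sharply similarities are contrasted.
    logits = np.concatenate([pos[:, None], neg], axis=1) / temperatures[:, None]
    # Cross-entropy with the positive in slot 0 (log-sum-exp minus pos logit).
    log_z = np.log(np.exp(logits).sum(axis=1))
    return (log_z - logits[:, 0]).mean()

rng = np.random.default_rng(1)
anchors = rng.normal(size=(4, 8))
negatives = rng.normal(size=(4, 5, 8))
temps = np.full(4, 0.1)
loss_easy = contrastive_loss(anchors, anchors, negatives, temps)  # perfect positives
loss_hard = contrastive_loss(anchors, rng.normal(size=(4, 8)), negatives, temps)
```

Making the temperature a function of the input (rather than the constant used here) is what lets such a scheme adapt the contrast strength per style example.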
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Neural Artistic Style Transfer with Conditional Adversarial Networks [0.0]
A neural artistic style transformation model can modify the appearance of a simple image by adding the style of a famous image.
In this paper, we present two methods that step toward the style image independent neural style transfer model.
Our novel contribution is a unidirectional GAN model that ensures cyclic consistency through its model architecture.
arXiv Detail & Related papers (2023-02-08T04:34:20Z)
- VToonify: Controllable High-Resolution Portrait Video Style Transfer [103.54337984566877]
We introduce a novel VToonify framework for controllable high-resolution portrait video style transfer.
We leverage the mid- and high-resolution layers of StyleGAN to render artistic portraits based on the multi-scale content features extracted by an encoder.
Our framework is compatible with existing StyleGAN-based image toonification models to extend them to video toonification, and inherits appealing features of these models for flexible style control on color and intensity.
arXiv Detail & Related papers (2022-09-22T17:59:10Z)
- DrawingInStyles: Portrait Image Generation and Editing with Spatially Conditioned StyleGAN [30.465955123686335]
We introduce SC-StyleGAN, which injects spatial constraints to the original StyleGAN generation process.
Based on SC-StyleGAN, we present DrawingInStyles, a novel drawing interface for non-professional users to easily produce high-quality, photo-realistic face images.
arXiv Detail & Related papers (2022-03-05T14:54:07Z)
- Anisotropic Stroke Control for Multiple Artists Style Transfer [36.92721585146738]
A Stroke Control Multi-Artist Style Transfer framework is developed.
Anisotropic Stroke Module (ASM) endows the network with the ability of adaptive semantic-consistency among various styles.
In contrast to the single-scale conditional discriminator, our discriminator is able to capture multi-scale texture clues.
arXiv Detail & Related papers (2020-10-16T05:32:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.