Face Cartoonisation For Various Poses Using StyleGAN
- URL: http://arxiv.org/abs/2309.14908v1
- Date: Tue, 26 Sep 2023 13:10:25 GMT
- Title: Face Cartoonisation For Various Poses Using StyleGAN
- Authors: Kushal Jain, Ankith Varun J, Anoop Namboodiri
- Abstract summary: This paper presents an innovative approach to achieve face cartoonisation while preserving the original identity and accommodating various poses.
We achieve this by introducing an encoder that captures both pose and identity information from images and generates a corresponding embedding within the StyleGAN latent space.
Extensive experiments show how our encoder adapts the StyleGAN output to better preserve identity when the objective is cartoonisation.
- Score: 0.7673339435080445
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents an innovative approach to achieve face cartoonisation
while preserving the original identity and accommodating various poses. Unlike
previous methods in this field that relied on conditional GANs, which posed
challenges related to dataset requirements and pose training, our approach
leverages the expressive latent space of StyleGAN. We achieve this by
introducing an encoder that captures both pose and identity information from
images and generates a corresponding embedding within the StyleGAN latent
space. By subsequently passing this embedding through a pre-trained generator,
we obtain the desired cartoonised output. While many other approaches based on
StyleGAN necessitate a dedicated and fine-tuned StyleGAN model, our method
stands out by utilizing an already-trained StyleGAN designed to produce
realistic facial images. We show by extensive experimentation how our encoder
adapts the StyleGAN output to better preserve identity when the objective is
cartoonisation.
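The pipeline the abstract describes is compact enough to sketch in code. Below is a minimal PyTorch illustration of the inference path, assuming a hypothetical `PoseIdentityEncoder` and a frozen, pre-trained StyleGAN generator; the names, backbone, and shapes (e.g. an 18 x 512 W+ code) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class PoseIdentityEncoder(nn.Module):
    """Hypothetical encoder mapping a face image to a W+ StyleGAN code.

    A real model would use a stronger backbone with pose- and
    identity-aware features; this stub only fixes the interfaces.
    """
    def __init__(self, num_ws: int = 18, w_dim: int = 512):
        super().__init__()
        self.num_ws, self.w_dim = num_ws, w_dim
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(64, num_ws * w_dim)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        feat = self.backbone(img)                                  # (B, 64)
        return self.head(feat).view(-1, self.num_ws, self.w_dim)  # (B, 18, 512)


@torch.no_grad()
def cartoonise(img: torch.Tensor, encoder: nn.Module, generator: nn.Module):
    """Inference path: image -> pose/identity W+ code -> frozen generator."""
    w_plus = encoder(img)     # embedding in the StyleGAN latent space
    return generator(w_plus)  # pre-trained, frozen StyleGAN synthesis network
```

In this framing only the encoder is trained (with identity- and pose-oriented losses); the generator stays frozen, which is what lets the method reuse a StyleGAN already trained on realistic faces rather than a dedicated fine-tuned one.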
Related papers
- PS-StyleGAN: Illustrative Portrait Sketching using Attention-Based Style Adaptation [0.0]
Portrait sketching involves capturing identity specific attributes of a real face with abstract lines and shades.
This paper introduces Portrait Sketching StyleGAN (PS-StyleGAN), a style transfer approach tailored for portrait sketch synthesis.
We leverage the semantic $W+$ latent space of StyleGAN to generate portrait sketches, allowing us to make meaningful edits, like pose and expression alterations, without compromising identity.
arXiv Detail & Related papers (2024-08-31T04:22:45Z)
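The W+ edits mentioned above typically reduce to a single vector operation on the latent code. The sketch below shows the common pattern of moving a code along a learned direction; the direction tensor, scale, and layer range are illustrative assumptions, not PS-StyleGAN's actual edit mechanism.

```python
import torch

def edit_w_plus(w_plus: torch.Tensor, direction: torch.Tensor,
                strength: float = 2.0, layers: slice = slice(0, 8)) -> torch.Tensor:
    """Shift a (B, 18, 512) W+ code along an edit direction.

    Pose/expression edits are usually applied only to the coarse and
    middle layers, leaving the identity-carrying fine layers untouched.
    """
    edited = w_plus.clone()
    edited[:, layers, :] += strength * direction  # direction: (512,), broadcast
    return edited
```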
- Style Aligned Image Generation via Shared Attention [61.121465570763085]
We introduce StyleAligned, a technique designed to establish style alignment among a series of generated images.
By employing minimal "attention sharing" during the diffusion process, our method maintains style consistency across images within T2I models.
Evaluation across diverse styles and text prompts demonstrates high quality and fidelity.
arXiv Detail & Related papers (2023-12-04T18:55:35Z)
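Attention sharing of this kind is usually implemented by letting every generated image attend to the keys and values of a style reference inside the self-attention layers. The following is a minimal single-head sketch of that idea, under assumptions of my own (tensor shapes, no projection weights); it is not the StyleAligned source.

```python
import torch

def shared_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                     k_ref: torch.Tensor, v_ref: torch.Tensor) -> torch.Tensor:
    """Self-attention where generated images also attend to a reference.

    q, k, v:       (B, N, D) tokens of the images being generated.
    k_ref, v_ref:  (1, N, D) tokens of the style reference.
    Concatenating the reference keys/values lets every generated image
    borrow statistics from the reference, aligning their styles.
    """
    k_all = torch.cat([k, k_ref.expand(k.size(0), -1, -1)], dim=1)  # (B, 2N, D)
    v_all = torch.cat([v, v_ref.expand(v.size(0), -1, -1)], dim=1)  # (B, 2N, D)
    attn = torch.softmax(q @ k_all.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
    return attn @ v_all                                             # (B, N, D)
```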
- Portrait Diffusion: Training-free Face Stylization with Chain-of-Painting [64.43760427752532]
Face stylization refers to the transformation of a face into a specific portrait style.
Current methods rely on example-based adaptation to fine-tune pre-trained generative models.
This paper proposes a training-free face stylization framework, named Portrait Diffusion.
arXiv Detail & Related papers (2023-12-03T06:48:35Z)
- When StyleGAN Meets Stable Diffusion: a $\mathscr{W}_+$ Adapter for Personalized Image Generation [60.305112612629465]
Text-to-image diffusion models have excelled in producing diverse, high-quality, and photo-realistic images.
We present a novel use of the extended StyleGAN embedding space $\mathcal{W}_+$ to achieve enhanced identity preservation and disentanglement for diffusion models.
Our method adeptly generates personalized text-to-image outputs that are not only compatible with prompt descriptions but also amenable to common StyleGAN editing directions.
arXiv Detail & Related papers (2023-11-29T09:05:14Z)
- Customize StyleGAN with One Hand Sketch [0.0]
We propose a framework to control StyleGAN imagery with a single user sketch.
We learn a conditional distribution in the latent space of a pre-trained StyleGAN model via energy-based learning.
Our model can generate multi-modal images semantically aligned with the input sketch.
arXiv Detail & Related papers (2023-10-29T09:32:33Z)
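One simple way to realize sketch-conditioned sampling of this kind is to score candidate latents with an energy function and optimize in latent space. The sketch below shows a generic latent-optimization variant under my own assumptions (a differentiable `sketch_loss`, a Gaussian prior term); the paper's actual energy-based formulation differs in detail.

```python
import torch

def fit_latent_to_sketch(generator, sketch_loss, w_init: torch.Tensor,
                         steps: int = 200, lr: float = 0.05,
                         prior_weight: float = 1e-3) -> torch.Tensor:
    """Minimize an energy E(w) = sketch_loss(G(w)) + prior over latents.

    generator:   frozen pre-trained StyleGAN mapping w -> image.
    sketch_loss: differentiable distance between a generated image and the
                 user sketch (e.g. an edge-map L2; assumed, not the paper's term).
    """
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        energy = sketch_loss(generator(w)) + prior_weight * (w ** 2).sum()
        energy.backward()
        opt.step()
    return w.detach()
```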
- Multi-Modal Face Stylization with a Generative Prior [27.79677001997915]
MMFS supports multi-modal face stylization by leveraging the strengths of StyleGAN.
We introduce a two-stage training strategy: in the first stage, the encoder is trained to align its feature maps with those of StyleGAN.
In the second stage, the entire network is fine-tuned with artistic data for stylized face generation.
arXiv Detail & Related papers (2023-05-29T11:01:31Z)
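A two-stage schedule like this is easy to misread, so here is a compact sketch of the control flow. The loss functions, feature extractor, and data loaders are placeholders of my own; the actual MMFS objectives are not spelled out in the summary above.

```python
import torch

def train_two_stage(encoder, generator, gan_features, photo_loader, art_loader,
                    align_loss, style_loss,
                    stage1_steps: int = 10_000, stage2_steps: int = 10_000):
    """Stage 1 aligns the encoder to a frozen StyleGAN; stage 2 fine-tunes
    the whole network on artistic data. All losses are hypothetical."""
    # Stage 1: only the encoder learns; the generator stays frozen.
    opt1 = torch.optim.Adam(encoder.parameters(), lr=1e-4)
    for _, photo in zip(range(stage1_steps), photo_loader):
        loss = align_loss(encoder(photo), gan_features(photo))
        opt1.zero_grad()
        loss.backward()
        opt1.step()

    # Stage 2: fine-tune encoder + generator jointly on artistic data.
    params = list(encoder.parameters()) + list(generator.parameters())
    opt2 = torch.optim.Adam(params, lr=1e-5)
    for _, art in zip(range(stage2_steps), art_loader):
        loss = style_loss(generator(encoder(art)), art)
        opt2.zero_grad()
        loss.backward()
        opt2.step()
```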
- MODIFY: Model-driven Face Stylization without Style Images [77.24793103549158]
Existing face stylization methods assume access to the target (style) domain during the translation process.
We propose a new method called MODel-drIven Face stYlization (MODIFY), which relies on a generative model to remove the dependence on target-domain images.
Experimental results on several different datasets validate the effectiveness of MODIFY for unsupervised face stylization.
arXiv Detail & Related papers (2023-03-17T08:35:17Z)
- DrawingInStyles: Portrait Image Generation and Editing with Spatially Conditioned StyleGAN [30.465955123686335]
We introduce SC-StyleGAN, which injects spatial constraints into the original StyleGAN generation process.
Based on SC-StyleGAN, we present DrawingInStyles, a novel drawing interface for non-professional users to easily produce high-quality, photo-realistic face images.
arXiv Detail & Related papers (2022-03-05T14:54:07Z)
- Styleverse: Towards Identity Stylization across Heterogeneous Domains [70.13327076710269]
We propose a new and challenging task, IDentity Stylization (IDS), across heterogeneous domains.
We introduce Styleverse, an effective heterogeneous-network-based framework that uses a single domain-aware generator.
Styleverse achieves higher-fidelity identity stylization than other state-of-the-art methods.
arXiv Detail & Related papers (2022-03-02T04:23:01Z)
- BlendGAN: Implicitly GAN Blending for Arbitrary Stylized Face Generation [9.370501805054344]
We propose BlendGAN for arbitrary stylized face generation.
We first train a self-supervised style encoder on a generic artistic dataset to extract the representations of arbitrary styles.
In addition, a weighted blending module (WBM) is proposed to blend face and style representations implicitly and control the arbitrary stylization effect.
arXiv Detail & Related papers (2021-10-22T12:00:27Z)
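A weighted blending module of this kind typically reduces to a learned convex combination of face and style representations. The snippet below is a generic illustration under my own assumptions (learnable per-layer weights over W+ codes); BlendGAN's actual WBM is more involved.

```python
import torch
import torch.nn as nn

class WeightedBlend(nn.Module):
    """Blend face and style W+ codes with learnable per-layer weights.

    A sigmoid keeps each layer's blend weight in [0, 1], so the output
    interpolates between identity (face) and style codes layer by layer.
    """
    def __init__(self, num_ws: int = 18):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_ws, 1))

    def forward(self, w_face: torch.Tensor, w_style: torch.Tensor,
                strength: float = 1.0) -> torch.Tensor:
        alpha = torch.sigmoid(self.logits) * strength   # (18, 1), broadcasts
        return (1 - alpha) * w_face + alpha * w_style   # (B, 18, 512)
```

The `strength` argument gives the continuous control over stylization intensity that the summary attributes to the WBM.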
- Generating Person Images with Appearance-aware Pose Stylizer [66.44220388377596]
We present a novel end-to-end framework to generate realistic person images based on given person poses and appearances.
The core of our framework is a novel generator called Appearance-aware Pose Stylizer (APS), which generates human images by progressively coupling the target pose with the conditioned person appearance.
arXiv Detail & Related papers (2020-07-17T15:58:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.