Face Identity-Aware Disentanglement in StyleGAN
- URL: http://arxiv.org/abs/2309.12033v1
- Date: Thu, 21 Sep 2023 12:54:09 GMT
- Title: Face Identity-Aware Disentanglement in StyleGAN
- Authors: Adrian Suwała, Bartosz Wójcik, Magdalena Proszewska, Jacek Tabor,
Przemysław Spurek, Marek Śmieja
- Abstract summary: We introduce PluGeN4Faces, a plugin to StyleGAN, which explicitly disentangles face attributes from a person's identity.
Our experiments demonstrate that the modifications of face attributes performed by PluGeN4Faces are significantly less invasive on the remaining characteristics of the image than in the existing state-of-the-art models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conditional GANs are frequently used for manipulating the attributes of face
images, such as expression, hairstyle, pose, or age. Even though the
state-of-the-art models successfully modify the requested attributes, they
simultaneously modify other important characteristics of the image, such as a
person's identity. In this paper, we focus on solving this problem by
introducing PluGeN4Faces, a plugin to StyleGAN, which explicitly disentangles
face attributes from a person's identity. Our key idea is to perform training
on images retrieved from movie frames, where a given person appears in various
poses and with different attributes. By applying a type of contrastive loss, we
encourage the model to group images of the same person in similar regions of
latent space. Our experiments demonstrate that the modifications of face
attributes performed by PluGeN4Faces are significantly less invasive on the
remaining characteristics of the image than in the existing state-of-the-art
models.
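The contrastive grouping idea described above can be illustrated with a toy loss: latent codes of the same person are pulled together while codes of different people are pushed apart up to a margin. This is a minimal NumPy sketch for illustration only, not the actual PluGeN4Faces objective; the function name, the margin-based formulation, and all parameters are assumptions.

```python
import numpy as np

def identity_contrastive_loss(latents, person_ids, margin=1.0):
    """Toy pairwise contrastive loss over latent codes.

    Same-identity pairs contribute their squared distance (attraction);
    different-identity pairs contribute a squared hinge on the margin
    (repulsion). Averaged over all pairs.

    latents:    (N, D) array of latent codes
    person_ids: (N,) array of integer identity labels
    """
    n = len(latents)
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(latents[i] - latents[j])
            if person_ids[i] == person_ids[j]:
                loss += d ** 2                     # attract: same person
            else:
                loss += max(0.0, margin - d) ** 2  # repel: different people
            pairs += 1
    return loss / pairs

# Example: two identities, two latent codes each.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
ids = np.array([0, 0, 1, 1])
print(identity_contrastive_loss(z, ids))
```

Minimizing such a loss drives images of the same person toward nearby regions of latent space, which is the grouping behavior the abstract describes.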
Related papers
- FlashFace: Human Image Personalization with High-fidelity Identity Preservation [59.76645602354481]
FlashFace allows users to easily personalize their own photos by providing one or a few reference face images and a text prompt.
Our approach is distinguishable from existing human photo customization methods by higher-fidelity identity preservation and better instruction following.
arXiv Detail & Related papers (2024-03-25T17:59:57Z)
- When StyleGAN Meets Stable Diffusion: a $\mathscr{W}_+$ Adapter for Personalized Image Generation [60.305112612629465]
Text-to-image diffusion models have excelled in producing diverse, high-quality, and photo-realistic images.
We present a novel use of the extended StyleGAN embedding space $\mathcal{W}_+$ to achieve enhanced identity preservation and disentanglement for diffusion models.
Our method adeptly generates personalized text-to-image outputs that are not only compatible with prompt descriptions but also amenable to common StyleGAN editing directions.
arXiv Detail & Related papers (2023-11-29T09:05:14Z)
- Subjective Face Transform using Human First Impressions [5.026535087391025]
This work uses generative models to find semantically meaningful edits to a face image that change perceived attributes.
We train on real and synthetic faces and evaluate on in-domain and out-of-domain images using predictive models and human ratings.
arXiv Detail & Related papers (2023-09-27T03:21:07Z)
- DreamIdentity: Improved Editability for Efficient Face-identity Preserved Image Generation [69.16517915592063]
We propose a novel face-identity encoder to learn an accurate representation of human faces.
We also propose self-augmented editability learning to enhance the editability of models.
Our methods can generate identity-preserved images under different scenes at a much faster speed.
arXiv Detail & Related papers (2023-07-01T11:01:17Z)
- StyleMask: Disentangling the Style Space of StyleGAN2 for Neural Face Reenactment [47.27033282706179]
We propose a framework that learns to disentangle the identity characteristics of the face from its pose.
We show that the proposed method produces higher quality results even on extreme pose variations.
arXiv Detail & Related papers (2022-09-27T13:22:35Z)
- Explaining Bias in Deep Face Recognition via Image Characteristics [9.569575076277523]
We evaluate ten state-of-the-art face recognition models, comparing their fairness in terms of security and usability on two data sets.
We then analyze the impact of image characteristics on model performance.
arXiv Detail & Related papers (2022-08-23T17:18:23Z)
- Graph-based Generative Face Anonymisation with Pose Preservation [49.18049578591058]
AnonyGAN is a GAN-based solution for face anonymisation.
It replaces the visual information corresponding to a source identity with a condition identity provided as any single image.
arXiv Detail & Related papers (2021-12-10T12:58:17Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D face only from images without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Face Age Progression With Attribute Manipulation [11.859913430860335]
We propose a novel holistic model in this regard, viz. Face Age progression With Attribute Manipulation (FAWAM).
We address the task in a bottom-up manner, as two submodules, i.e., face age progression and face attribute manipulation.
For face aging, we use an attribute-conscious face aging model with a pyramidal generative adversarial network that can model age-specific facial changes.
arXiv Detail & Related papers (2021-06-14T18:26:48Z)
- VAE/WGAN-Based Image Representation Learning For Pose-Preserving Seamless Identity Replacement In Facial Images [15.855376604558977]
We present a novel variational generative adversarial network (VGAN) based on Wasserstein loss.
We show that our network can be used to perform pose-preserving identity morphing and identity-preserving pose morphing.
arXiv Detail & Related papers (2020-03-02T03:35:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.