Transforming Facial Weight of Real Images by Editing Latent Space of
StyleGAN
- URL: http://arxiv.org/abs/2011.02606v1
- Date: Thu, 5 Nov 2020 01:45:18 GMT
- Title: Transforming Facial Weight of Real Images by Editing Latent Space of
StyleGAN
- Authors: V N S Rama Krishna Pinnimty, Matt Zhao, Palakorn Achananuparp, and
Ee-Peng Lim
- Abstract summary: We present an invert-and-edit framework to transform facial weight of an input face image to look thinner or heavier by leveraging semantic facial attributes encoded in the latent space of Generative Adversarial Networks (GANs).
Our framework is empirically shown to produce high-quality and realistic facial-weight transformations without requiring GANs to be trained from scratch on a large number of labeled face images.
Our framework can be utilized as part of an intervention to motivate individuals to make healthier food choices by visualizing the future impacts of their behavior on appearance.
- Score: 9.097538101642192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an invert-and-edit framework to automatically transform facial
weight of an input face image to look thinner or heavier by leveraging semantic
facial attributes encoded in the latent space of Generative Adversarial
Networks (GANs). Using a pre-trained StyleGAN as the underlying generator, we
first employ an optimization-based embedding method to invert the input image
into the StyleGAN latent space. Then, we identify the facial-weight attribute
direction in the latent space via supervised learning and edit the inverted
latent code by moving it positively or negatively along the extracted feature
axis. Our framework is empirically shown to produce high-quality and realistic
facial-weight transformations without requiring GANs to be trained from scratch
on a large number of labeled face images. Ultimately, our framework can be
utilized as part of an intervention to motivate individuals to make healthier
food choices by visualizing the future impacts of their behavior on appearance.
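The edit step of the invert-and-edit pipeline can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: random numpy vectors stand in for the inverted StyleGAN latent code and for the facial-weight direction (which the paper obtains via optimization-based embedding and supervised learning, respectively), and a 512-D W space is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an inverted StyleGAN latent code (512-D W space assumed);
# in the real pipeline this comes from optimization-based embedding of
# the input photo, not from random sampling.
latent_code = rng.standard_normal(512)

# Stand-in for the facial-weight attribute direction found via
# supervised learning on labeled latent codes (unit-normalized).
weight_direction = rng.standard_normal(512)
weight_direction /= np.linalg.norm(weight_direction)

def edit_weight(w: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Move a latent code along the attribute axis.

    alpha > 0 pushes one way (e.g. heavier), alpha < 0 the other;
    the sign convention depends on how the direction was fit.
    """
    return w + alpha * direction

heavier = edit_weight(latent_code, weight_direction, +3.0)
thinner = edit_weight(latent_code, weight_direction, -3.0)

# The edit only shifts the component along the attribute axis.
print(np.allclose(heavier - latent_code, 3.0 * weight_direction))  # True
```

The edited code would then be fed back through the pre-trained StyleGAN generator to synthesize the transformed face; the step size `alpha` controls the strength of the transformation.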
Related papers
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real-time.
At the core of our method is a hierarchical representation of head models that captures the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z)
- StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces [103.54337984566877]
We use dilated convolutions to rescale the receptive fields of shallow layers in StyleGAN without altering any model parameters.
This allows fixed-size small features at shallow layers to be extended into larger ones that can accommodate variable resolutions.
We validate our method using unaligned face inputs of various resolutions in a diverse set of face manipulation tasks.
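The receptive-field effect of dilation described above can be illustrated numerically. The formula below is the standard single-layer receptive-field expression for a dilated convolution, not taken from the StyleGANEX paper itself:

```python
def dilated_rf(kernel: int, dilation: int) -> int:
    """Effective receptive field (per axis) of one dilated convolution:
    k + (k - 1) * (d - 1). Dilation enlarges the field without adding
    parameters, which is the trick summarized above (the exact layer
    placement in the paper may differ)."""
    return kernel + (kernel - 1) * (dilation - 1)

# A 3x3 kernel: dilation 1 covers 3 pixels per axis, dilation 2 covers 5.
print(dilated_rf(3, 1), dilated_rf(3, 2), dilated_rf(3, 4))  # 3 5 9
```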
arXiv Detail & Related papers (2023-03-10T18:59:33Z)
- End-to-end Face-swapping via Adaptive Latent Representation Learning [12.364688530047786]
This paper proposes a novel, end-to-end integrated framework for high-resolution, attribute-preserving face swapping.
By integrating facial perception and blending into the end-to-end training and testing process, our framework achieves highly realistic face swapping on in-the-wild faces.
arXiv Detail & Related papers (2023-03-07T19:16:20Z)
- MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative to face morphing and demonstrate its superiority to StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z)
- FaceFormer: Scale-aware Blind Face Restoration with Transformers [18.514630131883536]
We propose a novel scale-aware blind face restoration framework, named FaceFormer, which formulates facial feature restoration as scale-aware transformation.
Our proposed method, trained on a synthetic dataset, generalizes better to natural low-quality images than current state-of-the-art methods.
arXiv Detail & Related papers (2022-07-20T10:08:34Z)
- MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation [69.35523133292389]
We propose a framework that a priori models physical attributes of the face explicitly, thus providing disentanglement by design.
Our method, MOST-GAN, integrates the expressive power and photorealism of style-based GANs with the physical disentanglement and flexibility of nonlinear 3D morphable models.
It achieves photorealistic manipulation of portrait images with fully disentangled 3D control over their physical attributes, enabling extreme manipulation of lighting, facial expression, and pose variations up to full profile view.
arXiv Detail & Related papers (2021-11-01T15:53:36Z)
- High Resolution Face Editing with Masked GAN Latent Code Optimization [0.0]
Face editing is a popular research topic in the computer vision community.
Recently proposed methods are based on either training a conditional encoder-decoder Generative Adversarial Network (GAN) in an end-to-end fashion or defining an operation in the latent space of a pre-trained vanilla GAN generator model.
We propose a GAN embedding optimization procedure with spatial and semantic constraints.
arXiv Detail & Related papers (2021-03-20T08:39:41Z)
- Only a Matter of Style: Age Transformation Using a Style-Based Regression Model [46.48263482909809]
We present an image-to-image translation method that learns to encode real facial images into the latent space of a pre-trained unconditional GAN.
We employ a pre-trained age regression network to explicitly guide the encoder in generating latent codes corresponding to the desired age.
arXiv Detail & Related papers (2021-02-04T17:33:28Z)
- S2FGAN: Semantically Aware Interactive Sketch-to-Face Translation [11.724779328025589]
This paper proposes a sketch-to-image generation framework called S2FGAN.
We employ two latent spaces to control the face appearance and adjust the desired attributes of the generated face.
Our method successfully outperforms state-of-the-art methods on attribute manipulation by exploiting greater control of attribute intensity.
arXiv Detail & Related papers (2020-11-30T13:42:39Z)
- Generating Person Images with Appearance-aware Pose Stylizer [66.44220388377596]
We present a novel end-to-end framework to generate realistic person images based on given person poses and appearances.
The core of our framework is a novel generator called Appearance-aware Pose Stylizer (APS) which generates human images by coupling the target pose with the conditioned person appearance progressively.
arXiv Detail & Related papers (2020-07-17T15:58:05Z)
- InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection.
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
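The subspace-projection idea in the InterFaceGAN summary above can be sketched with plain linear algebra. The snippet below uses random unit vectors as hypothetical attribute hyperplane normals (InterFaceGAN fits these with linear classifiers on labeled latent codes); only the projection math is illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical unit normals of two attribute hyperplanes in latent
# space, e.g. "smile" and "pose"; random vectors stand in for the
# directions a linear classifier would learn.
n1 = rng.standard_normal(512)
n1 /= np.linalg.norm(n1)
n2 = rng.standard_normal(512)
n2 /= np.linalg.norm(n2)

def condition_direction(primary: np.ndarray, nuisance: np.ndarray) -> np.ndarray:
    """Project the nuisance semantic out of the primary direction, so
    editing along the result changes the primary attribute while
    minimally disturbing the nuisance attribute."""
    projected = primary - (primary @ nuisance) * nuisance
    return projected / np.linalg.norm(projected)

n1_cond = condition_direction(n1, n2)
print(abs(n1_cond @ n2) < 1e-9)  # True: orthogonal to the nuisance axis
```

This is the same mechanism that makes the facial-weight editing in the main paper cleaner when the extracted axis is entangled with other attributes.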
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.