GIF: Generative Interpretable Faces
- URL: http://arxiv.org/abs/2009.00149v2
- Date: Wed, 25 Nov 2020 13:37:01 GMT
- Title: GIF: Generative Interpretable Faces
- Authors: Partha Ghosh, Pravir Singh Gupta, Roy Uziel, Anurag Ranjan, Michael
Black, Timo Bolkart
- Abstract summary: 3D face modeling methods provide parametric control but generate unrealistic images.
Recent methods gain partial control, either by attempting to disentangle different factors in an unsupervised manner, or by adding control post hoc to a pre-trained model.
We condition our generative model on pre-defined control parameters to encourage disentanglement in the generation process.
- Score: 16.573491296543196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photo-realistic visualization and animation of expressive human faces have been a long-standing challenge. 3D face modeling methods provide parametric control but generate unrealistic images; generative 2D models like GANs (Generative Adversarial Networks), on the other hand, output photo-realistic face images but lack explicit control. Recent methods gain partial control, either
by attempting to disentangle different factors in an unsupervised manner, or by
adding control post hoc to a pre-trained model. Unconditional GANs, however, may entangle factors in ways that are hard to undo later. We condition our generative
model on pre-defined control parameters to encourage disentanglement in the
generation process. Specifically, we condition StyleGAN2 on FLAME, a generative
3D face model. While conditioning on FLAME parameters yields unsatisfactory
results, we find that conditioning on rendered FLAME geometry and photometric
details works well. This gives us a generative 2D face model named GIF
(Generative Interpretable Faces) that offers FLAME's parametric control. Here,
interpretable refers to the semantic meaning of different parameters. Given
FLAME parameters for shape, pose, expressions, parameters for appearance,
lighting, and an additional style vector, GIF outputs photo-realistic face
images. We perform an AMT-based perceptual study to quantitatively and
qualitatively evaluate how well GIF follows its conditioning. The code, data,
and trained model are publicly available for research purposes at
http://gif.is.tue.mpg.de.
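To make the interface concrete, here is a minimal sketch of the conditioning scheme the abstract describes: FLAME parameters are first rendered into a condition image, which then drives a conditioned generator together with a style vector. Every module name, layer, and parameter dimension below is a hypothetical stand-in, not the authors' code; the actual pipeline renders FLAME geometry and photometric details with a mesh renderer and conditions a full StyleGAN2 on those images.

```python
# A minimal sketch of GIF's conditioning interface, NOT the authors' code.
# FlameRenderer and ConditionedGenerator are hypothetical stand-ins.
import torch
import torch.nn as nn


class FlameRenderer(nn.Module):
    """Stand-in that maps FLAME parameters to a rendered condition image."""

    def __init__(self, image_size=256):
        super().__init__()
        self.image_size = image_size
        # Placeholder for a differentiable mesh renderer; the dimensions
        # (shape 100, pose 6, expression 50, appearance 50, lighting 27)
        # are illustrative guesses, not GIF's actual configuration.
        self.proj = nn.Linear(100 + 6 + 50 + 50 + 27,
                              3 * image_size * image_size)

    def forward(self, shape, pose, expression, appearance, lighting):
        params = torch.cat([shape, pose, expression, appearance, lighting],
                           dim=-1)
        img = self.proj(params)
        # Stands in for rendered geometry + photometric detail images.
        return img.view(-1, 3, self.image_size, self.image_size)


class ConditionedGenerator(nn.Module):
    """Stand-in generator: fuses the condition image with a style vector,
    loosely in the spirit of a conditioned StyleGAN2."""

    def __init__(self, style_dim=512, image_size=256):
        super().__init__()
        self.style_to_map = nn.Linear(style_dim, image_size * image_size)
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, condition_img, style):
        b, _, h, w = condition_img.shape
        style_map = self.style_to_map(style).view(b, 1, h, w)
        return self.net(torch.cat([condition_img, style_map], dim=1))


# Usage: FLAME parameters in, a face image out (conceptually).
renderer, generator = FlameRenderer(), ConditionedGenerator()
shape, pose = torch.randn(1, 100), torch.randn(1, 6)
expr, app, light = torch.randn(1, 50), torch.randn(1, 50), torch.randn(1, 27)
style = torch.randn(1, 512)
condition = renderer(shape, pose, expr, app, light)
image = generator(condition, style)  # (1, 3, 256, 256)
```

The design point this illustrates is the one the abstract makes: the generator never sees raw FLAME coefficients, only their rendered image-space projection, which is what makes the conditioning work well.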
Related papers
- CGOF++: Controllable 3D Face Synthesis with Conditional Generative
Occupancy Fields [52.14985242487535]
We propose a new conditional 3D face synthesis framework, which enables 3D controllability over generated face images.
At its core is a conditional Generative Occupancy Field (cGOF++) that effectively constrains the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh.
Experiments validate the effectiveness of the proposed method and show more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods.
arXiv Detail & Related papers (2022-11-23T19:02:50Z)
- Explicitly Controllable 3D-Aware Portrait Generation [42.30481422714532]
We propose a 3D portrait generation network that produces consistent portraits according to semantic parameters regarding pose, identity, expression and lighting.
Our method outperforms prior art in extensive experiments, producing realistic portraits with vivid expressions in natural lighting when viewed from free viewpoints.
arXiv Detail & Related papers (2022-09-12T17:40:08Z)
- MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation [69.35523133292389]
We propose a framework that a priori models physical attributes of the face explicitly, thus providing disentanglement by design.
Our method, MOST-GAN, integrates the expressive power and photorealism of style-based GANs with the physical disentanglement and flexibility of nonlinear 3D morphable models.
It achieves photorealistic manipulation of portrait images with fully disentangled 3D control over their physical attributes, enabling extreme manipulation of lighting, facial expression, and pose variations up to full profile view.
arXiv Detail & Related papers (2021-11-01T15:53:36Z)
- PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering [56.762094966235566]
A Portrait Image Neural Renderer is proposed to control face motions with the parameters of three-dimensional morphable face models.
The proposed model can generate photo-realistic portrait images with accurate movements according to intuitive modifications.
Our model can generate coherent videos with convincing movements from only a single reference image and a driving audio stream.
arXiv Detail & Related papers (2021-09-17T07:24:16Z)
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images [50.09971525995828]
We present the first approach to jointly learn a model with animatable detail and a detailed 3D face regressor from in-the-wild images.
Our DECA model is trained to robustly produce a UV displacement map from a low-dimensional latent representation.
We introduce a novel detail-consistency loss to disentangle person-specific details and expression-dependent wrinkles.
arXiv Detail & Related papers (2020-12-07T19:30:45Z)
- Lifting 2D StyleGAN for 3D-Aware Face Generation [52.8152883980813]
We propose a framework, called LiftedGAN, that disentangles and lifts a pre-trained StyleGAN2 for 3D-aware face generation.
Our model is "3D-aware" in the sense that it is able to (1) disentangle the latent space of StyleGAN2 into texture, shape, viewpoint, lighting and (2) generate 3D components for synthetic images.
arXiv Detail & Related papers (2020-11-26T05:02:09Z)
- StyleRig: Rigging StyleGAN for 3D Control over Portrait Images [81.43265493604302]
StyleGAN generates portrait images of faces with eyes, teeth, hair, and context (neck, shoulders, background).
StyleGAN lacks a rig-like control over semantic face parameters that are interpretable in 3D, such as face pose, expressions, and scene illumination.
We present the first method to provide a face rig-like control over a pretrained and fixed StyleGAN via a 3DMM; a minimal sketch of this mapping appears after this list.
arXiv Detail & Related papers (2020-03-31T21:20:34Z)
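As noted in the StyleRig entry above, a recurring pattern in these related works is mapping 3DMM parameters into the latent space of a fixed, pretrained GAN. The sketch below shows that rig-like mapping in its simplest form; RigNet here is a hypothetical two-layer MLP and the generator is a one-layer placeholder, whereas the actual method trains such a mapper against a real StyleGAN with self-supervised consistency losses.

```python
# A minimal sketch of rig-like 3DMM control over a frozen generator,
# in the spirit of StyleRig; NOT the authors' implementation.
import torch
import torch.nn as nn


class RigNet(nn.Module):
    """Hypothetical mapper: latent code + target 3DMM parameters
    (pose, expression, illumination) -> edited latent code."""

    def __init__(self, latent_dim=512, param_dim=6 + 64 + 27):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + param_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, w, params):
        # Predict a latent offset so the 3DMM parameters steer the
        # frozen generator's output; the generator itself never changes.
        return w + self.mlp(torch.cat([w, params], dim=-1))


# One-layer placeholder for a pretrained StyleGAN, kept fixed.
G = nn.Linear(512, 3 * 64 * 64)
for p in G.parameters():
    p.requires_grad_(False)

rig = RigNet()
w = torch.randn(1, 512)               # sampled latent code
params = torch.randn(1, 6 + 64 + 27)  # target pose/expression/lighting
w_edit = rig(w, params)
image = G(w_edit).view(1, 3, 64, 64)  # edited portrait, conceptually
```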