Controllable 3D Generative Adversarial Face Model via Disentangling Shape and Appearance
- URL: http://arxiv.org/abs/2208.14263v1
- Date: Tue, 30 Aug 2022 13:40:48 GMT
- Title: Controllable 3D Generative Adversarial Face Model via Disentangling Shape and Appearance
- Authors: Fariborz Taherkhani, Aashish Rai, Quankai Gao, Shaunak Srivastava, Xuanbai Chen, Fernando de la Torre, Steven Song, Aayush Prakash, Daeil Kim
- Abstract summary: 3D face modeling has been an active area of research in computer vision and computer graphics.
This paper proposes a new 3D face generative model that can decouple identity and expression.
- Score: 63.13801759915835
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D face modeling has been an active area of research in computer vision and computer graphics, fueling applications ranging from facial expression transfer in virtual avatars to synthetic data generation. Existing 3D deep learning generative models (e.g., VAEs, GANs) allow generating compact face representations (both shape and texture) that can model non-linearities in the shape and appearance space (e.g., scattering effects, specularities, etc.). However, they lack the capability to control the generation of subtle expressions. This paper proposes a new 3D face generative model that can decouple identity and expression and provides granular control over expressions. In particular, we propose pairing a supervised auto-encoder with a generative adversarial network to produce high-quality 3D faces, in terms of both appearance and shape. Experimental results on 3D faces learned with holistic expression labels, or Action Unit labels, show how we can decouple identity and expression, gaining fine control over expressions while preserving identity.
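The abstract gives no implementation details, but the central idea of decoupling identity from expression can be illustrated with a small sketch: a decoder conditioned on separate identity and expression codes, so that expression can be varied while the identity code is held fixed. The PyTorch snippet below is a minimal, hypothetical illustration; the module name, latent dimensions, vertex count, and architecture are all assumptions, not the paper's method.

```python
# Minimal sketch (not the paper's implementation): a decoder that maps
# separate identity and expression codes to 3D face vertices. All names,
# dimensions, and the architecture itself are illustrative assumptions.
import torch
import torch.nn as nn

class IdentityExpressionDecoder(nn.Module):
    def __init__(self, id_dim=64, exp_dim=16, n_vertices=5023):
        super().__init__()
        self.n_vertices = n_vertices
        self.net = nn.Sequential(
            nn.Linear(id_dim + exp_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 512),
            nn.ReLU(),
            nn.Linear(512, n_vertices * 3),  # x, y, z per vertex
        )

    def forward(self, z_id, z_exp):
        # Concatenate the two codes; a trained disentangled model would
        # keep identity constant when only z_exp changes.
        z = torch.cat([z_id, z_exp], dim=-1)
        return self.net(z).view(-1, self.n_vertices, 3)

decoder = IdentityExpressionDecoder()
z_id = torch.randn(1, 64)                           # fixed identity code
neutral = decoder(z_id, torch.zeros(1, 16))         # neutral expression
perturbed = decoder(z_id, 0.5 * torch.randn(1, 16)) # varied expression
# Because z_id is unchanged, only the expression component of the mesh
# should differ between the two outputs (assuming trained disentanglement).
```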
Related papers
- 4D Facial Expression Diffusion Model [3.507793603897647]
We introduce a generative framework for generating 3D facial expression sequences.
It is composed of two tasks: learning a generative model trained on a set of 3D landmark sequences, and generating 3D mesh sequences of an input facial mesh driven by the generated landmark sequences.
Experiments show that our model has learned to generate realistic, high-quality expressions solely from a relatively small dataset, improving over state-of-the-art methods.
arXiv Detail & Related papers (2023-03-29T11:50:21Z)
- 3D-LDM: Neural Implicit 3D Shape Generation with Latent Diffusion Models [8.583859530633417]
We propose a diffusion model for neural implicit representations of 3D shapes that operates in the latent space of an auto-decoder.
This allows us to generate diverse and high-quality 3D surfaces.
arXiv Detail & Related papers (2022-12-01T20:00:00Z)
- CGOF++: Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields [52.14985242487535]
We propose a new conditional 3D face synthesis framework, which enables 3D controllability over generated face images.
At its core is a conditional Generative Occupancy Field (cGOF++) that effectively enforces the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh.
Experiments validate the effectiveness of the proposed method and show more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods.
arXiv Detail & Related papers (2022-11-23T19:02:50Z)
- Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars [36.4402388864691]
3D-aware generative adversarial networks (GANs) synthesize high-fidelity and multi-view-consistent facial images using only collections of single-view 2D imagery.
Recent efforts incorporate a 3D Morphable Face Model (3DMM) to describe deformation in generative radiance fields either explicitly or implicitly.
We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images.
arXiv Detail & Related papers (2022-11-21T06:40:46Z)
- AniFaceGAN: Animatable 3D-Aware Face Image Generation for Video Avatars [71.00322191446203]
2D generative models often suffer from undesirable artifacts when rendering images from different camera viewpoints.
Recently, 3D-aware GANs extend 2D GANs for explicit disentanglement of camera pose by leveraging 3D scene representations.
We propose an animatable 3D-aware GAN for multi-view-consistent face animation generation.
arXiv Detail & Related papers (2022-10-12T17:59:56Z)
- 3DMM-RF: Convolutional Radiance Fields for 3D Face Modeling [111.98096975078158]
We introduce a style-based generative network that synthesizes in one pass all and only the required rendering samples of a neural radiance field.
We show that this model can accurately be fit to "in-the-wild" facial images of arbitrary pose and illumination, extract the facial characteristics, and be used to re-render the face in controllable conditions.
arXiv Detail & Related papers (2022-09-15T15:28:45Z)
- Free-HeadGAN: Neural Talking Head Synthesis with Explicit Gaze Control [54.079327030892244]
Free-HeadGAN is a person-generic neural talking head synthesis system.
We show that modeling faces with sparse 3D facial landmarks is sufficient for achieving state-of-the-art generative performance.
arXiv Detail & Related papers (2022-08-03T16:46:08Z)
- Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields [40.2714783162419]
We propose a new conditional 3D face synthesis framework, which enables 3D controllability over generated face images.
At its core is a conditional Generative Occupancy Field (cGOF) that effectively enforces the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh.
Experiments validate the effectiveness of the proposed method, which is able to generate high-fidelity face images.
arXiv Detail & Related papers (2022-06-16T17:58:42Z)
- Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images [94.49117671450531]
State-of-the-art 3D generative models are GANs which use neural 3D volumetric representations for synthesis.
In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations.
arXiv Detail & Related papers (2022-03-29T22:03:18Z)