Controllable Radiance Fields for Dynamic Face Synthesis
- URL: http://arxiv.org/abs/2210.05825v1
- Date: Tue, 11 Oct 2022 23:17:31 GMT
- Title: Controllable Radiance Fields for Dynamic Face Synthesis
- Authors: Peiye Zhuang, Liqian Ma, Oluwasanmi Koyejo, Alexander G. Schwing
- Abstract summary: We study how to explicitly control generative model synthesis of face dynamics exhibiting non-rigid motion. To this end, we propose a Controllable Radiance Field (CoRF).
On head image/video data we show that CoRFs are 3D-aware while enabling editing of identity, viewing directions, and motion.
- Score: 125.48602100893845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work on 3D-aware image synthesis has achieved compelling results using
advances in neural rendering. However, 3D-aware synthesis of face dynamics
has not received much attention. Here, we study how to explicitly control
generative model synthesis of face dynamics exhibiting non-rigid motion (e.g.,
facial expression change), while simultaneously ensuring 3D-awareness. To this
end, we propose a Controllable Radiance Field (CoRF): 1) Motion control is achieved
by embedding motion features within the layered latent motion space of a
style-based generator; 2) To ensure consistency of background, motion features
and subject-specific attributes such as lighting, texture, shapes, albedo, and
identity, a face parsing net, a head regressor and an identity encoder are
incorporated. On head image/video data we show that CoRFs are 3D-aware while
enabling editing of identity, viewing directions, and motion.
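To make point 1) of the abstract concrete, here is a minimal PyTorch-style sketch of embedding a motion code into the layered latent space of a style-based generator, so that a span of style layers carries motion while the remaining layers carry identity. The module name, layer split, and dimensions below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: motion features embedded in the layered latent space of
# a style-based generator. Names, dimensions, and the layer split are assumed.
import torch
import torch.nn as nn

class MotionStyleMapper(nn.Module):
    """Maps separate identity and motion codes into a layered latent space."""
    def __init__(self, z_dim=512, w_dim=512, n_layers=14, motion_span=(4, 8)):
        super().__init__()
        self.id_mapping = nn.Sequential(
            nn.Linear(z_dim, w_dim), nn.LeakyReLU(0.2), nn.Linear(w_dim, w_dim))
        self.motion_mapping = nn.Sequential(
            nn.Linear(z_dim, w_dim), nn.LeakyReLU(0.2), nn.Linear(w_dim, w_dim))
        self.n_layers = n_layers
        self.motion_span = motion_span  # style layers that receive the motion code

    def forward(self, z_id, z_motion):
        w_id = self.id_mapping(z_id)              # (B, w_dim)
        w_motion = self.motion_mapping(z_motion)  # (B, w_dim)
        lo, hi = self.motion_span
        # The identity code drives every layer; the motion code offsets only
        # the middle layers, so editing z_motion changes motion, not identity.
        front = w_id.unsqueeze(1).expand(-1, lo, -1)
        mid = (w_id + w_motion).unsqueeze(1).expand(-1, hi - lo, -1)
        back = w_id.unsqueeze(1).expand(-1, self.n_layers - hi, -1)
        return torch.cat([front, mid, back], dim=1)  # (B, n_layers, w_dim)

w_plus = MotionStyleMapper()(torch.randn(2, 512), torch.randn(2, 512))
print(w_plus.shape)  # torch.Size([2, 14, 512]) -> fed to the synthesis network
```

The contiguous-span split is one plausible choice; the paper's layered latent motion space may partition layers differently.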
Related papers
- G3FA: Geometry-guided GAN for Face Animation [14.488117084637631]
We introduce Geometry-guided GAN for Face Animation (G3FA) to address the lack of 3D information in purely 2D face animation models.
Our novel approach empowers the face animation model to incorporate 3D information using only 2D images.
In our face reenactment model, we leverage 2D motion warping to capture motion dynamics.
arXiv Detail & Related papers (2024-08-23T13:13:24Z)
- OmniAvatar: Geometry-Guided Controllable 3D Head Synthesis [81.70922087960271]
We present OmniAvatar, a novel geometry-guided 3D head synthesis model trained from in-the-wild unstructured images.
Our model synthesizes identity-preserved 3D heads with compelling dynamic details that compare favorably to state-of-the-art methods.
arXiv Detail & Related papers (2023-03-27T18:36:53Z)
- Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars [36.4402388864691]
3D-aware generative adversarial networks (GANs) synthesize high-fidelity and multi-view-consistent facial images using only collections of single-view 2D imagery.
Recent efforts incorporate a 3D Morphable Face Model (3DMM) to describe deformation in generative radiance fields, either explicitly or implicitly.
We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images.
arXiv Detail & Related papers (2022-11-21T06:40:46Z)
- 3DMM-RF: Convolutional Radiance Fields for 3D Face Modeling [111.98096975078158]
We introduce a style-based generative network that synthesizes in one pass all and only the required rendering samples of a neural radiance field.
We show that this model can be accurately fitted to "in-the-wild" facial images of arbitrary pose and illumination, extract the facial characteristics, and be used to re-render the face under controllable conditions.
arXiv Detail & Related papers (2022-09-15T15:28:45Z)
- Controllable 3D Generative Adversarial Face Model via Disentangling Shape and Appearance [63.13801759915835]
3D face modeling has been an active area of research in computer vision and computer graphics.
This paper proposes a new 3D face generative model that can decouple identity and expression.
arXiv Detail & Related papers (2022-08-30T13:40:48Z)
- Free-HeadGAN: Neural Talking Head Synthesis with Explicit Gaze Control [54.079327030892244]
Free-HeadGAN is a person-generic neural talking head synthesis system.
We show that modeling faces with sparse 3D facial landmarks is sufficient for achieving state-of-the-art generative performance.
arXiv Detail & Related papers (2022-08-03T16:46:08Z)
- Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields [40.2714783162419]
We propose a new conditional 3D face synthesis framework, which enables 3D controllability over generated face images.
At its core is a conditional Generative Occupancy Field (cGOF) that effectively enforces the shape of the generated face to commit to a given 3D Morphable Model (3DMM) mesh.
Experiments validate the effectiveness of the proposed method, which is able to generate high-fidelity face images.
arXiv Detail & Related papers (2022-06-16T17:58:42Z)
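A hedged sketch of the mechanism the cGOF entry above describes: supervise the generative occupancy field at sampled 3D points with inside/outside labels derived from the given 3DMM mesh, so that the generated shape commits to the mesh. The loss form, names, and the assumed availability of a mesh signed-distance routine are illustrative, not the paper's exact formulation.

```python
# Illustrative mesh-commitment loss for a conditional occupancy field.
import torch
import torch.nn.functional as F

def mesh_commitment_loss(occupancy_net, points, mesh_sdf):
    """
    occupancy_net: maps (B, N, 3) points to occupancy logits of shape (B, N).
    points:        (B, N, 3) samples near the face surface.
    mesh_sdf:      (B, N) signed distance of each point to the 3DMM mesh
                   (negative inside), from any mesh SDF routine (assumed given).
    """
    target = (mesh_sdf < 0).float()  # 1 inside the mesh, 0 outside
    logits = occupancy_net(points)   # predicted occupancy logits
    return F.binary_cross_entropy_with_logits(logits, target)

# Toy usage with a stand-in occupancy network.
net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 1))
pts = torch.randn(2, 1024, 3)
loss = mesh_commitment_loss(lambda p: net(p).squeeze(-1), pts,
                            torch.randn(2, 1024))
```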
- 3D Neural Scene Representations for Visuomotor Control [78.79583457239836]
We learn models for dynamic 3D scenes purely from 2D visual observations.
A dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks.
arXiv Detail & Related papers (2021-07-08T17:49:37Z)
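As a rough illustration of the pattern in the visuomotor-control entry above (not the paper's method), the sketch below fits a residual dynamics model in a learned latent space and plans actions by random shooting; all names and dimensions are assumptions.

```python
# Hypothetical latent dynamics model and random-shooting planner.
import torch
import torch.nn as nn

class LatentDynamics(nn.Module):
    def __init__(self, z_dim=64, a_dim=4, hidden=128):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(z_dim + a_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, z_dim))

    def forward(self, z, a):
        # Residual next-state prediction: z_{t+1} = z_t + f(z_t, a_t).
        return z + self.f(torch.cat([z, a], dim=-1))

def plan_action(dynamics, z, z_goal, a_dim=4, n_candidates=256, horizon=5):
    # Sample action sequences, roll out the learned dynamics, and return
    # the first action of the best-scoring sequence.
    actions = torch.randn(n_candidates, horizon, a_dim)
    state = z.expand(n_candidates, -1)
    for t in range(horizon):
        state = dynamics(state, actions[:, t])
    costs = ((state - z_goal) ** 2).sum(dim=-1)  # distance to goal state
    return actions[costs.argmin(), 0]

a0 = plan_action(LatentDynamics(), torch.randn(64), torch.randn(64))
```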
- 3D to 4D Facial Expressions Generation Guided by Landmarks [35.61963927340274]
Given one input 3D neutral face, can we generate dynamic 3D (4D) facial expressions from it?
We first propose a mesh encoder-decoder architecture (Expr-ED) that exploits a set of 3D landmarks to generate an expressive 3D face from its neutral counterpart.
We extend it to 4D by modeling the temporal dynamics of facial expressions using a manifold-valued GAN.
arXiv Detail & Related papers (2021-05-16T15:52:29Z)
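As a rough sketch of the landmark-guided step in the entry above, the snippet below encodes a neutral mesh together with 3D landmark displacements and decodes per-vertex offsets for the expressive mesh. The dense MLP architecture, the FLAME-like 5023-vertex mesh size, and all names are illustrative assumptions rather than Expr-ED itself.

```python
# Hypothetical landmark-guided mesh encoder-decoder.
import torch
import torch.nn as nn

class LandmarkGuidedMeshDecoder(nn.Module):
    def __init__(self, n_vertices=5023, n_landmarks=68, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_vertices * 3 + n_landmarks * 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_vertices * 3)

    def forward(self, neutral_vertices, landmark_deltas):
        # neutral_vertices: (B, n_vertices, 3); landmark_deltas: (B, n_landmarks, 3)
        x = torch.cat([neutral_vertices.flatten(1),
                       landmark_deltas.flatten(1)], dim=1)
        offsets = self.decoder(self.encoder(x)).view_as(neutral_vertices)
        return neutral_vertices + offsets  # expressive mesh = neutral + offsets

out = LandmarkGuidedMeshDecoder()(torch.randn(2, 5023, 3), torch.randn(2, 68, 3))
```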