FLAME-in-NeRF : Neural control of Radiance Fields for Free View Face
Animation
- URL: http://arxiv.org/abs/2108.04913v1
- Date: Tue, 10 Aug 2021 20:41:15 GMT
- Authors: ShahRukh Athar, Zhixin Shu, Dimitris Samaras
- Abstract summary: This paper presents a neural rendering method for controllable portrait video synthesis.
We leverage the expression space of a 3D morphable face model (3DMM) to represent the distribution of human facial expressions.
We demonstrate the effectiveness of our method on free view synthesis of portrait videos with photorealistic expression controls.
- Score: 37.39945646282971
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a neural rendering method for controllable portrait video
synthesis. Recent advances in volumetric neural rendering, such as neural
radiance fields (NeRF), have enabled photorealistic novel view synthesis of
static scenes with impressive results. However, modeling dynamic and
controllable objects as part of a scene with such scene representations is
still challenging. In this work, we design a system that enables both novel
view synthesis for portrait video, including the human subject and the scene
background, and explicit control of the facial expressions through a
low-dimensional expression representation. We leverage the expression space of
a 3D morphable face model (3DMM) to represent the distribution of human facial
expressions, and use it to condition the NeRF volumetric function. Furthermore,
we impose a spatial prior brought by 3DMM fitting to guide the network to learn
disentangled control for scene appearance and facial actions. We demonstrate
the effectiveness of our method on free view synthesis of portrait videos with
expression controls. To train a scene, our method only requires a short video
of a subject captured by a mobile device.
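The key mechanism the abstract describes, conditioning the NeRF volumetric function on a low-dimensional 3DMM expression code, can be sketched as a toy numpy example. This is not the authors' implementation; the network sizes, the 10-D expression code, and the helper names (`positional_encoding`, `expression_conditioned_nerf`) are assumptions made purely for illustration.

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """Map coordinates to sinusoids of increasing frequency (standard NeRF encoding)."""
    out = [x]
    for i in range(num_freqs):
        out.append(np.sin(2.0**i * np.pi * x))
        out.append(np.cos(2.0**i * np.pi * x))
    return np.concatenate(out, axis=-1)

def expression_conditioned_nerf(points, expr_code, weights):
    """Toy MLP predicting RGB + density from encoded 3D points
    concatenated with a 3DMM expression code (the conditioning step)."""
    enc = positional_encoding(points)  # (N, 3 + 2*6*3) = (N, 39)
    expr = np.broadcast_to(expr_code, (points.shape[0], expr_code.shape[-1]))
    h = np.concatenate([enc, expr], axis=-1)  # expression conditioning
    for W, b in weights[:-1]:
        h = np.maximum(0.0, h @ W + b)  # ReLU hidden layers
    W, b = weights[-1]
    out = h @ W + b  # (N, 4): RGB + density
    rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))  # sigmoid -> colors in [0, 1]
    sigma = np.maximum(0.0, out[:, 3])       # non-negative density
    return rgb, sigma

# Random weights, just to make the sketch runnable.
rng = np.random.default_rng(0)
dims = [39 + 10, 64, 64, 4]  # encoded point (39) + assumed 10-D expression code
weights = [(rng.normal(0, 0.1, (dims[i], dims[i + 1])), np.zeros(dims[i + 1]))
           for i in range(len(dims) - 1)]

pts = rng.uniform(-1, 1, (5, 3))   # 5 sample points along camera rays
expr = rng.normal(0, 1, (10,))     # one expression code for the whole frame
rgb, sigma = expression_conditioned_nerf(pts, expr, weights)
```

Because the expression code is shared across all sampled points of a frame, changing it changes the rendered face globally, which is what makes the low-dimensional control possible; in the paper this is combined with a 3DMM-derived spatial prior so only the face region responds to the code.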
Related papers
- Controllable Dynamic Appearance for Neural 3D Portraits [54.29179484318194]
We propose CoDyNeRF, a system that enables the creation of fully controllable 3D portraits in real-world capture conditions.
CoDyNeRF learns to approximate illumination dependent effects via a dynamic appearance model.
We demonstrate the effectiveness of our method on free view synthesis of a portrait scene with explicit head pose and expression controls.
arXiv Detail & Related papers (2023-09-20T02:24:40Z)
- Pose-Controllable 3D Facial Animation Synthesis using Hierarchical Audio-Vertex Attention [52.63080543011595]
A novel pose-controllable 3D facial animation synthesis method is proposed by utilizing hierarchical audio-vertex attention.
The proposed method can produce more realistic facial expressions and head posture movements.
arXiv Detail & Related papers (2023-02-24T09:36:31Z)
- CoNFies: Controllable Neural Face Avatars [10.41057307836234]
We propose a controllable neural representation for face self-portraits (CoNFies)
We use automated facial action recognition (AFAR) to characterize facial expressions as a combination of action units (AU) and their intensities.
arXiv Detail & Related papers (2022-11-16T01:43:43Z)
- RigNeRF: Fully Controllable Neural 3D Portraits [52.91370717599413]
RigNeRF enables full control of head pose and facial expressions learned from a single portrait video.
We demonstrate the effectiveness of our method on free view synthesis of a portrait scene with explicit head pose and expression controls.
arXiv Detail & Related papers (2022-06-13T21:28:34Z)
- Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation [58.16911861917018]
We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis.
Our model couples learnt scene-specific feature volumes with a scene agnostic neural rendering network.
We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
arXiv Detail & Related papers (2022-04-22T17:57:00Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and generalizes well even to poses that differ starkly from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- Neural Radiance Flow for 4D View Synthesis and Video Processing [59.9116932930108]
We present a method to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images.
Key to our approach is the use of a neural implicit representation that learns to capture the 3D occupancy, radiance, and dynamics of the scene.
arXiv Detail & Related papers (2020-12-17T17:54:32Z)
- Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction [9.747648609960185]
We present dynamic neural radiance fields for modeling the appearance and dynamics of a human face.
In particular, telepresence applications in AR or VR require a faithful reproduction of the appearance, including novel viewpoints and head poses.
arXiv Detail & Related papers (2020-12-05T16:01:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and accepts no responsibility for any consequences of its use.