RigNeRF: Fully Controllable Neural 3D Portraits
- URL: http://arxiv.org/abs/2206.06481v1
- Date: Mon, 13 Jun 2022 21:28:34 GMT
- Title: RigNeRF: Fully Controllable Neural 3D Portraits
- Authors: ShahRukh Athar, Zexiang Xu, Kalyan Sunkavalli, Eli Shechtman and
Zhixin Shu
- Abstract summary: RigNeRF enables full control of head pose and facial expressions learned from a single portrait video.
We demonstrate the effectiveness of our method on free view synthesis of a portrait scene with explicit head pose and expression controls.
- Score: 52.91370717599413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Volumetric neural rendering methods, such as neural radiance fields (NeRFs),
have enabled photo-realistic novel view synthesis. However, in their standard
form, NeRFs do not support the editing of objects, such as a human head, within
a scene. In this work, we propose RigNeRF, a system that goes beyond just novel
view synthesis and enables full control of head pose and facial expressions
learned from a single portrait video. We model changes in head pose and facial
expressions using a deformation field that is guided by a 3D morphable face
model (3DMM). The 3DMM effectively acts as a prior for RigNeRF that learns to
predict only residuals to the 3DMM deformations and allows us to render novel
(rigid) poses and (non-rigid) expressions that were not present in the input
sequence. Using only a smartphone-captured short video of a subject for
training, we demonstrate the effectiveness of our method on free view synthesis
of a portrait scene with explicit head pose and expression controls. The
project page can be found here:
http://shahrukhathar.github.io/2022/06/06/RigNeRF.html
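The abstract's core idea, a deformation field where a 3DMM supplies a coarse prior and a network predicts only a residual correction, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`dmm_deformation`, `residual_mlp`, `deform`) and the toy linear deformation are assumptions for demonstration only.

```python
import numpy as np

def dmm_deformation(points, expression, pose):
    """Stand-in for the 3DMM-guided deformation of 3D points.

    A real implementation would deform each point according to a 3D
    morphable face model driven by expression and pose parameters;
    here a toy linear map of the codes serves as a placeholder.
    """
    # Toy prior: displace all points by a scalar function of the codes.
    offset = 0.01 * expression.sum() + 0.02 * pose.sum()
    return points + offset

def residual_mlp(points, expression, pose):
    """Stand-in for the learned residual network.

    In the paper's formulation the network predicts only residuals to
    the 3DMM deformations; a freshly initialized residual is near zero.
    """
    return np.zeros_like(points)

def deform(points, expression, pose):
    """Total deformation = 3DMM prior + learned residual."""
    return dmm_deformation(points, expression, pose) + \
        residual_mlp(points, expression, pose)

# Toy query: two 3D points, a 10-dim expression code, a 6-dim pose code.
points = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
expression = np.ones(10)
pose = np.ones(6)
print(deform(points, expression, pose))
```

Because the residual starts at zero, the output initially follows the 3DMM prior exactly; training would then refine the residual to capture deformations the 3DMM cannot express.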
Related papers
- COLMAP-Free 3D Gaussian Splatting [88.420322646756]
We propose a novel method to perform novel view synthesis without any SfM preprocessing.
We process the input frames in a sequential manner and progressively grow the 3D Gaussians set by taking one input frame at a time.
Our method significantly improves over previous approaches in view synthesis and camera pose estimation under large motion changes.
arXiv Detail & Related papers (2023-12-12T18:39:52Z)
- Controllable Dynamic Appearance for Neural 3D Portraits [54.29179484318194]
We propose CoDyNeRF, a system that enables the creation of fully controllable 3D portraits in real-world capture conditions.
CoDyNeRF learns to approximate illumination dependent effects via a dynamic appearance model.
We demonstrate the effectiveness of our method on free view synthesis of a portrait scene with explicit head pose and expression controls.
arXiv Detail & Related papers (2023-09-20T02:24:40Z)
- Reconstructing Personalized Semantic Facial NeRF Models From Monocular Video [27.36067360218281]
We present a novel semantic model for human head defined with neural radiance field.
The 3D-consistent head model consists of a set of disentangled and interpretable bases and can be driven by low-dimensional expression coefficients.
With a short monocular RGB video as input, our method can construct the subject's semantic facial NeRF model in only ten to twenty minutes.
arXiv Detail & Related papers (2022-10-12T11:56:52Z)
- Multi-NeuS: 3D Head Portraits from Single Image with Neural Implicit Functions [70.04394678730968]
We present an approach for the reconstruction of 3D human heads from one or few views.
The underlying neural architecture learns a model of the objects and generalizes across them.
Our model can fit novel heads from just a hundred videos or one-shot 3D scans.
arXiv Detail & Related papers (2022-09-07T21:09:24Z)
- Learning Dynamic Facial Radiance Fields for Few-Shot Talking Head Synthesis [90.43371339871105]
We propose Dynamic Facial Radiance Fields (DFRF) for few-shot talking head synthesis.
DFRF conditions the face radiance field on 2D appearance images to learn the face prior.
Experiments show DFRF can synthesize natural and high-quality audio-driven talking head videos for novel identities with only 40k iterations.
arXiv Detail & Related papers (2022-07-24T16:46:03Z)
- Novel View Synthesis for High-fidelity Headshot Scenes [5.33510552066148]
We find that NeRF can render new views while maintaining geometric consistency, but it does not properly maintain skin details, such as moles and pores.
We propose a method to use both NeRF and 3DMM to synthesize a high-fidelity novel view of a scene with a face.
arXiv Detail & Related papers (2022-05-31T08:14:15Z)
- PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering [56.762094966235566]
A Portrait Image Neural Renderer is proposed to control face motions using the parameters of three-dimensional morphable face models.
The proposed model can generate photo-realistic portrait images with accurate movements according to intuitive modifications.
Our model can generate coherent videos with convincing movements from only a single reference image and a driving audio stream.
arXiv Detail & Related papers (2021-09-17T07:24:16Z)
- FLAME-in-NeRF: Neural Control of Radiance Fields for Free View Face Animation [37.39945646282971]
This paper presents a neural rendering method for controllable portrait video synthesis.
We leverage the expression space of a 3D morphable face model (3DMM) to represent the distribution of human facial expressions.
We demonstrate the effectiveness of our method on free view synthesis of portrait videos with photorealistic expression controls.
arXiv Detail & Related papers (2021-08-10T20:41:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.