Controllable Dynamic Appearance for Neural 3D Portraits
- URL: http://arxiv.org/abs/2309.11009v2
- Date: Thu, 21 Sep 2023 17:35:14 GMT
- Title: Controllable Dynamic Appearance for Neural 3D Portraits
- Authors: ShahRukh Athar, Zhixin Shu, Zexiang Xu, Fujun Luan, Sai Bi, Kalyan
Sunkavalli and Dimitris Samaras
- Abstract summary: We propose CoDyNeRF, a system that enables the creation of fully controllable 3D portraits in real-world capture conditions.
CoDyNeRF learns to approximate illumination dependent effects via a dynamic appearance model.
We demonstrate the effectiveness of our method on free view synthesis of a portrait scene with explicit head pose and expression controls.
- Score: 54.29179484318194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in Neural Radiance Fields (NeRFs) have made it possible to
reconstruct and reanimate dynamic portrait scenes with control over head-pose,
facial expressions and viewing direction. However, training such models assumes
photometric consistency over the deformed region, e.g., the face must be evenly
lit as it deforms with changing head-pose and facial expression. Such
photometric consistency across frames of a video is hard to maintain, even in
studio environments, thus making the created reanimatable neural portraits
prone to artifacts during reanimation. In this work, we propose CoDyNeRF, a
system that enables the creation of fully controllable 3D portraits in
real-world capture conditions. CoDyNeRF learns to approximate illumination
dependent effects via a dynamic appearance model in the canonical space that is
conditioned on predicted surface normals and the facial expressions and
head-pose deformations. The surface normals prediction is guided using 3DMM
normals that act as a coarse prior for the normals of the human head, where
direct prediction of normals is hard due to rigid and non-rigid deformations
induced by head-pose and facial expression changes. Using only a
smartphone-captured short video of a subject for training, we demonstrate the
effectiveness of our method on free view synthesis of a portrait scene with
explicit head pose and expression controls, and realistic lighting effects. The
project page can be found here:
http://shahrukhathar.github.io/2023/08/22/CoDyNeRF.html
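To make the abstract's mechanism concrete, below is a minimal PyTorch sketch of the two ideas it describes: a canonical-space dynamic appearance MLP conditioned on a predicted surface normal together with expression and head-pose codes, and a coarse prior loss that pulls predicted normals toward normals derived from the fitted 3DMM. All names (DynamicAppearanceMLP, normal_prior_loss) and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of CoDyNeRF-style dynamic appearance conditioning;
# module names and dimensions are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicAppearanceMLP(nn.Module):
    """Predicts per-point RGB from canonical-space point features, a
    predicted surface normal, and expression/head-pose deformation codes."""
    def __init__(self, feat_dim=256, expr_dim=50, pose_dim=6, hidden=128):
        super().__init__()
        in_dim = feat_dim + 3 + expr_dim + pose_dim  # features + normal + codes
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB
        )

    def forward(self, point_feat, normal, expr_code, pose_code):
        x = torch.cat([point_feat, normal, expr_code, pose_code], dim=-1)
        return torch.sigmoid(self.net(x))

def normal_prior_loss(pred_normals, mm_normals):
    """Coarse 3DMM prior: encourage predicted normals to align with
    normals rendered from the fitted 3D morphable model."""
    pred = F.normalize(pred_normals, dim=-1)
    prior = F.normalize(mm_normals, dim=-1)
    return (1.0 - (pred * prior).sum(dim=-1)).mean()

# Example usage with random stand-in tensors (batch of 1024 sample points):
point_feat = torch.randn(1024, 256)
normals = torch.randn(1024, 3)
rgb = DynamicAppearanceMLP()(point_feat, normals,
                             torch.randn(1024, 50), torch.randn(1024, 6))
loss = normal_prior_loss(normals, torch.randn(1024, 3))
```

Conditioning appearance on normals and deformation codes is what would let illumination-dependent effects vary with head pose and expression rather than being baked into a static texture; the 3DMM-normal term only regularizes the normal prediction, which the abstract notes is otherwise hard under rigid and non-rigid deformation.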
Related papers
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real time.
At the core of our method is a hierarchical head-model representation that captures the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z)
- AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections [78.81539337399391]
We present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements.
It is a generative model trained on unstructured 2D image collections without using 3D or video data.
A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces.
arXiv Detail & Related papers (2023-09-05T12:44:57Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- Explicitly Controllable 3D-Aware Portrait Generation [42.30481422714532]
We propose a 3D portrait generation network that produces consistent portraits according to semantic parameters for pose, identity, expression, and lighting.
Our method outperforms prior art in extensive experiments, producing realistic portraits with vivid expressions under natural lighting when viewed from free viewpoints.
arXiv Detail & Related papers (2022-09-12T17:40:08Z)
- NARRATE: A Normal Assisted Free-View Portrait Stylizer [42.38374601073052]
NARRATE is a novel pipeline that enables simultaneous editing of portrait lighting and perspective in a photorealistic manner.
We experimentally demonstrate that NARRATE achieves more photorealistic and reliable results than prior works.
We showcase vivid free-view facial animations as well as 3D-aware relighting, which help facilitate various AR/VR applications.
arXiv Detail & Related papers (2022-07-03T07:54:05Z)
- RigNeRF: Fully Controllable Neural 3D Portraits [52.91370717599413]
RigNeRF enables full control of head pose and facial expressions learned from a single portrait video.
We demonstrate the effectiveness of our method on free view synthesis of a portrait scene with explicit head pose and expression controls.
arXiv Detail & Related papers (2022-06-13T21:28:34Z)
- FLAME-in-NeRF: Neural control of Radiance Fields for Free View Face Animation [37.39945646282971]
This paper presents a neural rendering method for controllable portrait video synthesis.
We leverage the expression space of a 3D morphable face model (3DMM) to represent the distribution of human facial expressions.
We demonstrate the effectiveness of our method on free view synthesis of portrait videos with photorealistic expression controls.
arXiv Detail & Related papers (2021-08-10T20:41:15Z)
- Image-to-Video Generation via 3D Facial Dynamics [78.01476554323179]
We present FaceAnime, a versatile model for a range of video generation tasks from still images.
It supports various AR/VR and entertainment applications, such as face video generation and face video prediction.
arXiv Detail & Related papers (2021-05-31T02:30:11Z)