High-fidelity Facial Avatar Reconstruction from Monocular Video with
Generative Priors
- URL: http://arxiv.org/abs/2211.15064v1
- Date: Mon, 28 Nov 2022 04:49:46 GMT
- Title: High-fidelity Facial Avatar Reconstruction from Monocular Video with
Generative Priors
- Authors: Yunpeng Bai, Yanbo Fan, Xuan Wang, Yong Zhang, Jingxiang Sun, Chun
Yuan, Ying Shan
- Abstract summary: We propose a new method for NeRF-based facial avatar reconstruction that utilizes a 3D-aware generative prior.
Compared with existing works, we obtain superior novel view synthesis results and faithful face reenactment performance.
- Score: 29.293166730794606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-fidelity facial avatar reconstruction from a monocular video is a
significant research problem in computer graphics and computer vision.
Recently, Neural Radiance Field (NeRF) has shown impressive novel view
rendering results and has been considered for facial avatar reconstruction.
However, the complex facial dynamics and missing 3D information in monocular
videos raise significant challenges for faithful facial reconstruction. In this
work, we propose a new method for NeRF-based facial avatar reconstruction that
utilizes a 3D-aware generative prior. Unlike existing works that depend on a
conditional deformation field for dynamic modeling, we propose to learn a
personalized generative prior, formulated as a local, low-dimensional subspace
in the latent space of a 3D-GAN. We propose an efficient method to construct
this personalized prior from a small set of facial images of a given
individual. Once learned, it allows photo-realistic rendering from novel views,
and face reenactment can be realized by navigating in the latent space. Our
method is applicable to different driving signals, including RGB images, 3DMM
coefficients, and audio. Compared with existing works, we obtain superior novel
view synthesis results and faithful face reenactment performance.
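The abstract describes the personalized prior as a local, low-dimensional subspace of a 3D-GAN's latent space, built from a small set of one person's images, with reenactment performed by navigating that subspace. Below is a minimal sketch of that idea, assuming the latent codes obtained by inverting the person's images are already available; the PCA construction, all names, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: fit a local, low-dimensional subspace around one
# person's inverted latent codes, then constrain arbitrary (driven) codes
# to that subspace. Illustrative only, not the paper's actual method.
import numpy as np

def fit_personal_subspace(latents: np.ndarray, rank: int = 10):
    """latents: (N, D) inverted codes for one identity; returns (mean, basis)."""
    mean = latents.mean(axis=0)
    # SVD of the centered codes; the top `rank` right-singular vectors
    # span the local subspace around this person's mean latent.
    _, _, vt = np.linalg.svd(latents - mean, full_matrices=False)
    return mean, vt[:rank]                      # basis: (rank, D)

def project_to_subspace(code: np.ndarray, mean: np.ndarray, basis: np.ndarray):
    """Constrain an arbitrary (e.g., expression-driven) code to the prior."""
    coeffs = basis @ (code - mean)              # low-dimensional coordinates
    return mean + basis.T @ coeffs              # back to the full latent space

# Toy usage: 50 inverted codes in a 512-D latent space (a StyleGAN-like size).
rng = np.random.default_rng(0)
codes = rng.normal(size=(50, 512))
mean, basis = fit_personal_subspace(codes, rank=10)
driven = rng.normal(size=512)                   # stand-in for a driving signal's code
constrained = project_to_subspace(driven, mean, basis)
```

In this toy formulation, constraining every driven code to the personal subspace is what would keep reenactment faithful to the captured identity while the 3D-GAN supplies view consistency.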
Related papers
- GAN-Avatar: Controllable Personalized GAN-based Human Head Avatar [48.21353924040671]
We propose to learn person-specific animatable avatars from images without assuming access to precise facial expression tracking.
We learn a mapping from 3DMM facial expression parameters to the latent space of the generative model (a minimal sketch follows this entry).
With this scheme, we decouple 3D appearance reconstruction from animation control to achieve high fidelity in image synthesis.
arXiv Detail & Related papers (2023-11-22T19:13:00Z)
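The GAN-Avatar summary above mentions mapping 3DMM expression parameters into the generative model's latent space. A minimal, hypothetical sketch of such a mapping network follows; the two-layer architecture and all sizes are assumptions, not the paper's design.

```python
# Hypothetical sketch of GAN-Avatar's control scheme: a small mapping network
# from 3DMM expression coefficients to the generator's latent space.
import numpy as np

class ExpressionToLatent:
    """Two-layer MLP: 3DMM expression coeffs (e.g., 64-D) -> latent code (e.g., 512-D)."""
    def __init__(self, expr_dim=64, hidden=256, latent_dim=512, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=expr_dim ** -0.5, size=(expr_dim, hidden))
        self.w2 = rng.normal(scale=hidden ** -0.5, size=(hidden, latent_dim))

    def __call__(self, expr: np.ndarray) -> np.ndarray:
        h = np.maximum(expr @ self.w1, 0.0)      # ReLU hidden layer
        return h @ self.w2                       # predicted latent code

# Animation then reduces to: latent = mapper(expr); image = generator(latent),
# which is how appearance (the generator) stays decoupled from control.
mapper = ExpressionToLatent()
latent = mapper(np.zeros(64))                    # neutral expression -> some latent
```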
- NOFA: NeRF-based One-shot Facial Avatar Reconstruction [45.11455702291703]
3D facial avatar reconstruction has been a significant research topic in computer graphics and computer vision.
We propose a one-shot 3D facial avatar reconstruction framework that only requires a single source image to reconstruct a high-fidelity 3D facial avatar.
arXiv Detail & Related papers (2023-07-07T07:58:18Z)
- NeuFace: Realistic 3D Neural Face Rendering from Multi-view Images [18.489290898059462]
This paper presents a novel 3D face rendering model, namely NeuFace, to learn accurate and physically-meaningful underlying 3D representations.
We introduce an approximated BRDF integration and a simple yet new low-rank prior, which effectively reduce ambiguity and improve the recovered facial BRDFs (a generic illustration of such a low-rank prior follows this entry).
arXiv Detail & Related papers (2023-03-24T15:57:39Z)
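NeuFace's "low-rank prior" can be made concrete with a short linear-algebra sketch. This illustrates generic low-rank regularization under assumed shapes, not NeuFace's actual BRDF formulation.

```python
# Hypothetical illustration of a low-rank prior: instead of estimating an
# independent BRDF parameter vector at each of V mesh vertices (V x K unknowns),
# constrain the V x K matrix to rank r by factorizing it into per-vertex
# coefficients (V x r) and shared basis BRDFs (r x K). All sizes are made up.
import numpy as np

V, K, r = 5000, 9, 4                 # vertices, BRDF params per vertex, rank
rng = np.random.default_rng(0)
coeffs = rng.normal(size=(V, r))     # optimized per vertex
basis = rng.normal(size=(r, K))      # optimized once, shared across the face

brdf = coeffs @ basis                # (V, K): every vertex is a blend of r bases
# The search space shrinks from V*K to V*r + r*K parameters, which is the
# sense in which such a prior lowers the ambiguity of inverse rendering.
print(brdf.shape, V * K, V * r + r * K)
```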
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Image-to-Video Generation via 3D Facial Dynamics [78.01476554323179]
We present a versatile model, FaceAnime, for various video generation tasks from still images.
Our model is versatile for various AR/VR and entertainment applications, such as face video generation and face video prediction.
arXiv Detail & Related papers (2021-05-31T02:30:11Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstruction and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Instead of relying on graphics rules, GAR learns to model complicated real-world images and is capable of producing realistic renderings.
Our method achieves state-of-the-art performance on multiple face reconstruction benchmarks.
arXiv Detail & Related papers (2021-05-06T04:16:06Z)
- Head2Head++: Deep Facial Attributes Re-Targeting [6.230979482947681]
We leverage the 3D geometry of faces and Generative Adversarial Networks (GANs) to design a novel deep learning architecture for the task of facial and head reenactment.
We manage to capture the complex non-rigid facial motion from the driving monocular performances and synthesise temporally consistent videos.
Our system performs end-to-end reenactment at near real-time speed (18 fps).
arXiv Detail & Related papers (2020-06-17T23:38:37Z)
- AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild" [105.28776215113352]
AvatarMe is the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail.
It outperforms existing methods by a significant margin and reconstructs authentic, 4K-by-6K-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2020-03-30T22:17:54Z)