NOFA: NeRF-based One-shot Facial Avatar Reconstruction
- URL: http://arxiv.org/abs/2307.03441v1
- Date: Fri, 7 Jul 2023 07:58:18 GMT
- Title: NOFA: NeRF-based One-shot Facial Avatar Reconstruction
- Authors: Wangbo Yu, Yanbo Fan, Yong Zhang, Xuan Wang, Fei Yin, Yunpeng Bai,
Yan-Pei Cao, Ying Shan, Yang Wu, Zhongqian Sun, Baoyuan Wu
- Abstract summary: 3D facial avatar reconstruction has been a significant research topic in computer graphics and computer vision.
We propose a one-shot 3D facial avatar reconstruction framework that requires only a single source image to reconstruct a high-fidelity 3D facial avatar.
- Score: 45.11455702291703
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D facial avatar reconstruction has been a significant research topic in
computer graphics and computer vision, where photo-realistic rendering and
flexible controls over poses and expressions are necessary for many related
applications. Recently, its performance has been greatly improved with the
development of neural radiance fields (NeRF). However, most existing NeRF-based
facial avatars focus on subject-specific reconstruction and reenactment,
requiring multi-shot images containing different views of the specific subject
for training, and the learned model cannot generalize to new identities,
limiting its further applications. In this work, we propose a one-shot 3D
facial avatar reconstruction framework that only requires a single source image
to reconstruct a high-fidelity 3D facial avatar. To address the lack of
generalization ability and the absence of multi-view information, we leverage
the generative prior of a 3D GAN and develop an efficient encoder-decoder
network to reconstruct the canonical neural volume of the source image, and
further propose a compensation network to complement facial details. To enable
fine-grained control over facial dynamics, we propose a deformation field that
warps the canonical volume into driven expressions. Extensive experimental
comparisons show that our method achieves superior synthesis results compared
to several state-of-the-art methods.
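The abstract outlines a three-stage data flow: an encoder-decoder maps the single source image to a canonical neural volume, a compensation network adds back facial details, and an expression-conditioned deformation field warps the canonical volume before NeRF-style volume rendering. Below is a minimal PyTorch sketch of that flow; all module names, layer sizes, the volume resolution, and the 64-d expression code are illustrative assumptions rather than the authors' architecture, and the volume renderer that would consume the deformed volume is omitted.

```python
# Minimal, hypothetical sketch of the pipeline described in the abstract.
# Every name and size here is an illustrative assumption, not the authors' model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneShotAvatarSketch(nn.Module):
    def __init__(self, feat_ch: int = 16, vol_size: int = 8, expr_dim: int = 64):
        super().__init__()
        c, s = feat_ch, vol_size
        # Encoder-decoder: single source image -> canonical neural volume.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, c * s ** 3),
        )
        # Compensation network: complements facial details the coarse
        # reconstruction misses (stand-in: a single 3D convolution).
        self.compensation = nn.Conv3d(c, c, 3, padding=1)
        # Deformation field: expression code -> per-voxel offsets that warp
        # the canonical volume into the driven expression.
        self.deform = nn.Linear(expr_dim, 3 * s ** 3)
        self.c, self.s = c, s

    def forward(self, source_image: torch.Tensor, expr_code: torch.Tensor) -> torch.Tensor:
        b, c, s = source_image.shape[0], self.c, self.s
        canonical = self.encoder(source_image).view(b, c, s, s, s)
        canonical = canonical + self.compensation(canonical)
        # Small offsets added to a base grid in [-1, 1]^3, as grid_sample expects.
        offsets = 0.1 * self.deform(expr_code).view(b, s, s, s, 3)
        axis = torch.linspace(-1.0, 1.0, s, device=source_image.device)
        base = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
        grid = base.unsqueeze(0).expand(b, -1, -1, -1, -1) + offsets
        # The deformed volume would then be volume-rendered NeRF-style (omitted).
        return F.grid_sample(canonical, grid, align_corners=True)

# Illustrative usage: one 256x256 source image and one expression code.
model = OneShotAvatarSketch()
deformed = model(torch.randn(1, 3, 256, 256), torch.randn(1, 64))
print(deformed.shape)  # torch.Size([1, 16, 8, 8, 8])
```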
Related papers
- Single Image, Any Face: Generalisable 3D Face Generation [59.9369171926757]
We propose a novel model, Gen3D-Face, which generates 3D human faces from an unconstrained single-image input.
To the best of our knowledge, this is the first attempt and benchmark for creating photorealistic 3D human face avatars from single images.
arXiv Detail & Related papers (2024-09-25T14:56:37Z)
- GPAvatar: Generalizable and Precise Head Avatar from Image(s) [71.555405205039]
GPAvatar is a framework that reconstructs 3D head avatars from one or several images in a single forward pass.
The proposed method achieves faithful identity reconstruction, precise expression control, and multi-view consistency.
arXiv Detail & Related papers (2024-01-18T18:56:34Z)
- InvertAvatar: Incremental GAN Inversion for Generalized Head Avatars [40.10906393484584]
We propose a novel framework that enhances avatar reconstruction performance using an incremental GAN inversion algorithm designed to increase fidelity from multiple frames.
Our architecture emphasizes pixel-aligned image-to-image translation, reducing the need to learn correspondences between observation and canonical spaces.
The proposed paradigm demonstrates state-of-the-art performance on one-shot and few-shot avatar animation tasks.
arXiv Detail & Related papers (2023-12-03T18:59:15Z)
- MA-NeRF: Motion-Assisted Neural Radiance Fields for Face Synthesis from Sparse Images [21.811067296567252]
We propose a novel framework that can reconstruct a high-fidelity drivable face avatar and handle unseen expressions.
At the core of our implementation are a structured displacement feature and a semantic-aware learning module.
Our method achieves much better results than current state-of-the-art methods.
arXiv Detail & Related papers (2023-06-17T13:49:56Z)
- Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z)
- High-fidelity Facial Avatar Reconstruction from Monocular Video with Generative Priors [29.293166730794606]
We propose a new method for NeRF-based facial avatar reconstruction that utilizes a 3D-aware generative prior.
Compared with existing works, we obtain superior novel view synthesis results and faithful face reenactment performance.
arXiv Detail & Related papers (2022-11-28T04:49:46Z)
- Free-HeadGAN: Neural Talking Head Synthesis with Explicit Gaze Control [54.079327030892244]
Free-HeadGAN is a person-generic neural talking head synthesis system.
We show that modeling faces with sparse 3D facial landmarks is sufficient for achieving state-of-the-art generative performance.
arXiv Detail & Related papers (2022-08-03T16:46:08Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstructions and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- Head2Head++: Deep Facial Attributes Re-Targeting [6.230979482947681]
We leverage the 3D geometry of faces and Generative Adversarial Networks (GANs) to design a novel deep learning architecture for the task of facial and head reenactment.
We capture the complex non-rigid facial motion from driving monocular performances and synthesise temporally consistent videos.
Our system performs end-to-end reenactment at near real-time speed (18 fps).
arXiv Detail & Related papers (2020-06-17T23:38:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.