High-Fidelity 3D Head Avatars Reconstruction through Spatially-Varying
Expression Conditioned Neural Radiance Field
- URL: http://arxiv.org/abs/2310.06275v1
- Date: Tue, 10 Oct 2023 03:13:33 GMT
- Title: High-Fidelity 3D Head Avatars Reconstruction through Spatially-Varying
Expression Conditioned Neural Radiance Field
- Authors: Minghan Qin, Yifan Liu, Yuelang Xu, Xiaochen Zhao, Yebin Liu, Haoqian
Wang
- Abstract summary: We introduce a novel Spatially-Varying Expression (SVE) conditioning.
The proposed SVE-conditioned neural radiance field can deal with intricate facial expressions and achieve realistic rendering and geometry details of high-fidelity 3D head avatars.
Our method outperforms other state-of-the-art (SOTA) methods in rendering and geometry quality on mobile phone-collected and public datasets.
- Score: 44.37605022793316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One crucial aspect of 3D head avatar reconstruction lies in the details of
facial expressions. Although recent NeRF-based photo-realistic 3D head avatar
methods achieve high-quality avatar rendering, they still encounter challenges
in retaining intricate facial expression details because they overlook the
potential of specific expression variations at different spatial positions when
conditioning the radiance field. Motivated by this observation, we introduce a
novel Spatially-Varying Expression (SVE) conditioning. The SVE is produced by
a simple MLP-based generation network and encompasses both spatial positional
features and global expression information. Benefiting from the rich and
diverse information the SVE provides at different positions, the proposed
SVE-conditioned neural radiance field can handle intricate facial expressions
and achieve realistic rendering and geometric detail for high-fidelity 3D head
avatars.
Additionally, to further elevate the geometric and rendering quality, we
introduce a new coarse-to-fine training strategy that applies geometry
initialization at the coarse stage and adaptive importance sampling at the
fine stage. Extensive experiments indicate that our method
outperforms other state-of-the-art (SOTA) methods in rendering and geometry
quality on mobile phone-collected and public datasets.
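The abstract only names the components, so the following is a minimal PyTorch-style sketch of how an SVE-conditioned radiance field could be wired together: an MLP generation network fuses positionally encoded coordinates with a global expression code into a per-point SVE feature, and the radiance-field MLP consumes that feature instead of a single global code. All module names (SVEGenerator, SVEConditionedField), layer widths, the encoding scheme, and the expression-code size are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int = 10) -> torch.Tensor:
    # Standard NeRF-style sinusoidal encoding of 3D points (an assumption here).
    freqs = (2.0 ** torch.arange(num_freqs, dtype=torch.float32, device=x.device)) * torch.pi
    angles = x[..., None] * freqs                          # (N, 3, F)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(start_dim=-2)


class SVEGenerator(nn.Module):
    # Hypothetical MLP fusing spatial positional features with a global
    # expression code into a spatially-varying expression (SVE) feature.
    def __init__(self, pos_dim: int, expr_dim: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pos_dim + expr_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, pos_enc, expr):
        expr = expr.expand(pos_enc.shape[0], -1)            # broadcast the global code
        return self.net(torch.cat([pos_enc, expr], dim=-1))


class SVEConditionedField(nn.Module):
    # Radiance-field MLP conditioned on the per-point SVE feature rather than
    # on one globally tiled expression vector.
    def __init__(self, pos_dim: int, feat_dim: int = 64, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        self.color_head = nn.Linear(hidden, 3)

    def forward(self, pos_enc, sve_feat):
        h = self.trunk(torch.cat([pos_enc, sve_feat], dim=-1))
        return self.density_head(h), torch.sigmoid(self.color_head(h))


# Usage: the conditioning feature differs per sample point in space.
points = torch.rand(4096, 3)                                # points sampled along rays
expr_code = torch.rand(1, 79)                               # e.g. 3DMM expression coefficients (assumed size)
pos_enc = positional_encoding(points)
sve = SVEGenerator(pos_enc.shape[-1], expr_code.shape[-1])(pos_enc, expr_code)
density, color = SVEConditionedField(pos_enc.shape[-1])(pos_enc, sve)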
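For the fine stage, the abstract mentions adaptive importance sampling without further detail. As a stand-in, the sketch below shows the generic inverse-CDF importance sampler such a strategy typically builds on, drawing extra ray samples in proportion to the coarse stage's per-bin weights; the paper's specific adaptive criterion is not given in the abstract and is omitted here.

import torch


def sample_importance(bin_edges: torch.Tensor,
                      weights: torch.Tensor,
                      num_fine: int) -> torch.Tensor:
    # Inverse-CDF sampling of fine-stage depths along each ray.
    #   bin_edges: (R, B + 1) depth values bounding each coarse bin
    #   weights:   (R, B) non-negative contribution of each coarse bin
    #   returns:   (R, num_fine) fine-stage sample depths
    pdf = weights + 1e-5                                    # avoid zero-probability bins
    pdf = pdf / pdf.sum(dim=-1, keepdim=True)
    cdf = torch.cumsum(pdf, dim=-1)
    cdf = torch.cat([torch.zeros_like(cdf[..., :1]), cdf], dim=-1)   # (R, B + 1)

    u = torch.rand(cdf.shape[0], num_fine, device=cdf.device)
    idx = torch.searchsorted(cdf, u, right=True).clamp(1, cdf.shape[-1] - 1)

    cdf_lo = torch.gather(cdf, -1, idx - 1)
    cdf_hi = torch.gather(cdf, -1, idx)
    edge_lo = torch.gather(bin_edges, -1, idx - 1)
    edge_hi = torch.gather(bin_edges, -1, idx)

    t = (u - cdf_lo) / (cdf_hi - cdf_lo).clamp(min=1e-5)    # position inside the chosen bin
    return edge_lo + t * (edge_hi - edge_lo)


# Usage: concentrate fine samples where the coarse pass saw high density.
edges = torch.linspace(2.0, 6.0, 65).expand(1024, 65)       # 1024 rays, 64 coarse bins
coarse_weights = torch.rand(1024, 64)
fine_depths = sample_importance(edges, coarse_weights, num_fine=64)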
Related papers
- HAvatar: High-fidelity Head Avatar via Facial Model Conditioned Neural
Radiance Field [44.848368616444446]
We introduce a novel hybrid explicit-implicit 3D representation, Facial Model Conditioned Neural Radiance Field, which integrates the expressiveness of NeRF and the prior information from the parametric template.
By adopting an overall GAN-based architecture using an image-to-image translation network, we achieve high-resolution, realistic and view-consistent synthesis of dynamic head appearance.
arXiv Detail & Related papers (2023-09-29T10:45:22Z)
- Neural Point-based Volumetric Avatar: Surface-guided Neural Points for
Efficient and Photorealistic Volumetric Head Avatar [62.87222308616711]
We propose Neural Point-based Volumetric Avatar, a method that adopts the neural point representation and the neural volume rendering process.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map.
By design, our method is better equipped to handle topologically changing regions and thin structures, while also ensuring accurate expression control when animating avatars.
arXiv Detail & Related papers (2023-07-11T03:40:10Z)
- Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture
Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- Learning Personalized High Quality Volumetric Head Avatars from
Monocular RGB Videos [47.94545609011594]
We propose a method to learn a high-quality implicit 3D head avatar from a monocular RGB video captured in the wild.
Our hybrid pipeline combines the geometry prior and dynamic tracking of a 3DMM with a neural radiance field to achieve fine-grained control and photorealism.
arXiv Detail & Related papers (2023-04-04T01:10:04Z)
- SIDER: Single-Image Neural Optimization for Facial Geometric Detail
Recovery [54.64663713249079]
SIDER is a novel photometric optimization method that recovers detailed facial geometry from a single image in an unsupervised manner.
In contrast to prior work, SIDER does not rely on any dataset priors and does not require additional supervision from multiple views, lighting changes or ground truth 3D shape.
arXiv Detail & Related papers (2021-08-11T22:34:53Z)