GPHM: Gaussian Parametric Head Model for Monocular Head Avatar Reconstruction
- URL: http://arxiv.org/abs/2407.15070v2
- Date: Wed, 23 Oct 2024 02:35:58 GMT
- Title: GPHM: Gaussian Parametric Head Model for Monocular Head Avatar Reconstruction
- Authors: Yuelang Xu, Zhaoqi Su, Qingyao Wu, Yebin Liu
- Abstract summary: High-fidelity 3D human head avatars are crucial for applications in VR/AR, digital human, and film production.
Recent advances have leveraged morphable face models to generate animated head avatars, representing varying identities and expressions.
We introduce the 3D Gaussian Parametric Head Model, which employs 3D Gaussians to accurately represent the complexities of the human head.
- Score: 47.113910048252805
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Creating high-fidelity 3D human head avatars is crucial for applications in VR/AR, digital humans, and film production. Recent advances have leveraged morphable face models to generate animated head avatars from easily accessible data, representing varying identities and expressions within a low-dimensional parametric space. However, existing methods often struggle to model complex appearance details, e.g., hairstyles, and suffer from low rendering quality and efficiency. In this paper, we introduce a novel approach, the 3D Gaussian Parametric Head Model, which employs 3D Gaussians to accurately represent the complexities of the human head, allowing precise control over both identity and expression. The Gaussian model can handle intricate details, enabling realistic representations of varying appearances and complex expressions. Furthermore, we present a well-designed training framework that ensures smooth convergence, providing a robust guarantee for learning the rich content. Our method achieves high-quality, photo-realistic rendering with real-time efficiency, making it a valuable contribution to the field of parametric head models. Finally, we apply the 3D Gaussian Parametric Head Model to monocular video and few-shot head avatar reconstruction tasks, enabling instant reconstruction of high-quality 3D head avatars even when input data is extremely limited, and surpassing previous methods in both reconstruction quality and training speed.
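To make the idea concrete, here is a minimal, hedged sketch of what a Gaussian parametric head decoder could look like: shared identity and expression latent codes are decoded into per-Gaussian attributes (center offsets, rotation, scale, opacity, color). The Gaussian count, latent sizes, and the single-MLP decoder are illustrative assumptions, not the architecture described in the paper.

```python
# Hedged sketch of a Gaussian parametric head decoder. The Gaussian count,
# latent sizes, and the single-MLP decoder are illustrative assumptions,
# not the architecture described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_GAUSS = 10_000            # assumed number of Gaussians covering the head
ID_DIM, EXPR_DIM = 256, 64  # assumed identity / expression latent sizes

class GaussianHeadDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Canonical Gaussian centers, learned jointly with the decoder.
        self.base_xyz = nn.Parameter(0.1 * torch.randn(N_GAUSS, 3))
        # 14 attributes per Gaussian: 3 xyz offset + 4 quaternion
        # + 3 log-scale + 1 opacity logit + 3 RGB.
        self.mlp = nn.Sequential(
            nn.Linear(ID_DIM + EXPR_DIM, 512), nn.ReLU(),
            nn.Linear(512, N_GAUSS * 14),
        )

    def forward(self, id_code, expr_code):  # (B, ID_DIM), (B, EXPR_DIM)
        out = self.mlp(torch.cat([id_code, expr_code], dim=-1))
        out = out.view(-1, N_GAUSS, 14)
        return {
            "xyz": self.base_xyz + out[..., 0:3],        # deformed centers
            "rot": F.normalize(out[..., 3:7], dim=-1),   # unit quaternions
            "scale": out[..., 7:10].exp(),               # positive scales
            "opacity": out[..., 10:11].sigmoid(),
            "rgb": out[..., 11:14].sigmoid(),
        }
```

These attributes would then be handed to a differentiable Gaussian splatting rasterizer; monocular or few-shot reconstruction would amount to optimizing the identity and expression codes (and optionally fine-tuning the decoder) against a photometric loss on the input frames.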
Related papers
- Generalizable and Animatable Gaussian Head Avatar [50.34788590904843]
We propose Generalizable and Animatable Gaussian head Avatar (GAGAvatar) for one-shot animatable head avatar reconstruction.
We generate the parameters of 3D Gaussians from a single image in a single forward pass.
Our method exhibits superior performance compared to previous methods in terms of reconstruction quality and expression accuracy.
arXiv Detail & Related papers (2024-10-10T14:29:00Z)
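The GAGAvatar entry above describes predicting 3D Gaussian parameters from a single image in one forward pass. Below is a hedged, self-contained sketch of that idea; the toy CNN backbone and per-pixel output layout are assumptions for illustration, not GAGAvatar's actual design.

```python
# Hedged sketch of one-shot, feed-forward Gaussian prediction: a CNN encodes
# the portrait and a 1x1 conv head regresses Gaussian attributes in a single
# forward pass. The toy backbone and per-pixel output layout are assumptions.
import torch
import torch.nn as nn

class OneShotGaussianPredictor(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(  # stand-in for a real image backbone
            nn.Conv2d(3, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        # One Gaussian per feature-map cell, 14 attributes each
        # (3 xyz + 4 quaternion + 3 scale + 1 opacity + 3 RGB).
        self.head = nn.Conv2d(feat_dim, 14, kernel_size=1)

    def forward(self, image):                      # image: (B, 3, H, W)
        params = self.head(self.encoder(image))    # (B, 14, H/4, W/4)
        return params.flatten(2).transpose(1, 2)   # (B, N, 14)

gaussians = OneShotGaussianPredictor()(torch.rand(1, 3, 256, 256))
```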
We introduce the "Gaussian Deja-vu" framework, which first obtains a generalized model of the head avatar and then personalizes the result.
For personalization, we propose learnable expression-aware rectification blendmaps, ensuring rapid convergence without relying on neural networks.
It outperforms state-of-the-art 3D Gaussian head avatars in photorealistic quality and cuts training time to a quarter or less of that of existing methods.
arXiv Detail & Related papers (2024-09-23T00:11:30Z)
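To illustrate the "Gaussian Deja-vu" personalization step described above: a sketch where learnable expression-weighted blendmaps rectify a base texture and are optimized directly, with no neural network in the personalization loop. The map count, resolution, and linear blending rule are assumptions.

```python
# Hedged sketch of expression-aware rectification blendmaps: learnable offset
# maps, blended linearly by expression coefficients and added to a base
# texture, optimized directly with no neural network in the loop. The map
# count, resolution, and linear blending rule are assumptions.
import torch

K, H, W = 52, 256, 256                   # assumed: 52 expression coefficients
base_tex = torch.zeros(3, H, W)          # base texture of generalized avatar
blendmaps = torch.zeros(K, 3, H, W, requires_grad=True)  # learnable maps

def rectified_texture(expr):             # expr: (K,) expression coefficients
    # Linear blend of the learnable maps, analogous to blendshape weights.
    return base_tex + torch.einsum("k,kchw->chw", expr, blendmaps)

optimizer = torch.optim.Adam([blendmaps], lr=1e-2)  # optimized directly
```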
- Gaussian3Diff: 3D Gaussian Diffusion for 3D Full Head Synthesis and Editing [53.05069432989608]
We present a novel framework for generating 3D human heads with remarkable flexibility.
Our method facilitates the creation of diverse and realistic 3D human heads with fine-grained editing over facial features and expressions.
arXiv Detail & Related papers (2023-12-05T19:05:58Z)
- HHAvatar: Gaussian Head Avatar with Dynamic Hairs [27.20228210350169]
We propose HHAvatar, represented by controllable 3D Gaussians, for high-fidelity head avatars with dynamic hair modeling.
Our approach outperforms other state-of-the-art sparse-view methods, achieving ultra high-fidelity rendering quality at 2K resolution.
arXiv Detail & Related papers (2023-12-05T11:01:44Z)
- GaussianHead: High-fidelity Head Avatars with Learnable Gaussian Derivation [35.39887092268696]
This paper presents a framework to model the human head in motion with anisotropic 3D Gaussians.
In experiments, our method can produce high-fidelity renderings, outperforming state-of-the-art approaches in reconstruction, cross-identity reenactment, and novel view synthesis tasks.
arXiv Detail & Related papers (2023-12-04T05:24:45Z)
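For context on the "anisotropic 3D Gaussians" in the GaussianHead entry above: in the standard 3D Gaussian splatting formulation (Kerbl et al. 2023), each Gaussian's covariance is factored into a rotation R (built from a unit quaternion) and a diagonal scale matrix S, so elongated primitives can follow thin structures such as hair strands:

```latex
% Standard 3DGS anisotropic covariance, shown for context:
\Sigma = R\,S\,S^{\top}R^{\top}, \qquad
G(\mathbf{x}) = \exp\!\left(-\tfrac{1}{2}
  (\mathbf{x}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\right)
```

The factorization shown is the generic 3DGS one; GaussianHead's learnable derivation builds on primitives of this form rather than redefining it.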
- HAvatar: High-fidelity Head Avatar via Facial Model Conditioned Neural Radiance Field [44.848368616444446]
We introduce a novel hybrid explicit-implicit 3D representation, Facial Model Conditioned Neural Radiance Field, which integrates the expressiveness of NeRF and the prior information from the parametric template.
By adopting an overall GAN-based architecture with an image-to-image translation network, we achieve high-resolution, realistic, and view-consistent synthesis of dynamic head appearance.
arXiv Detail & Related papers (2023-09-29T10:45:22Z)
- Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z)
- Learning Personalized High Quality Volumetric Head Avatars from Monocular RGB Videos [47.94545609011594]
We propose a method to learn a high-quality implicit 3D head avatar from a monocular RGB video captured in the wild.
Our hybrid pipeline combines the geometry prior and dynamic tracking of a 3DMM with a neural radiance field to achieve fine-grained control and photorealism.
arXiv Detail & Related papers (2023-04-04T01:10:04Z)
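The hybrid pipeline in the entry above conditions a neural radiance field on a tracked 3DMM. A minimal sketch of such conditioning, assuming simple concatenation of the sample point with the tracked expression coefficients (the conditioning scheme and layer sizes are illustrative, not the paper's design):

```python
# Hedged sketch of a 3DMM-conditioned radiance field: the NeRF MLP takes a 3D
# sample point together with tracked 3DMM expression coefficients, so the
# avatar can be driven by the morphable model. The concatenation-based
# conditioning and layer sizes are assumptions, not the paper's design.
import torch
import torch.nn as nn

class ConditionedNeRF(nn.Module):
    def __init__(self, expr_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + expr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),               # RGB + density
        )

    def forward(self, x, expr):
        # x: (N, 3) sample points; expr: (expr_dim,) tracked 3DMM coefficients
        out = self.mlp(torch.cat([x, expr.expand(x.shape[0], -1)], dim=-1))
        return out[..., :3].sigmoid(), out[..., 3].relu()  # color, density
```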
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)