Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians
- URL: http://arxiv.org/abs/2312.03029v2
- Date: Sat, 30 Mar 2024 14:19:10 GMT
- Title: Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians
- Authors: Yuelang Xu, Benwang Chen, Zhe Li, Hongwen Zhang, Lizhen Wang, Zerong Zheng, Yebin Liu
- Abstract summary: We propose controllable 3D Gaussian Head Avatars for lightweight sparse-view setups.
We show our approach outperforms other state-of-the-art sparse-view methods, achieving ultra high-fidelity rendering quality at 2K resolution even under exaggerated expressions.
- Score: 41.86540576028268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating high-fidelity 3D head avatars has always been a research hotspot, but there remains a great challenge under lightweight sparse view setups. In this paper, we propose Gaussian Head Avatar represented by controllable 3D Gaussians for high-fidelity head avatar modeling. We optimize the neutral 3D Gaussians and a fully learned MLP-based deformation field to capture complex expressions. The two parts benefit each other, thereby our method can model fine-grained dynamic details while ensuring expression accuracy. Furthermore, we devise a well-designed geometry-guided initialization strategy based on implicit SDF and Deep Marching Tetrahedra for the stability and convergence of the training procedure. Experiments show our approach outperforms other state-of-the-art sparse-view methods, achieving ultra high-fidelity rendering quality at 2K resolution even under exaggerated expressions.
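The abstract's core idea, a set of neutral 3D Gaussians whose centers are displaced by a fully learned MLP-based deformation field conditioned on an expression code, can be sketched as follows. This is a toy NumPy stand-in, not the paper's architecture: the Gaussian count, the two-layer MLP, and the 32-D expression code are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: N neutral Gaussians, a 32-D expression code.
N, EXPR_DIM, HIDDEN = 1024, 32, 64

# Neutral 3D Gaussian centers (jointly optimized with the MLP in the paper).
neutral_xyz = rng.standard_normal((N, 3)).astype(np.float32)

# A tiny two-layer MLP standing in for the learned deformation field.
W1 = rng.standard_normal((3 + EXPR_DIM, HIDDEN)).astype(np.float32) * 0.1
b1 = np.zeros(HIDDEN, dtype=np.float32)
W2 = rng.standard_normal((HIDDEN, 3)).astype(np.float32) * 0.1
b2 = np.zeros(3, dtype=np.float32)

def deform(xyz: np.ndarray, expr: np.ndarray) -> np.ndarray:
    """Predict a per-Gaussian 3-D offset from position + expression code."""
    feat = np.concatenate(
        [xyz, np.broadcast_to(expr, (len(xyz), EXPR_DIM))], axis=1)
    h = np.maximum(feat @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                   # displacement per Gaussian

expr_code = rng.standard_normal(EXPR_DIM).astype(np.float32)
deformed_xyz = neutral_xyz + deform(neutral_xyz, expr_code)
print(deformed_xyz.shape)  # (1024, 3)
```

In the actual method both the neutral Gaussians and the deformation MLP are optimized end-to-end from multi-view video, so the two parts share the work of matching expression-dependent detail.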
Related papers
- Generalizable and Animatable Gaussian Head Avatar [50.34788590904843]
We propose Generalizable and Animatable Gaussian head Avatar (GAGAvatar) for one-shot animatable head avatar reconstruction.
We generate the parameters of 3D Gaussians from a single image in a single forward pass.
Our method exhibits superior performance compared to previous methods in terms of reconstruction quality and expression accuracy.
arXiv Detail & Related papers (2024-10-10T14:29:00Z) - Gaussian Deja-vu: Creating Controllable 3D Gaussian Head-Avatars with Enhanced Generalization and Personalization Abilities [10.816370283498287]
We introduce the "Gaussian Deja-vu" framework, which first obtains a generalized model of the head avatar and then personalizes the result.
For personalizing, we propose learnable expression-aware rectification blendmaps, ensuring rapid convergence without the reliance on neural networks.
It outperforms state-of-the-art 3D Gaussian head avatars in photorealistic quality and cuts training time to a quarter of that of existing methods or less.
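The "learnable expression-aware rectification blendmaps" described above suggest a personalization step that is a direct optimization rather than a neural network: a bank of learnable correction maps blended by expression coefficients. A minimal sketch under that assumption (the basis count, map shapes, and additive combination are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

K, N = 10, 256  # hypothetical: K expression bases, N Gaussians

# Learnable per-expression correction maps (one 3-D offset per Gaussian
# per basis); optimized directly, with no neural network in the loop.
blendmaps = rng.standard_normal((K, N, 3)).astype(np.float32) * 0.01
base_xyz = rng.standard_normal((N, 3)).astype(np.float32)

def personalize(expr_coeffs: np.ndarray) -> np.ndarray:
    """Expression-weighted sum of rectification maps, added to the base avatar."""
    offset = np.tensordot(expr_coeffs, blendmaps, axes=1)  # (N, 3)
    return base_xyz + offset

out = personalize(np.zeros(K, dtype=np.float32))
print(np.allclose(out, base_xyz))  # True: zero coefficients leave the base unchanged
```

Because the blend is linear in its learnable parameters, gradients are trivial, which is consistent with the rapid convergence the summary claims.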
arXiv Detail & Related papers (2024-09-23T00:11:30Z) - GPHM: Gaussian Parametric Head Model for Monocular Head Avatar Reconstruction [47.113910048252805]
High-fidelity 3D human head avatars are crucial for applications in VR/AR, digital human, and film production.
Recent advances have leveraged morphable face models to generate animated head avatars, representing varying identities and expressions.
We introduce 3D Gaussian Parametric Head Model, which employs 3D Gaussians to accurately represent the complexities of the human head.
arXiv Detail & Related papers (2024-07-21T06:03:11Z) - FAGhead: Fully Animate Gaussian Head from Monocular Videos [2.9979421496374683]
FAGhead is a method that enables fully controllable human portraits from monocular videos.
We combine explicit traditional 3D morphable meshes (3DMM) with optimized neutral 3D Gaussians to reconstruct complex expressions.
To effectively manage the edges of avatars, we introduce alpha rendering to supervise the alpha value of each pixel.
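The alpha supervision above presumably compares the per-pixel opacity accumulated during Gaussian compositing against a foreground mask. A minimal sketch of that idea, assuming front-to-back alpha compositing and a plain L1 mask loss (FAGhead's exact loss form and weighting are not given in the summary):

```python
import numpy as np

def composite_alpha(per_gaussian_alpha: np.ndarray) -> float:
    """Front-to-back alpha compositing for one pixel:
    accumulated alpha = 1 - prod(1 - a_i)."""
    transmittance = np.prod(1.0 - per_gaussian_alpha)
    return 1.0 - transmittance

def alpha_loss(rendered_alpha: np.ndarray, mask: np.ndarray) -> float:
    """L1 supervision of rendered per-pixel alpha against a foreground mask
    (assumed loss form; illustrative only)."""
    return float(np.abs(rendered_alpha - mask).mean())

# One pixel covered by three semi-transparent Gaussians:
a = composite_alpha(np.array([0.5, 0.5, 0.5]))  # 1 - 0.5**3 = 0.875

rendered = np.array([[a, 0.0], [1.0, 0.0]])
mask = np.array([[1.0, 0.0], [1.0, 0.0]])       # ground-truth silhouette
print(round(alpha_loss(rendered, mask), 5))      # 0.03125
```

Supervising alpha directly penalizes Gaussians that bleed past the silhouette, which is why it helps clean up avatar edges.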
arXiv Detail & Related papers (2024-06-27T10:40:35Z) - GVA: Reconstructing Vivid 3D Gaussian Avatars from Monocular Videos [56.40776739573832]
We present a novel method that facilitates the creation of vivid 3D Gaussian avatars from monocular video inputs (GVA).
Our innovation lies in addressing the intricate challenges of delivering high-fidelity human body reconstructions.
We introduce a pose refinement technique to improve hand and foot pose accuracy by aligning normal maps and silhouettes.
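A pose-refinement objective that "aligns normal maps and silhouettes" could plausibly combine a cosine misalignment term on unit normal maps with an L1 silhouette term. The following is a hypothetical sketch of such an objective; the term weighting and exact losses are assumptions, not GVA's published formulation:

```python
import numpy as np

def refinement_loss(pred_normals: np.ndarray, gt_normals: np.ndarray,
                    pred_sil: np.ndarray, gt_sil: np.ndarray,
                    w_sil: float = 1.0) -> float:
    """Cosine misalignment of unit normal maps plus an L1 silhouette term
    (assumed form of a normal/silhouette pose-refinement objective)."""
    normal_term = float((1.0 - np.sum(pred_normals * gt_normals, axis=-1)).mean())
    sil_term = float(np.abs(pred_sil - gt_sil).mean())
    return normal_term + w_sil * sil_term

n = np.zeros((4, 4, 3))
n[..., 2] = 1.0                       # all normals facing +z
sil = np.ones((4, 4))                 # fully covered silhouette
print(refinement_loss(n, n, sil, sil))  # 0.0 when perfectly aligned
```

Hand and foot poses are where monocular body trackers drift most, so a loss like this, minimized over pose parameters, pulls the rendered geometry back onto the observed contours.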
arXiv Detail & Related papers (2024-02-26T14:40:15Z) - GAvatar: Animatable 3D Gaussian Avatars with Implicit Mesh Learning [60.33970027554299]
Gaussian splatting has emerged as a powerful 3D representation that harnesses the advantages of both explicit (mesh) and implicit (NeRF) 3D representations.
In this paper, we seek to leverage Gaussian splatting to generate realistic animatable avatars from textual descriptions.
Our proposed method, GAvatar, enables the large-scale generation of diverse animatable avatars using only text prompts.
arXiv Detail & Related papers (2023-12-18T18:59:12Z) - MonoGaussianAvatar: Monocular Gaussian Point-based Head Avatar [44.125711148560605]
MonoGaussianAvatar is a novel approach that harnesses 3D Gaussian point representation and a Gaussian deformation field to learn explicit head avatars from monocular portrait videos.
Experiments demonstrate the superior performance of our method, which achieves state-of-the-art results among previous methods.
arXiv Detail & Related papers (2023-12-07T18:59:31Z) - GaussianHead: High-fidelity Head Avatars with Learnable Gaussian Derivation [35.39887092268696]
This paper presents a framework to model the dynamic human head with anisotropic 3D Gaussians.
In experiments, our method can produce high-fidelity renderings, outperforming state-of-the-art approaches in reconstruction, cross-identity reenactment, and novel view synthesis tasks.
arXiv Detail & Related papers (2023-12-04T05:24:45Z) - Learning Personalized High Quality Volumetric Head Avatars from Monocular RGB Videos [47.94545609011594]
We propose a method to learn a high-quality implicit 3D head avatar from a monocular RGB video captured in the wild.
Our hybrid pipeline combines the geometry prior and dynamic tracking of a 3DMM with a neural radiance field to achieve fine-grained control and photorealism.
arXiv Detail & Related papers (2023-04-04T01:10:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.