Generalizable and Animatable 3D Full-Head Gaussian Avatar from a Single Image
- URL: http://arxiv.org/abs/2601.12770v1
- Date: Mon, 19 Jan 2026 06:56:58 GMT
- Title: Generalizable and Animatable 3D Full-Head Gaussian Avatar from a Single Image
- Authors: Shuling Zhao, Dan Xu
- Abstract summary: Building 3D animatable head avatars from a single image is an important yet challenging problem. Existing methods generally collapse under large camera pose variations, compromising the realism of 3D avatars. We propose a new framework to tackle the novel setting of one-shot 3D full-head animatable avatar reconstruction in a single feed-forward pass.
- Score: 9.505520774467263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building 3D animatable head avatars from a single image is an important yet challenging problem. Existing methods generally collapse under large camera pose variations, compromising the realism of 3D avatars. In this work, we propose a new framework to tackle the novel setting of one-shot 3D full-head animatable avatar reconstruction in a single feed-forward pass, enabling real-time animation and simultaneous 360$^\circ$ rendering views. To facilitate efficient animation control, we model 3D head avatars with Gaussian primitives embedded on the surface of a parametric face model within the UV space. To obtain knowledge of full-head geometry and textures, we leverage rich 3D full-head priors within a pretrained 3D generative adversarial network (GAN) for global full-head feature extraction and multi-view supervision. To increase the fidelity of the 3D reconstruction of the input image, we take advantage of the symmetric nature of the UV space and human faces to fuse local fine-grained input image features with the global full-head textures. Extensive experiments demonstrate the effectiveness of our method, achieving high-quality 3D full-head modeling as well as real-time animation, thereby improving the realism of 3D talking avatars.
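The symmetry-based fusion of local input-image features with global full-head textures described in the abstract can be sketched roughly as follows. This is an illustrative sketch only: the array shapes, the horizontal-flip symmetry axis, and the visibility-weighted blending scheme are assumptions for exposition, not the authors' actual implementation.

```python
import numpy as np

def fuse_uv_features(local_feat, global_feat, visibility):
    """Blend local input-image features with global full-head features
    in UV space, exploiting left-right facial symmetry.

    Illustrative sketch; shapes and weighting are assumptions.

    local_feat:  (H, W, C) features unprojected from the input image
    global_feat: (H, W, C) features from a full-head prior (e.g. a 3D GAN)
    visibility:  (H, W) values in [0, 1], how well each texel is
                 observed in the input view
    """
    # Mirror the local features across the vertical UV axis: a texel
    # occluded in the input may be visible on its symmetric counterpart.
    mirrored = local_feat[:, ::-1, :]
    mirrored_vis = visibility[:, ::-1]

    # Prefer direct observations, then mirrored observations,
    # then fall back to the global full-head prior.
    w_local = visibility[..., None]
    w_mirror = (1.0 - w_local) * mirrored_vis[..., None]
    w_global = 1.0 - w_local - w_mirror

    return w_local * local_feat + w_mirror * mirrored + w_global * global_feat
```

For fully visible texels the result reduces to the input-image features, while fully occluded texels with no visible mirror counterpart fall back entirely to the global prior.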
Related papers
- From Blurry to Believable: Enhancing Low-quality Talking Heads with 3D Generative Priors [49.37666175170832]
We introduce SuperHead, a framework for enhancing low-resolution, animatable 3D head avatars. SuperHead synthesizes high-quality geometry and textures, while ensuring both 3D and temporal consistency. Experiments demonstrate that SuperHead generates avatars with fine-grained facial details under dynamic motions.
arXiv Detail & Related papers (2026-02-05T19:00:50Z) - MoGA: 3D Generative Avatar Prior for Monocular Gaussian Avatar Reconstruction [65.5412504339528]
MoGA is a novel method to reconstruct high-fidelity 3D Gaussian avatars from a single-view image. Our method surpasses state-of-the-art techniques and generalizes well to real-world scenarios.
arXiv Detail & Related papers (2025-07-31T14:36:24Z) - TeGA: Texture Space Gaussian Avatars for High-Resolution Dynamic Head Modeling [52.87836237427514]
Photoreal avatars are seen as a key component in emerging applications in telepresence, extended reality, and entertainment. We present a new high-detail 3D head avatar model that improves upon the state of the art.
arXiv Detail & Related papers (2025-05-08T22:10:27Z) - Avat3r: Large Animatable Gaussian Reconstruction Model for High-fidelity 3D Head Avatars [60.0866477932976]
We present Avat3r, which regresses a high-quality and animatable 3D head avatar from just a few input images. We make Large Reconstruction Models animatable and learn a powerful prior over 3D human heads from a large multi-view video dataset. We increase robustness by feeding input images with different expressions to our model during training, enabling the reconstruction of 3D head avatars from inconsistent inputs.
arXiv Detail & Related papers (2025-02-27T16:00:11Z) - FAGhead: Fully Animate Gaussian Head from Monocular Videos [2.9979421496374683]
FAGhead is a method that enables fully controllable human portraits from monocular videos.
We make explicit use of traditional 3D morphable meshes (3DMM) and optimize neutral 3D Gaussians to reconstruct complex expressions.
To effectively manage the edges of avatars, we introduce alpha rendering to supervise the alpha value of each pixel.
arXiv Detail & Related papers (2024-06-27T10:40:35Z) - Articulated 3D Head Avatar Generation using Text-to-Image Diffusion Models [107.84324544272481]
The ability to generate diverse 3D articulated head avatars is vital to a plethora of applications, including augmented reality, cinematography, and education.
Recent work on text-guided 3D object generation has shown great promise in addressing these needs.
We show that our diffusion-based articulated head avatars outperform state-of-the-art approaches for this task.
arXiv Detail & Related papers (2023-07-10T19:15:32Z) - DreamWaltz: Make a Scene with Complex 3D Animatable Avatars [68.49935994384047]
We present DreamWaltz, a novel framework for generating and animating complex 3D avatars given text guidance and parametric human body prior.
For animation, our method learns an animatable 3D avatar representation from abundant image priors of diffusion model conditioned on various poses.
arXiv Detail & Related papers (2023-05-21T17:59:39Z) - DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models [55.71306021041785]
We present DreamAvatar, a text-and-shape guided framework for generating high-quality 3D human avatars.
We leverage the SMPL model to provide shape and pose guidance for the generation.
We also jointly optimize the losses computed from the full body and from the zoomed-in 3D head to alleviate the common multi-face "Janus" problem.
arXiv Detail & Related papers (2023-04-03T12:11:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.