TexAvatars : Hybrid Texel-3D Representations for Stable Rigging of Photorealistic Gaussian Head Avatars
- URL: http://arxiv.org/abs/2512.21099v1
- Date: Wed, 24 Dec 2025 10:50:04 GMT
- Title: TexAvatars : Hybrid Texel-3D Representations for Stable Rigging of Photorealistic Gaussian Head Avatars
- Authors: Jaeseong Lee, Junyeong Ahn, Taewoong Kang, Jaegul Choo
- Abstract summary: TexAvatars is a hybrid representation that combines the explicit geometric grounding of analytic rigging with the spatial continuity of texel space. Our approach predicts local geometric attributes in UV space via CNNs, but drives 3D deformation through mesh-aware Jacobians. Our method achieves state-of-the-art performance under extreme pose and expression variations, demonstrating strong generalization in challenging head reenactment settings.
- Score: 47.957612931386926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Constructing drivable and photorealistic 3D head avatars has become a central task in AR/XR, enabling immersive and expressive user experiences. With the emergence of high-fidelity and efficient representations such as 3D Gaussians, recent works have pushed toward ultra-detailed head avatars. Existing approaches typically fall into two categories: rule-based analytic rigging or neural network-based deformation fields. While effective in constrained settings, both approaches often fail to generalize to unseen expressions and poses, particularly in extreme reenactment scenarios. Other methods constrain Gaussians to the global texel space of 3DMMs to reduce rendering complexity. However, these texel-based avatars tend to underutilize the underlying mesh structure. They apply minimal analytic deformation and rely heavily on neural regressors and heuristic regularization in UV space, which weakens geometric consistency and limits extrapolation to complex, out-of-distribution deformations. To address these limitations, we introduce TexAvatars, a hybrid avatar representation that combines the explicit geometric grounding of analytic rigging with the spatial continuity of texel space. Our approach predicts local geometric attributes in UV space via CNNs, but drives 3D deformation through mesh-aware Jacobians, enabling smooth and semantically meaningful transitions across triangle boundaries. This hybrid design separates semantic modeling from geometric control, resulting in improved generalization, interpretability, and stability. Furthermore, TexAvatars captures fine-grained expression effects, including muscle-induced wrinkles, glabellar lines, and realistic mouth cavity geometry, with high fidelity. Our method achieves state-of-the-art performance under extreme pose and expression variations, demonstrating strong generalization in challenging head reenactment settings.
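To make the hybrid texel-3D idea above concrete, here is a minimal sketch, assuming a PyTorch setup: a small CNN predicts per-texel Gaussian attributes in UV space, and the predicted local offsets are mapped into 3D through per-triangle Jacobians computed from a canonical and a posed (tracked) mesh. This is an illustrative reading of the abstract, not the authors' implementation; the module names, tensor shapes, FLAME-like mesh sizes, and the centroid-based anchoring are all assumptions.

```python
# Minimal sketch (not the authors' code): texel-space Gaussian attributes predicted
# by a CNN in UV space, then carried into 3D through per-triangle Jacobians of a
# deformed mesh. Module names, shapes, and mesh sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TexelAttributeCNN(nn.Module):
    """Predicts per-texel Gaussian attributes (offset, log-scale, rotation) in UV space."""
    def __init__(self, cond_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 + 3 + 4, 3, padding=1),  # offset(3) + log-scale(3) + quaternion(4)
        )

    def forward(self, cond_uv):                      # (B, cond_dim, H, W)
        offset, log_scale, quat = self.net(cond_uv).split([3, 3, 4], dim=1)
        return offset, log_scale, F.normalize(quat, dim=1)


def triangle_jacobians(verts_canon, verts_posed, faces):
    """Per-triangle linear maps J from canonical to posed tangent frames.

    Frames are built from two edges and the face normal; J = posed_frame @ inv(canon_frame).
    """
    def frames(v):
        e1 = v[faces[:, 1]] - v[faces[:, 0]]
        e2 = v[faces[:, 2]] - v[faces[:, 0]]
        n = F.normalize(torch.cross(e1, e2, dim=-1), dim=-1)
        return torch.stack([e1, e2, n], dim=-1)      # (F, 3, 3)
    Fc, Fp = frames(verts_canon), frames(verts_posed)
    return Fp @ torch.linalg.inv(Fc)                 # (F, 3, 3)


def deform_gaussians(mu_canon, offset_local, face_id, J, verts_canon, verts_posed, faces):
    """Move Gaussian centers with their parent triangle and map local offsets through J."""
    # Posed anchor: displace the canonical anchor by its triangle's centroid motion (simplified).
    centroid_canon = verts_canon[faces].mean(dim=1)  # (F, 3)
    centroid_posed = verts_posed[faces].mean(dim=1)
    anchor = mu_canon + (centroid_posed - centroid_canon)[face_id]
    # Mesh-aware offset: the CNN's texel-space offset is mapped through the triangle Jacobian.
    offset_3d = torch.einsum("nij,nj->ni", J[face_id], offset_local)
    return anchor + offset_3d


if __name__ == "__main__":
    H = 256
    V, Fn, N = 5023, 9976, 4096                      # FLAME-like sizes, arbitrary Gaussian count
    cnn = TexelAttributeCNN()
    offset_uv, log_scale, quat = cnn(torch.randn(1, 32, H, H))
    verts_canon = torch.randn(V, 3)
    verts_posed = verts_canon + 0.01 * torch.randn(V, 3)
    faces = torch.arange(Fn * 3).reshape(Fn, 3) % V  # synthetic, non-degenerate connectivity
    J = triangle_jacobians(verts_canon, verts_posed, faces)
    mu_canon = torch.randn(N, 3)
    face_id = torch.randint(0, Fn, (N,))
    # Sample each Gaussian's predicted offset at its UV coordinate (here: random texels).
    uv_idx = torch.randint(0, H, (N, 2))
    offs = offset_uv[0].permute(1, 2, 0)[uv_idx[:, 1], uv_idx[:, 0]]   # (N, 3)
    mu_posed = deform_gaussians(mu_canon, offs, face_id, J, verts_canon, verts_posed, faces)
    print(mu_posed.shape)                            # torch.Size([4096, 3])
```

In this reading, the CNN only supplies smooth, local attributes in texel space, while large-scale motion comes analytically from the mesh Jacobians; that separation of semantic modeling from geometric control is what the abstract credits for the improved generalization and stability.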
Related papers
- CAG-Avatar: Cross-Attention Guided Gaussian Avatars for High-Fidelity Head Reconstruction [7.698661374784336]
Animation techniques often rely on a "one-size-fits-all" global tuning approach. We introduce a Conditionally-Adaptive Fusion Module built on cross-attention. Experiments confirm a significant improvement in reconstruction fidelity, particularly for challenging regions such as teeth.
arXiv Detail & Related papers (2026-01-21T10:22:53Z)
- CaricatureGS: Exaggerating 3D Gaussian Splatting Faces With Gaussian Curvature [13.47263744740423]
A controllable 3D caricaturization framework for faces is introduced. We resort to 3D Gaussian Splatting (3DGS), which has recently been shown to produce realistic free-viewpoint avatars.
arXiv Detail & Related papers (2026-01-06T13:56:28Z)
- ImHead: A Large-scale Implicit Morphable Model for Localized Head Modeling [71.3859346921118]
imHead is a novel implicit 3DMM that not only models expressive 3D head avatars but also facilitates localized editing of the facial features. To train imHead, we curate a large-scale dataset of 4K distinct identities.
arXiv Detail & Related papers (2025-10-12T20:17:34Z)
- A Controllable 3D Deepfake Generation Framework with Gaussian Splatting [6.969908558294805]
We propose a novel 3D deepfake generation framework based on 3D Gaussian Splatting. It enables realistic, identity-preserving face swapping and reenactment in a fully controllable 3D space. Our approach bridges the gap between 3D modeling and deepfake synthesis, enabling new directions for scene-aware, controllable, and immersive visual forgeries.
arXiv Detail & Related papers (2025-09-15T06:34:17Z)
- MoGA: 3D Generative Avatar Prior for Monocular Gaussian Avatar Reconstruction [65.5412504339528]
MoGA is a novel method to reconstruct high-fidelity 3D Gaussian avatars from a single-view image. Our method surpasses state-of-the-art techniques and generalizes well to real-world scenarios.
arXiv Detail & Related papers (2025-07-31T14:36:24Z)
- GeoAvatar: Adaptive Geometrical Gaussian Splatting for 3D Head Avatar [7.382127185479743]
Existing methods struggle to adapt Gaussians to varying geometrical deviations across facial regions. We propose GeoAvatar, a framework for adaptive geometrical Gaussian Splatting. We also release DynamicFace, a video dataset with highly expressive facial motions.
arXiv Detail & Related papers (2025-07-24T07:41:40Z)
- TeGA: Texture Space Gaussian Avatars for High-Resolution Dynamic Head Modeling [52.87836237427514]
Photoreal avatars are seen as a key component in emerging applications in telepresence, extended reality, and entertainment. We present a new high-detail 3D head avatar model that improves upon the state of the art.
arXiv Detail & Related papers (2025-05-08T22:10:27Z)
- Creating Your Editable 3D Photorealistic Avatar with Tetrahedron-constrained Gaussian Splatting [17.908135908777325]
We introduce a framework that decouples the editing process into local spatial adaptation and realistic appearance learning. The framework combines the controllable explicit structure of tetrahedral grids with the high-precision rendering capabilities of 3D Gaussian Splatting. Both qualitative and quantitative experiments demonstrate the effectiveness and superiority of our approach in generating photorealistic 3D editable avatars.
arXiv Detail & Related papers (2025-04-29T03:56:36Z)
- MonoGSDF: Exploring Monocular Geometric Cues for Gaussian Splatting-Guided Implicit Surface Reconstruction [86.87464903285208]
We introduce MonoGSDF, a novel method that couples primitives with a neural Signed Distance Field (SDF) for high-quality reconstruction. To handle arbitrary-scale scenes, we propose a scaling strategy for robust generalization. Experiments on real-world datasets show that our method outperforms prior approaches while maintaining efficiency.
arXiv Detail & Related papers (2024-11-25T20:07:07Z)
- Hybrid Explicit Representation for Ultra-Realistic Head Avatars [55.829497543262214]
We introduce a novel approach to creating ultra-realistic head avatars and rendering them in real time. A UV-mapped 3D mesh is utilized to capture sharp and rich textures on smooth surfaces, while 3D Gaussian Splatting is employed to represent complex geometric structures. Experiments show that our modeled results exceed those of state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- GaussianHead: High-fidelity Head Avatars with Learnable Gaussian Derivation [35.39887092268696]
This paper presents a framework to model the actional human head with anisotropic 3D Gaussians. In experiments, our method can produce high-fidelity renderings, outperforming state-of-the-art approaches in reconstruction, cross-identity reenactment, and novel view synthesis tasks.
arXiv Detail & Related papers (2023-12-04T05:24:45Z)
- Learning Personalized High Quality Volumetric Head Avatars from Monocular RGB Videos [47.94545609011594]
We propose a method to learn a high-quality implicit 3D head avatar from a monocular RGB video captured in the wild.
Our hybrid pipeline combines the geometry prior and dynamic tracking of a 3DMM with a neural radiance field to achieve fine-grained control and photorealism.
arXiv Detail & Related papers (2023-04-04T01:10:04Z)