HairCUP: Hair Compositional Universal Prior for 3D Gaussian Avatars
- URL: http://arxiv.org/abs/2507.19481v1
- Date: Fri, 25 Jul 2025 17:59:53 GMT
- Title: HairCUP: Hair Compositional Universal Prior for 3D Gaussian Avatars
- Authors: Byungjun Kim, Shunsuke Saito, Giljoo Nam, Tomas Simon, Jason Saragih, Hanbyul Joo, Junxuan Li
- Abstract summary: We present a universal prior model for 3D head avatars with explicit hair compositionality. Our model's inherent compositionality enables seamless transfer of face and hair components between avatars.
- Score: 29.819374818200885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a universal prior model for 3D head avatars with explicit hair compositionality. Existing approaches to build generalizable priors for 3D head avatars often adopt a holistic modeling approach, treating the face and hair as an inseparable entity. This overlooks the inherent compositionality of the human head, making it difficult for the model to naturally disentangle face and hair representations, especially when the dataset is limited. Furthermore, such holistic models struggle to support applications like 3D face and hairstyle swapping in a flexible and controllable manner. To address these challenges, we introduce a prior model that explicitly accounts for the compositionality of face and hair, learning their latent spaces separately. A key enabler of this approach is our synthetic hairless data creation pipeline, which removes hair from studio-captured datasets using estimated hairless geometry and texture derived from a diffusion prior. By leveraging a paired dataset of hair and hairless captures, we train disentangled prior models for face and hair, incorporating compositionality as an inductive bias to facilitate effective separation. Our model's inherent compositionality enables seamless transfer of face and hair components between avatars while preserving identity. Additionally, we demonstrate that our model can be fine-tuned in a few-shot manner using monocular captures to create high-fidelity, hair-compositional 3D head avatars for unseen subjects. These capabilities highlight the practical applicability of our approach in real-world scenarios, paving the way for flexible and expressive 3D avatar generation.
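The core idea of the abstract — face and hair living in separate latent spaces so either component can be decoded and swapped independently — can be illustrated with a toy sketch. This is not the authors' implementation; the latent dimensions, Gaussian counts, and the `decode` stand-in (a fixed random projection instead of a learned network) are all hypothetical, chosen only to show the compositional structure.

```python
import numpy as np

LATENT_DIM = 64              # hypothetical latent code size
N_FACE, N_HAIR = 5000, 3000  # hypothetical Gaussian counts per component

def decode(latent, n_gaussians, seed):
    """Toy stand-in for a learned decoder: maps a latent code to
    per-Gaussian parameters (3D mean, 3D scale, opacity)."""
    g = np.random.default_rng(seed)
    W = g.standard_normal((LATENT_DIM, 7))            # fixed "weights"
    base = latent @ W                                 # (7,) summary vector
    params = g.standard_normal((n_gaussians, 7)) * 0.01 + base
    return {"mean": params[:, :3], "scale": params[:, 3:6],
            "opacity": params[:, 6:]}

class CompositionalAvatar:
    """Face and hair are held as separate latent codes, so either
    component can be replaced without touching the other."""
    def __init__(self, z_face, z_hair):
        self.z_face, self.z_hair = z_face, z_hair

    def gaussians(self):
        face = decode(self.z_face, N_FACE, seed=1)
        hair = decode(self.z_hair, N_HAIR, seed=2)
        # Composition = union of the two Gaussian sets.
        return {k: np.concatenate([face[k], hair[k]]) for k in face}

def swap_hair(target, source):
    """Transfer `source`'s hairstyle onto `target`'s face."""
    return CompositionalAvatar(target.z_face, source.z_hair)

rng = np.random.default_rng(0)
a = CompositionalAvatar(rng.standard_normal(LATENT_DIM),
                        rng.standard_normal(LATENT_DIM))
b = CompositionalAvatar(rng.standard_normal(LATENT_DIM),
                        rng.standard_normal(LATENT_DIM))
c = swap_hair(a, b)  # a's face, b's hairstyle
```

Because composition is a simple union of Gaussian sets keyed by two independent codes, hairstyle transfer reduces to exchanging one latent while the face latent (and hence identity) is preserved — the property the abstract highlights.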
Related papers
- 3DGH: 3D Head Generation with Composable Hair and Face [21.770533642873662]
3DGH is an unconditional generative model for 3D human heads with composable hair and face components. We propose to separate them using a novel data representation with template-based 3D Gaussian Splatting. We conduct extensive experiments to validate the design choice of 3DGH, and evaluate it both qualitatively and quantitatively.
arXiv Detail & Related papers (2025-06-25T22:53:52Z)
- FRESA: Feedforward Reconstruction of Personalized Skinned Avatars from Few Images [74.86864398919467]
We present a novel method for reconstructing personalized 3D human avatars with realistic animation from only a few images. We learn a universal prior from over a thousand clothed humans to achieve instant feedforward generation and zero-shot generalization. Our method generates more authentic reconstructions and animations than state-of-the-art methods, and generalizes directly to inputs from casually taken phone photos.
arXiv Detail & Related papers (2025-03-24T23:20:47Z)
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real-time.
At the core of our method is a hierarchical representation of head models that captures the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z)
- A Local Appearance Model for Volumetric Capture of Diverse Hairstyle [15.122893482253069]
Hair plays a significant role in personal identity and appearance, making it an essential component of high-quality, photorealistic avatars.
Existing approaches either focus on modeling the facial region only or rely on personalized models, limiting their generalizability and scalability.
We present a novel method for creating high-fidelity avatars with diverse hairstyles.
arXiv Detail & Related papers (2023-12-14T06:29:59Z)
- HHAvatar: Gaussian Head Avatar with Dynamic Hairs [27.20228210350169]
We propose HHAvatar, represented by controllable 3D Gaussians, for high-fidelity head avatars with dynamic hair modeling.
Our approach outperforms other state-of-the-art sparse-view methods, achieving ultra high-fidelity rendering quality at 2K resolution.
arXiv Detail & Related papers (2023-12-05T11:01:44Z)
- Text-Guided Generation and Editing of Compositional 3D Avatars [59.584042376006316]
Our goal is to create a realistic 3D facial avatar with hair and accessories using only a text description.
Existing methods either lack realism, produce unrealistic shapes, or do not support editing.
arXiv Detail & Related papers (2023-09-13T17:59:56Z)
- Learning Disentangled Avatars with Hybrid 3D Representations [102.9632315060652]
We present Disentangled Avatars (DELTA), which models humans with hybrid explicit-implicit 3D representations.
In the first setting, we consider the disentanglement of the human body and clothing; in the second, we disentangle the face and hair.
We show how these two applications can be easily combined to model full-body avatars.
arXiv Detail & Related papers (2023-09-12T17:59:36Z)
- Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z)
- i3DMM: Deep Implicit 3D Morphable Model of Human Heads [115.19943330455887]
We present the first deep implicit 3D morphable model (i3DMM) of full heads.
It not only captures identity-specific geometry, texture, and expressions of the frontal face, but also models the entire head, including hair.
We show the merits of i3DMM using ablation studies, comparisons to state-of-the-art models, and applications such as semantic head editing and texture transfer.
arXiv Detail & Related papers (2020-11-28T15:01:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.