StrandHead: Text to Strand-Disentangled 3D Head Avatars Using Hair Geometric Priors
- URL: http://arxiv.org/abs/2412.11586v2
- Date: Thu, 19 Dec 2024 03:43:18 GMT
- Title: StrandHead: Text to Strand-Disentangled 3D Head Avatars Using Hair Geometric Priors
- Authors: Xiaokun Sun, Zeyu Cai, Ying Tai, Jian Yang, Zhenyu Zhang
- Abstract summary: StrandHead is a novel text-to-3D head avatar generation method capable of generating disentangled 3D hair with a strand representation.
We show that StrandHead achieves state-of-the-art realism and diversity in generated 3D heads and hair.
The generated 3D hair can also be easily imported into Unreal Engine for physics simulation and other applications.
- Score: 33.00657081996672
- Abstract: While a haircut conveys distinct personality, existing avatar generation methods fail to model practical hair because they rely on general or entangled representations. We propose StrandHead, a novel text-to-3D head avatar generation method capable of generating disentangled 3D hair with a strand representation. Without using 3D data for supervision, we demonstrate that realistic hair strands can be generated from prompts by distilling 2D generative diffusion models. To this end, we propose a series of reliable priors on shape initialization, geometric primitives, and statistical haircut features, leading to stable optimization and text-aligned performance. Extensive experiments show that StrandHead achieves state-of-the-art realism and diversity in generated 3D heads and hair. The generated 3D hair can also be easily imported into Unreal Engine for physics simulation and other applications. The code will be available at https://xiaokunsun.github.io/StrandHead.github.io.
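The abstract's core mechanism, distilling a 2D diffusion prior into 3D parameters without any 3D supervision, is typically realized with score distillation sampling (SDS). The following is a minimal PyTorch sketch of that general recipe, not StrandHead's actual implementation; the `render` and `noise_predictor` callables and the timestep weighting are assumed stand-ins for a differentiable strand renderer and a frozen text-conditioned diffusion model.

```python
# Minimal sketch of score distillation sampling (SDS), the standard recipe for
# distilling a 2D diffusion model into 3D parameters. NOT StrandHead's code:
# `render` and `noise_predictor` are hypothetical stand-ins.
import torch
import torch.nn.functional as F

def sds_step(params, render, noise_predictor, text_emb, alphas_cumprod, optimizer):
    """One SDS update: render the 3D params, noise the render, and push it
    toward what the frozen diffusion prior believes the prompt looks like."""
    image = render(params)                            # (B, C, H, W), requires grad
    t = torch.randint(20, 980, (image.shape[0],))     # random diffusion timestep
    alpha_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(image)
    noisy = alpha_bar.sqrt() * image + (1 - alpha_bar).sqrt() * noise
    with torch.no_grad():                             # the diffusion prior is frozen
        eps_pred = noise_predictor(noisy, t, text_emb)
    w = 1 - alpha_bar                                 # common timestep weighting
    grad = w * (eps_pred - noise)                     # SDS gradient w.r.t. the image
    # Reparameterized loss whose gradient w.r.t. `image` equals `grad`.
    loss = 0.5 * F.mse_loss(image, (image - grad).detach(), reduction="sum")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In StrandHead's setting, `params` would correspond to the strand and head geometry being optimized, and the renderer must be differentiable end to end so the SDS gradient can reach the strands.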
Related papers
- Generating Editable Head Avatars with 3D Gaussian GANs [57.51487984425395]
Traditional 3D-aware generative adversarial networks (GANs) achieve photorealistic and view-consistent 3D head synthesis.
We propose a novel approach that enhances the editability and animation control of 3D head avatars by incorporating 3D Gaussian Splatting (3DGS) as an explicit 3D representation.
Our approach delivers high-quality 3D-aware synthesis with state-of-the-art controllability.
arXiv Detail & Related papers (2024-12-26T10:10:03Z)
- SimAvatar: Simulation-Ready Avatars with Layered Hair and Clothing [59.44721317364197]
We introduce SimAvatar, a framework designed to generate simulation-ready clothed 3D human avatars from a text prompt.
Our method is the first to produce highly realistic, fully simulation-ready 3D avatars, surpassing the capabilities of current approaches.
arXiv Detail & Related papers (2024-12-12T18:35:26Z)
- Human Hair Reconstruction with Strand-Aligned 3D Gaussians [39.32397354314153]
We introduce a new hair modeling method that uses a dual representation of classical hair strands and 3D Gaussians.
In contrast to recent approaches that leverage unstructured Gaussians to model human avatars, our method reconstructs the hair using 3D polylines, or strands.
Our method, named Gaussian Haircut, is evaluated on synthetic and real scenes and achieves state-of-the-art strand-based hair reconstruction (a toy sketch of the strand-to-Gaussian coupling appears after this list).
arXiv Detail & Related papers (2024-09-23T07:49:46Z)
- HeadStudio: Text to Animatable Head Avatars with 3D Gaussian Splatting [43.978358118034514]
HeadStudio is a framework that generates realistic and animatable avatars from text prompts.
The avatars support high-quality, real-time novel-view rendering at a resolution of 1024.
arXiv Detail & Related papers (2024-02-09T02:58:37Z)
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- Text-Guided Generation and Editing of Compositional 3D Avatars [59.584042376006316]
Our goal is to create a realistic 3D facial avatar with hair and accessories using only a text description.
Existing methods either lack realism, produce unrealistic shapes, or do not support editing.
arXiv Detail & Related papers (2023-09-13T17:59:56Z)
- Articulated 3D Head Avatar Generation using Text-to-Image Diffusion Models [107.84324544272481]
The ability to generate diverse 3D articulated head avatars is vital to a plethora of applications, including augmented reality, cinematography, and education.
Recent work on text-guided 3D object generation has shown great promise in addressing these needs.
We show that our diffusion-based articulated head avatars outperform state-of-the-art approaches for this task.
arXiv Detail & Related papers (2023-07-10T19:15:32Z)
- HeadSculpt: Crafting 3D Head Avatars with Text [143.14548696613886]
We introduce a versatile pipeline dubbed HeadSculpt for crafting 3D head avatars from textual prompts.
We first equip the diffusion model with 3D awareness by leveraging landmark-based control and a learned textual embedding.
We propose a novel identity-aware editing score distillation strategy to optimize a textured mesh with a high-resolution differentiable rendering technique.
arXiv Detail & Related papers (2023-06-05T16:53:58Z)
- NeuralHDHair: Automatic High-fidelity Hair Modeling from a Single Image Using Implicit Neural Representations [40.14104266690989]
We introduce NeuralHDHair, a flexible, fully automatic system for modeling high-fidelity hair from a single image.
We propose a novel voxel-aligned implicit function (VIFu) to represent the global hair feature.
To improve the efficiency of traditional hair growth algorithms, we adopt a local neural implicit function to grow strands based on the estimated 3D hair geometric features (a toy strand-growing loop is sketched after this list).
arXiv Detail & Related papers (2022-05-09T10:39:39Z)
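As referenced in the Gaussian Haircut entry above, one plausible way to couple classical strand polylines with 3D Gaussians is to place one anisotropic Gaussian per strand segment, oriented along the segment. The sketch below is illustrative only; the function name, covariance construction, and fixed cross-section radius are assumptions, not the paper's actual parameterization.

```python
# Toy sketch of a "dual representation": one anisotropic 3D Gaussian per strand
# segment, with its mean at the segment midpoint and its major axis along the
# segment direction. Illustrative only; not Gaussian Haircut's actual code.
import torch

def strands_to_gaussians(strands, radius=1e-3):
    """strands: (S, P, 3) tensor, S strands as polylines of P 3D points.
    Returns per-segment Gaussian means (S*(P-1), 3) and covariances (..., 3, 3)."""
    seg = strands[:, 1:] - strands[:, :-1]            # (S, P-1, 3) segment vectors
    means = 0.5 * (strands[:, 1:] + strands[:, :-1])  # segment midpoints
    length = seg.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    d = seg / length                                  # unit direction per segment
    # Covariance: long axis along the strand, thin isotropic cross-section,
    # i.e. sigma = (l/2)^2 * d d^T + r^2 * (I - d d^T).
    ddT = d.unsqueeze(-1) * d.unsqueeze(-2)           # (S, P-1, 3, 3)
    eye = torch.eye(3).expand_as(ddT)
    cov = (length.unsqueeze(-1) / 2) ** 2 * ddT + radius**2 * (eye - ddT)
    return means.reshape(-1, 3), cov.reshape(-1, 3, 3)
```

Tying the covariance to segment length and direction keeps the Gaussians attached to the strand geometry, so optimizing the polylines also moves the splats.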
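As referenced in the NeuralHDHair entry above, traditional hair growth algorithms trace strands step by step through a 3D orientation field. The toy loop below sketches that idea; `orientation_field` is a hypothetical callable standing in for the paper's learned local implicit function.

```python
# Toy strand-growing loop: march each strand root through a 3D orientation
# field, one small step at a time. In NeuralHDHair the field lookup would be a
# learned implicit function; here `orientation_field` is a stand-in callable.
import torch

def grow_strands(roots, orientation_field, n_steps=64, step=2e-3):
    """roots: (N, 3) scalp points. orientation_field: (N, 3) -> (N, 3) growth
    directions. Returns polylines of shape (N, n_steps + 1, 3)."""
    points = [roots]
    prev_dir = None
    for _ in range(n_steps):
        d = orientation_field(points[-1])
        d = d / d.norm(dim=-1, keepdim=True).clamp_min(1e-8)
        if prev_dir is not None:
            # Undirected fields carry a sign ambiguity; keep growth consistent.
            flip = (d * prev_dir).sum(-1, keepdim=True) < 0
            d = torch.where(flip, -d, d)
        points.append(points[-1] + step * d)
        prev_dir = d
    return torch.stack(points, dim=1)
```

For a quick smoke test, a constant field such as `lambda p: torch.zeros_like(p) + torch.tensor([0.0, -1.0, 0.0])` grows straight strands downward from the scalp points.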
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.