A Local Appearance Model for Volumetric Capture of Diverse Hairstyles
- URL: http://arxiv.org/abs/2312.08679v1
- Date: Thu, 14 Dec 2023 06:29:59 GMT
- Title: A Local Appearance Model for Volumetric Capture of Diverse Hairstyles
- Authors: Ziyan Wang, Giljoo Nam, Aljaz Bozic, Chen Cao, Jason Saragih, Michael Zollhoefer, Jessica Hodgins
- Abstract summary: Hair plays a significant role in personal identity and appearance, making it an essential component of high-quality, photorealistic avatars.
Existing approaches either focus on modeling the facial region only or rely on personalized models, limiting their generalizability and scalability.
We present a novel method for creating high-fidelity avatars with diverse hairstyles.
- Score: 15.122893482253069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hair plays a significant role in personal identity and appearance, making it
an essential component of high-quality, photorealistic avatars. Existing
approaches either focus on modeling the facial region only or rely on
personalized models, limiting their generalizability and scalability. In this
paper, we present a novel method for creating high-fidelity avatars with
diverse hairstyles. Our method leverages the local similarity across different
hairstyles and learns a universal hair appearance prior from multi-view
captures of hundreds of people. This prior model takes 3D-aligned features as
input and generates dense radiance fields conditioned on a sparse point cloud
with color. As our model splits different hairstyles into local primitives and
builds a prior at that level, it can handle a variety of hair topologies.
Through experiments, we demonstrate that our model captures a diverse range of
hairstyles and generalizes well to challenging new hairstyles. Empirical
results show that our method improves on state-of-the-art approaches in
capturing and generating photorealistic, personalized avatars with complete
hair.
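The abstract describes the architecture only at a high level, so the following PyTorch sketch is illustrative rather than an implementation of the paper's model: it shows one plausible way a local prior could decode a query point inside a hair primitive into density and color, conditioned on a 3D-aligned primitive feature and a sparse colored point cloud. All module names, pooling choices, and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class LocalHairPrior(nn.Module):
    """Illustrative sketch of a per-primitive radiance-field decoder.

    Maps query locations inside one hair primitive to density and RGB,
    conditioned on a 3D-aligned primitive feature and a sparse colored
    point cloud. Dimensions and pooling are assumptions, not the paper's.
    """

    def __init__(self, feat_dim=64, point_feat_dim=32, hidden=128):
        super().__init__()
        # Encode each sparse colored point (xyz + rgb) independently.
        self.point_encoder = nn.Sequential(
            nn.Linear(6, point_feat_dim), nn.ReLU(),
            nn.Linear(point_feat_dim, point_feat_dim),
        )
        # Decode (query location, primitive feature, pooled point feature)
        # into one density value and three color channels.
        self.decoder = nn.Sequential(
            nn.Linear(3 + feat_dim + point_feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (sigma, r, g, b)
        )

    def forward(self, query_xyz, primitive_feat, points_xyzrgb):
        # query_xyz:      (N, 3) samples in primitive-local coordinates.
        # primitive_feat: (feat_dim,) 3D-aligned feature for this primitive.
        # points_xyzrgb:  (P, 6) sparse colored points near the primitive.
        point_feat = self.point_encoder(points_xyzrgb).mean(dim=0)  # average pool
        cond = torch.cat([primitive_feat, point_feat])   # (feat_dim + point_feat_dim,)
        cond = cond.expand(query_xyz.shape[0], -1)       # broadcast to all queries
        out = self.decoder(torch.cat([query_xyz, cond], dim=-1))
        sigma = torch.relu(out[:, :1])    # non-negative volume density
        rgb = torch.sigmoid(out[:, 1:])   # colors in [0, 1]
        return sigma, rgb

# Shape check with dummy inputs.
model = LocalHairPrior()
sigma, rgb = model(torch.rand(256, 3), torch.randn(64), torch.rand(10, 6))
```

In a full system, many such primitives would tile the hair volume and their outputs would be composited with standard volume rendering; this sketch only illustrates the conditioning pathway the abstract describes.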
Related papers
- TANGLED: Generating 3D Hair Strands from Images with Arbitrary Styles and Viewpoints [38.95048174663582]
Existing text- or image-guided generation methods fail to handle the richness and complexity of diverse styles.
We present TANGLED, a novel approach for 3D hair strand generation that accommodates diverse image inputs across styles, viewpoints, and quantities of input views.
arXiv Detail & Related papers (2025-02-10T12:26:02Z)
- Hairmony: Fairness-aware hairstyle classification [10.230933455074634]
We present a method for predicting a person's hairstyle from a single image.
We use only synthetic data to train our models.
We introduce a novel hairstyle taxonomy developed in collaboration with a diverse group of domain experts.
arXiv Detail & Related papers (2024-10-15T12:00:36Z)
- What to Preserve and What to Transfer: Faithful, Identity-Preserving Diffusion-based Hairstyle Transfer [35.80645300182437]
Existing hairstyle transfer approaches rely on StyleGAN.
We propose a one-stage hairstyle transfer diffusion model, HairFusion, that applies to real-world scenarios.
Our method achieves state-of-the-art performance compared to the existing methods in preserving the integrity of both the transferred hairstyle and the surrounding features.
arXiv Detail & Related papers (2024-08-29T11:30:21Z)
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- Text-Guided Generation and Editing of Compositional 3D Avatars [59.584042376006316]
Our goal is to create a realistic 3D facial avatar with hair and accessories using only a text description.
Existing methods either lack realism, produce unrealistic shapes, or do not support editing.
arXiv Detail & Related papers (2023-09-13T17:59:56Z)
- VINECS: Video-based Neural Character Skinning [82.39776643541383]
We propose a fully automated approach for creating a fully rigged character with pose-dependent skinning weights.
We show that our approach outperforms state-of-the-art while not relying on dense 4D scans.
arXiv Detail & Related papers (2023-07-03T08:35:53Z)
- Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z)
- Style Your Hair: Latent Optimization for Pose-Invariant Hairstyle Transfer via Local-Style-Aware Hair Alignment [29.782276472922398]
We propose a pose-invariant hairstyle transfer model equipped with latent optimization and a newly proposed local-style-matching loss.
Our model has strengths in transferring a hairstyle under larger pose differences and preserving local hairstyle textures.
arXiv Detail & Related papers (2022-08-16T14:23:54Z)
- HairFIT: Pose-Invariant Hairstyle Transfer via Flow-based Hair Alignment and Semantic-Region-Aware Inpainting [26.688276902813495]
We propose a novel framework for pose-invariant hairstyle transfer, HairFIT.
Our model consists of two stages: 1) flow-based hair alignment and 2) hair synthesis.
Our semantic-region-aware inpainting mask (SIM) estimator divides the occluded regions in the source image into different semantic regions to reflect their distinct features during inpainting.
arXiv Detail & Related papers (2022-06-17T06:55:20Z)
- gDNA: Towards Generative Detailed Neural Avatars [94.9804106939663]
We show that our model is able to generate natural human avatars wearing diverse and detailed clothing.
Our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
arXiv Detail & Related papers (2022-01-11T18:46:38Z)
- MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing [122.82964863607938]
MichiGAN is a novel conditional image generation method for interactive portrait hair manipulation.
We provide user control over every major hair visual factor, including shape, structure, appearance, and background.
We also build an interactive portrait hair editing system that enables straightforward manipulation of hair by projecting intuitive and high-level user inputs.
arXiv Detail & Related papers (2020-10-30T17:59:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.