Im2Haircut: Single-view Strand-based Hair Reconstruction for Human Avatars
- URL: http://arxiv.org/abs/2509.01469v1
- Date: Mon, 01 Sep 2025 13:38:08 GMT
- Title: Im2Haircut: Single-view Strand-based Hair Reconstruction for Human Avatars
- Authors: Vanessa Sklyarova, Egor Zakharov, Malte Prinzler, Giorgio Becherini, Michael J. Black, Justus Thies
- Abstract summary: We present a novel approach for 3D hair reconstruction from single photographs based on a global hair prior combined with local optimization. We exploit this prior in a Gaussian-splatting-based method that reconstructs hairstyles from one or more images.
- Score: 60.99229760565975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel approach for 3D hair reconstruction from single photographs based on a global hair prior combined with local optimization. Capturing strand-based hair geometry from single photographs is challenging due to the variety and geometric complexity of hairstyles and the lack of ground truth training data. Classical reconstruction methods like multi-view stereo only reconstruct the visible hair strands, missing the inner structure of hairstyles and hampering realistic hair simulation. To address this, existing methods leverage hairstyle priors trained on synthetic data. Such data, however, is limited in both quantity and quality, since it requires manual work from skilled artists to model the 3D hairstyles and create near-photorealistic renderings. To address this, we propose a novel approach that uses both real and synthetic data to learn an effective hairstyle prior. Specifically, we train a transformer-based prior model on synthetic data to obtain knowledge of the internal hairstyle geometry and introduce real data in the learning process to model the outer structure. This training scheme is able to model the visible hair strands depicted in an input image while preserving the general 3D structure of hairstyles. We exploit this prior in a Gaussian-splatting-based method that reconstructs hairstyles from one or more images. Qualitative and quantitative comparisons with existing reconstruction pipelines demonstrate the effectiveness and superior performance of our method in capturing detailed hair orientation, overall silhouette, and backside consistency. For additional results and code, please refer to https://im2haircut.is.tue.mpg.de.
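To make the strand-based representation concrete: a hairstyle is a set of 3D polylines (strands), and Gaussian-splatting-based reconstruction attaches one 3D Gaussian per strand segment, centered at the segment midpoint and oriented along the strand tangent. The sketch below is a minimal, hypothetical illustration of that mapping in NumPy; the function name and array layout are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def strands_to_gaussian_params(strands):
    """Map strand polylines to per-segment Gaussian parameters.

    strands: (N, P, 3) array of N strands, each a polyline of P 3D points.
    Returns (means, directions): one Gaussian per segment, centered at the
    segment midpoint and oriented along the unit tangent of the strand.
    """
    starts = strands[:, :-1, :]                       # (N, P-1, 3) segment starts
    ends = strands[:, 1:, :]                          # (N, P-1, 3) segment ends
    means = 0.5 * (starts + ends)                     # segment midpoints
    dirs = ends - starts
    lengths = np.linalg.norm(dirs, axis=-1, keepdims=True)
    directions = dirs / np.clip(lengths, 1e-8, None)  # unit tangents
    return means.reshape(-1, 3), directions.reshape(-1, 3)

# Toy example: two straight vertical strands of 4 points each.
strands = np.stack([
    np.stack([np.zeros(4), np.zeros(4), np.linspace(0.0, 1.0, 4)], axis=-1),
    np.stack([np.ones(4), np.zeros(4), np.linspace(0.0, 1.0, 4)], axis=-1),
])
means, directions = strands_to_gaussian_params(strands)
print(means.shape, directions.shape)  # (6, 3) (6, 3)
```

In a full pipeline, the Gaussians' appearance and scale would be optimized against the input photograph while the prior constrains the unobserved inner and backside strands.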
Related papers
- HairGS: Hair Strand Reconstruction based on 3D Gaussian Splatting [50.93221272778306]
Human hair reconstruction is a challenging problem in computer vision. We extend the 3DGS framework to enable strand-level hair geometry reconstruction from multi-view images. Our method robustly handles a wide range of hairstyles and achieves efficient reconstruction, typically completing within one hour.
arXiv Detail & Related papers (2025-09-09T14:08:41Z) - DiffLocks: Generating 3D Hair from a Single Image using Diffusion Models [53.08138861924767]
We propose DiffLocks, a novel framework that enables reconstruction of a wide variety of hairstyles directly from a single image. First, we address the lack of 3D hair data by automating the creation of the largest synthetic hair dataset to date, containing 40K hairstyles. By using a pretrained image backbone, our method generalizes to in-the-wild images despite being trained only on synthetic data.
arXiv Detail & Related papers (2025-05-09T16:16:42Z) - Towards Unified 3D Hair Reconstruction from Single-View Portraits [27.404011546957104]
We propose a novel strategy to enable single-view 3D reconstruction for a variety of hair types via a unified pipeline.
Our experiments demonstrate that reconstructing braided and un-braided 3D hair from single-view images via a unified approach is possible.
arXiv Detail & Related papers (2024-09-25T12:21:31Z) - Human Hair Reconstruction with Strand-Aligned 3D Gaussians [39.32397354314153]
We introduce a new hair modeling method that uses a dual representation of classical hair strands and 3D Gaussians.
In contrast to recent approaches that leverage unstructured Gaussians to model human avatars, our method reconstructs the hair using 3D polylines, or strands.
Our method, named Gaussian Haircut, is evaluated on synthetic and real scenes and demonstrates state-of-the-art performance in the task of strand-based hair reconstruction.
arXiv Detail & Related papers (2024-09-23T07:49:46Z) - Perm: A Parametric Representation for Multi-Style 3D Hair Modeling [22.790597419351528]
Perm is a learned parametric representation of human 3D hair designed to facilitate various hair-related applications. We leverage our strand representation to fit and decompose hair geometry textures into low- to high-frequency hair structures.
arXiv Detail & Related papers (2024-07-28T10:05:11Z) - MonoHair: High-Fidelity Hair Modeling from a Monocular Video [40.27026803872373]
MonoHair is a generic framework to achieve high-fidelity hair reconstruction from a monocular video.
Our approach bifurcates the hair modeling process into two main stages: precise exterior reconstruction and interior structure inference.
Our experiments demonstrate that our method exhibits robustness across diverse hairstyles and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-03-27T08:48:47Z) - HAAR: Text-Conditioned Generative Model of 3D Strand-based Human
Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z) - Text-Guided Generation and Editing of Compositional 3D Avatars [59.584042376006316]
Our goal is to create a realistic 3D facial avatar with hair and accessories using only a text description.
Existing methods either lack realism, produce unrealistic shapes, or do not support editing.
arXiv Detail & Related papers (2023-09-13T17:59:56Z) - Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction [4.714310894654027]
This work proposes an approach capable of accurate hair geometry reconstruction at a strand level from a monocular video or multi-view images captured in uncontrolled conditions.
The combined system, named Neural Haircut, achieves high realism and personalization of the reconstructed hairstyles.
arXiv Detail & Related papers (2023-06-09T13:08:34Z) - HairStep: Transfer Synthetic to Real Using Strand and Depth Maps for Single-View 3D Hair Modeling [55.57803336895614]
We tackle the challenging problem of learning-based single-view 3D hair modeling.
We first propose a novel intermediate representation, termed as HairStep, which consists of a strand map and a depth map.
It is found that HairStep not only provides sufficient information for accurate 3D hair modeling, but can also be feasibly inferred from real images.
arXiv Detail & Related papers (2023-03-05T15:28:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.