GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans
- URL: http://arxiv.org/abs/2505.05376v2
- Date: Fri, 09 May 2025 21:25:03 GMT
- Title: GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans
- Authors: Rachmadio Noval Lazuardi, Artem Sevastopolsky, Egor Zakharov, Matthias Niessner, Vanessa Sklyarova
- Abstract summary: We propose a novel method that reconstructs hair strands directly from colorless 3D scans by leveraging multi-modal hair orientation extraction. We demonstrate that this combination of supervision signals enables accurate reconstruction of both simple and intricate hairstyles without relying on color information.
- Score: 4.498049448460985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel method that reconstructs hair strands directly from colorless 3D scans by leveraging multi-modal hair orientation extraction. Hair strand reconstruction is a fundamental problem in computer vision and graphics that can be used for high-fidelity digital avatar synthesis, animation, and AR/VR applications. However, accurately recovering hair strands from raw scan data remains challenging due to human hair's complex and fine-grained structure. Existing methods typically rely on RGB captures, which can be sensitive to the capture environment and make it difficult to extract the orientation of guiding strands, especially for intricate hairstyles. To reconstruct the hair purely from the observed geometry, our method finds sharp surface features directly on the scan and estimates strand orientation through a neural 2D line detector applied to the renderings of scan shading. Additionally, we incorporate a diffusion prior trained on a diverse set of synthetic hair scans, refined with an improved noise schedule, and adapted to the reconstructed contents via a scan-specific text prompt. We demonstrate that this combination of supervision signals enables accurate reconstruction of both simple and intricate hairstyles without relying on color information. To facilitate further research, we introduce Strands400, the largest publicly available dataset of hair strands with detailed surface geometry extracted from real-world data, which contains reconstructed hair strands from the scans of 400 subjects.
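As a rough illustration of the geometry-only orientation step, the sketch below estimates a per-pixel strand orientation map from a single shaded (colorless) rendering. A classical structure tensor stands in for the paper's neural 2D line detector, and all names are illustrative; combining several such maps with sharp surface features and the diffusion prior is beyond this sketch.

```python
import numpy as np
from scipy.ndimage import convolve

def strand_orientation_map(shading: np.ndarray, window: int = 7) -> np.ndarray:
    """Per-pixel dominant line orientation (radians) in a grayscale rendering.

    Stand-in for the paper's neural 2D line detector: a classical
    structure-tensor estimate over a local window.
    """
    gy, gx = np.gradient(shading.astype(np.float64))
    k = np.ones((window, window)) / window**2          # local averaging kernel
    jxx = convolve(gx * gx, k)
    jyy = convolve(gy * gy, k)
    jxy = convolve(gx * gy, k)
    # Dominant gradient direction of the structure tensor; strands run
    # perpendicular to it, hence the +pi/2.
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy) + np.pi / 2.0
```

In a full pipeline, orientation maps from many shaded viewpoints would be fused into a 3D orientation field that supervises strand growing.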
Related papers
- HairGS: Hair Strand Reconstruction based on 3D Gaussian Splatting [50.93221272778306]
Human hair reconstruction is a challenging problem in computer vision. We extend the 3DGS framework to enable strand-level hair geometry reconstruction from multi-view images. Our method robustly handles a wide range of hairstyles and achieves efficient reconstruction, typically completing within one hour.
arXiv Detail & Related papers (2025-09-09T14:08:41Z)
- Im2Haircut: Single-view Strand-based Hair Reconstruction for Human Avatars [60.99229760565975]
We present a novel approach for 3D hair reconstruction from single photographs based on a global hair prior combined with local optimization. We exploit this prior to build a Gaussian-splatting-based reconstruction method that creates hairstyles from one or more images.
arXiv Detail & Related papers (2025-09-01T13:38:08Z)
- PanoHair: Detailed Hair Strand Synthesis on Volumetric Heads [12.710733307422055]
Existing methods require a complex setup for data acquisition, involving multi-view images captured in constrained studio environments. We introduce PanoHair, a model that estimates head geometry as signed distance fields using knowledge distillation from a pre-trained generative teacher model for head synthesis.
arXiv Detail & Related papers (2025-08-26T11:36:14Z)
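PanoHair's exact training procedure is not given in this summary; the sketch below shows generic SDF knowledge distillation under assumed interfaces (a frozen `teacher_sdf` callable returning signed distances), with an eikonal term keeping the student a valid distance field.

```python
import torch
import torch.nn as nn

# Student SDF network: maps a 3D point to a signed distance.
student = nn.Sequential(
    nn.Linear(3, 256), nn.Softplus(beta=100),
    nn.Linear(256, 256), nn.Softplus(beta=100),
    nn.Linear(256, 1),
)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def distill_step(teacher_sdf, n: int = 4096) -> float:
    # Sample query points in an assumed normalized bounding box.
    pts = torch.empty(n, 3).uniform_(-1.0, 1.0).requires_grad_(True)
    pred = student(pts)
    with torch.no_grad():
        target = teacher_sdf(pts)            # frozen generative teacher (assumed API)
    grad = torch.autograd.grad(pred.sum(), pts, create_graph=True)[0]
    eikonal = ((grad.norm(dim=-1) - 1.0) ** 2).mean()   # |grad f| ~= 1 for an SDF
    loss = nn.functional.l1_loss(pred, target) + 0.1 * eikonal
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```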
- DiffLocks: Generating 3D Hair from a Single Image using Diffusion Models [53.08138861924767]
We propose DiffLocks, a novel framework that enables reconstruction of a wide variety of hairstyles directly from a single image. First, we address the lack of 3D hair data by automating the creation of the largest synthetic hair dataset to date, containing 40K hairstyles. By using a pretrained image backbone, our method generalizes to in-the-wild images despite being trained only on synthetic data.
arXiv Detail & Related papers (2025-05-09T16:16:42Z)
- Towards Unified 3D Hair Reconstruction from Single-View Portraits [27.404011546957104]
We propose a novel strategy to enable single-view 3D reconstruction for a variety of hair types via a unified pipeline.
Our experiments demonstrate that reconstructing braided and un-braided 3D hair from single-view images via a unified approach is possible.
arXiv Detail & Related papers (2024-09-25T12:21:31Z)
- Human Hair Reconstruction with Strand-Aligned 3D Gaussians [39.32397354314153]
We introduce a new hair modeling method that uses a dual representation of classical hair strands and 3D Gaussians.
In contrast to recent approaches that leverage unstructured Gaussians to model human avatars, our method reconstructs the hair using 3D polylines, or strands.
Our method, named Gaussian Haircut, is evaluated on synthetic and real scenes and demonstrates state-of-the-art performance in the task of strand-based hair reconstruction.
arXiv Detail & Related papers (2024-09-23T07:49:46Z)
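The dual representation can be pictured as follows: a minimal sketch (our own parameterization, not the paper's code) that covers each polyline segment of a strand with one elongated 3D Gaussian aligned to the segment.

```python
import numpy as np

def strand_to_gaussians(strand: np.ndarray, radius: float = 5e-4):
    """strand: (N, 3) polyline vertices -> list of (mean, rotation, scale)."""
    gaussians = []
    for a, b in zip(strand[:-1], strand[1:]):
        seg = b - a
        length = float(np.linalg.norm(seg))
        axis = seg / max(length, 1e-12)
        # Build an orthonormal frame whose first axis follows the segment.
        helper = (np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9
                  else np.array([0.0, 1.0, 0.0]))
        u = np.cross(axis, helper)
        u /= np.linalg.norm(u)
        v = np.cross(axis, u)
        rotation = np.stack([axis, u, v], axis=1)          # columns = frame
        scale = np.array([length / 2.0, radius, radius])   # long along, thin across
        gaussians.append(((a + b) / 2.0, rotation, scale))
    return gaussians
```

Tying Gaussian parameters to the underlying polylines keeps the splats structured, in contrast to the unstructured Gaussians mentioned above.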
- Dr.Hair: Reconstructing Scalp-Connected Hair Strands without Pre-training via Differentiable Rendering of Line Segments [23.71057752711745]
In the film and gaming industries, achieving a realistic hair appearance typically involves the use of strands originating from the scalp.
In this study, we propose an optimization-based approach that eliminates the need for pre-training.
Our method exhibits robust and accurate inverse rendering, surpassing the quality of existing methods and significantly improving processing speed.
arXiv Detail & Related papers (2024-03-26T08:53:25Z)
- Hybrid Explicit Representation for Ultra-Realistic Head Avatars [55.829497543262214]
We introduce a novel approach to creating ultra-realistic head avatars and rendering them in real-time. A UV-mapped 3D mesh is utilized to capture sharp and rich textures on smooth surfaces, while 3D Gaussian Splatting is employed to represent complex geometric structures. Experiments show that our modeled results exceed those of state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- Instant Multi-View Head Capture through Learnable Registration [62.70443641907766]
Existing methods for capturing datasets of 3D heads in dense semantic correspondence are slow.
We introduce TEMPEH to directly infer 3D heads in dense correspondence from calibrated multi-view images.
Predicting one head takes about 0.3 seconds with a median reconstruction error of 0.26 mm, 64% lower than the current state-of-the-art.
arXiv Detail & Related papers (2023-06-12T21:45:18Z)
- Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction [4.714310894654027]
This work proposes an approach capable of accurate hair geometry reconstruction at a strand level from a monocular video or multi-view images captured in uncontrolled conditions.
The combined system, named Neural Haircut, achieves high realism and personalization of the reconstructed hairstyles.
arXiv Detail & Related papers (2023-06-09T13:08:34Z)
- HairStep: Transfer Synthetic to Real Using Strand and Depth Maps for Single-View 3D Hair Modeling [55.57803336895614]
We tackle the challenging problem of learning-based single-view 3D hair modeling.
We first propose a novel intermediate representation, termed HairStep, which consists of a strand map and a depth map.
It is found that HairStep not only provides sufficient information for accurate 3D hair modeling, but is also feasible to infer from real images.
arXiv Detail & Related papers (2023-03-05T15:28:13Z)
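A small sketch of what such an intermediate representation might look like: a strand map encoding a unit 2D growth direction per hair pixel, stacked with a normalized depth map. The exact channel layout below is illustrative, not HairStep's actual encoding.

```python
import numpy as np

def hairstep_like_encoding(direction: np.ndarray, depth: np.ndarray,
                           hair_mask: np.ndarray) -> np.ndarray:
    """direction: (H, W, 2) unit 2D growth directions; depth, hair_mask: (H, W).
    Returns an (H, W, 4) map [mask, dx, dy, depth] with all values in [0, 1]."""
    mask = (hair_mask > 0).astype(np.float64)
    dxy = (direction + 1.0) / 2.0                 # map [-1, 1] -> [0, 1]
    d = depth.astype(np.float64).copy()
    inside = mask > 0
    if inside.any():                              # normalize depth over the hair region only
        lo, hi = d[inside].min(), d[inside].max()
        d = (d - lo) / max(hi - lo, 1e-8)
    enc = np.concatenate([mask[..., None], dxy, d[..., None]], axis=-1)
    return enc * mask[..., None]                  # zero out non-hair pixels
```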
- Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings.
Each of the three extensions introduced in the paper provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z)
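For context, Urban Radiance Fields builds on the standard NeRF volume-rendering model (textbook notation, not the paper's specific extensions): the color of a ray is a transmittance-weighted sum of per-sample colors,

$$\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i, \qquad T_i = \exp\Big(-\sum_{j<i} \sigma_j \delta_j\Big),$$

where $\sigma_i$ and $\mathbf{c}_i$ are the density and color predicted at the $i$-th sample along the ray and $\delta_i$ is the spacing between adjacent samples; the paper's extensions adapt and supervise this model for outdoor scan data.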
This list is automatically generated from the titles and abstracts of the papers on this site.