HairGS: Hair Strand Reconstruction based on 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2509.07774v1
- Date: Tue, 09 Sep 2025 14:08:41 GMT
- Title: HairGS: Hair Strand Reconstruction based on 3D Gaussian Splatting
- Authors: Yimin Pan, Matthias Nießner, Tobias Kirschstein
- Abstract summary: Human hair reconstruction is a challenging problem in computer vision. We extend the 3DGS framework to enable strand-level hair geometry reconstruction from multi-view images. Our method robustly handles a wide range of hairstyles and achieves efficient reconstruction, typically completing within one hour.
- Score: 50.93221272778306
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human hair reconstruction is a challenging problem in computer vision, with growing importance for applications in virtual reality and digital human modeling. Recent advances in 3D Gaussian Splatting (3DGS) provide efficient and explicit scene representations that naturally align with the structure of hair strands. In this work, we extend the 3DGS framework to enable strand-level hair geometry reconstruction from multi-view images. Our multi-stage pipeline first reconstructs detailed hair geometry using a differentiable Gaussian rasterizer, then merges individual Gaussian segments into coherent strands through a novel merging scheme, and finally refines and grows the strands under photometric supervision. While existing methods typically evaluate reconstruction quality at the geometric level, they often neglect the connectivity and topology of hair strands. To address this, we propose a new evaluation metric that serves as a proxy for assessing topological accuracy in strand reconstruction. Extensive experiments on both synthetic and real-world datasets demonstrate that our method robustly handles a wide range of hairstyles and achieves efficient reconstruction, typically completing within one hour. The project page can be found at: https://yimin-pan.github.io/hair-gs/
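The abstract describes merging individual Gaussian segments into coherent strands. The paper's actual merging scheme is not specified here, but the general idea of chaining short oriented segments into polylines can be sketched as follows. All function names, thresholds, and the greedy strategy are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def merge_segments(segments, dist_thresh=0.01, angle_thresh=0.9):
    """Greedily chain short line segments into longer polyline strands.

    `segments` is a list of (start, end) 3D point pairs. Two segments are
    joined when the end of one strand lies within `dist_thresh` of the
    start of a candidate segment and their directions agree (cosine
    similarity above `angle_thresh`). This is a hypothetical sketch of
    segment-to-strand merging, not the scheme proposed in the paper.
    """
    strands = []
    used = [False] * len(segments)
    for i, (s, e) in enumerate(segments):
        if used[i]:
            continue
        used[i] = True
        strand = [s, e]
        extended = True
        while extended:
            extended = False
            tail = strand[-1]
            direction = strand[-1] - strand[-2]
            direction = direction / (np.linalg.norm(direction) + 1e-12)
            for j, (s2, e2) in enumerate(segments):
                if used[j]:
                    continue
                d2 = e2 - s2
                n2 = np.linalg.norm(d2) + 1e-12
                # Accept a segment that starts near the strand tail and
                # points in roughly the same direction.
                if (np.linalg.norm(s2 - tail) < dist_thresh
                        and direction @ (d2 / n2) > angle_thresh):
                    used[j] = True
                    strand.append(e2)
                    extended = True
                    break
        strands.append(np.stack(strand))
    return strands
```

For example, three collinear touching segments along the z-axis would be chained into a single four-point polyline. A real pipeline would additionally handle branching ambiguities and grow strands from the scalp, as the abstract's refinement stage suggests.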
Related papers
- Im2Haircut: Single-view Strand-based Hair Reconstruction for Human Avatars [60.99229760565975]
We present a novel approach for 3D hair reconstruction from single photographs based on a global hair prior combined with local optimization. We exploit this prior to create a Gaussian-splatting-based reconstruction method that creates hairstyles from one or more images.
arXiv Detail & Related papers (2025-09-01T13:38:08Z)
- PanoHair: Detailed Hair Strand Synthesis on Volumetric Heads [12.710733307422055]
Existing methods require a complex setup for data acquisition, involving multi-view images captured in constrained studio environments. We introduce PanoHair, a model that estimates head geometry as signed distance fields using knowledge distillation from a pre-trained generative teacher model for head synthesis.
arXiv Detail & Related papers (2025-08-26T11:36:14Z)
- GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans [4.498049448460985]
We propose a novel method that reconstructs hair strands directly from colorless 3D scans by leveraging multi-modal hair orientation extraction. We demonstrate that this combination of supervision signals enables accurate reconstruction of both simple and intricate hairstyles without relying on color information.
arXiv Detail & Related papers (2025-05-08T16:11:09Z)
- MonoGSDF: Exploring Monocular Geometric Cues for Gaussian Splatting-Guided Implicit Surface Reconstruction [84.07233691641193]
We introduce MonoGSDF, a novel method that couples primitives with a neural Signed Distance Field (SDF) for high-quality reconstruction. To handle arbitrary-scale scenes, we propose a scaling strategy for robust generalization. Experiments on real-world datasets show that MonoGSDF outperforms prior methods while maintaining efficiency.
arXiv Detail & Related papers (2024-11-25T20:07:07Z)
- Large Spatial Model: End-to-end Unposed Images to Semantic 3D [79.94479633598102]
Large Spatial Model (LSM) processes unposed RGB images directly into semantic radiance fields.
LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward operation.
It can generate versatile label maps by interacting with language at novel viewpoints.
arXiv Detail & Related papers (2024-10-24T17:54:42Z)
- Human Hair Reconstruction with Strand-Aligned 3D Gaussians [39.32397354314153]
We introduce a new hair modeling method that uses a dual representation of classical hair strands and 3D Gaussians.
In contrast to recent approaches that leverage unstructured Gaussians to model human avatars, our method reconstructs the hair using 3D polylines, or strands.
Our method, named Gaussian Haircut, is evaluated on synthetic and real scenes and demonstrates state-of-the-art performance in the task of strand-based hair reconstruction.
arXiv Detail & Related papers (2024-09-23T07:49:46Z)
- Perm: A Parametric Representation for Multi-Style 3D Hair Modeling [22.790597419351528]
Perm is a learned parametric representation of human 3D hair designed to facilitate various hair-related applications. We leverage our strand representation to fit and decompose hair geometry textures into low- to high-frequency hair structures.
arXiv Detail & Related papers (2024-07-28T10:05:11Z)
- Hybrid Explicit Representation for Ultra-Realistic Head Avatars [55.829497543262214]
We introduce a novel approach to creating ultra-realistic head avatars and rendering them in real time. A UV-mapped 3D mesh is utilized to capture sharp and rich textures on smooth surfaces, while 3D Gaussian Splatting is employed to represent complex geometric structures. Experiments show that our modeled results exceed those of state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- GeoGS3D: Single-view 3D Reconstruction via Geometric-aware Diffusion Model and Gaussian Splatting [81.03553265684184]
We introduce GeoGS3D, a framework for reconstructing detailed 3D objects from single-view images.
We propose a novel metric, Gaussian Divergence Significance (GDS), to prune unnecessary operations during optimization.
Experiments demonstrate that GeoGS3D generates images with high consistency across views and reconstructs high-quality 3D objects.
arXiv Detail & Related papers (2024-03-15T12:24:36Z)
- Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction [4.714310894654027]
This work proposes an approach capable of accurate hair geometry reconstruction at a strand level from a monocular video or multi-view images captured in uncontrolled conditions.
The combined system, named Neural Haircut, achieves high realism and personalization of the reconstructed hairstyles.
arXiv Detail & Related papers (2023-06-09T13:08:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.