PanoHair: Detailed Hair Strand Synthesis on Volumetric Heads
- URL: http://arxiv.org/abs/2508.18944v1
- Date: Tue, 26 Aug 2025 11:36:14 GMT
- Title: PanoHair: Detailed Hair Strand Synthesis on Volumetric Heads
- Authors: Shashikant Verma, Shanmuganathan Raman
- Abstract summary: Existing methods require a complex setup for data acquisition, involving multi-view images captured in constrained studio environments. We introduce PanoHair, a model that estimates head geometry as signed distance fields using knowledge distillation from a pre-trained generative teacher model for head synthesis.
- Score: 12.710733307422055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Achieving realistic hair strand synthesis is essential for creating lifelike digital humans, but producing high-fidelity hair strand geometry remains a significant challenge. Existing methods require a complex setup for data acquisition, involving multi-view images captured in constrained studio environments. Additionally, these methods have longer hair volume estimation and strand synthesis times, which hinder efficiency. We introduce PanoHair, a model that estimates head geometry as signed distance fields using knowledge distillation from a pre-trained generative teacher model for head synthesis. Our approach enables the prediction of semantic segmentation masks and 3D orientations specifically for the hair region of the estimated geometry. Our method is generative and can generate diverse hairstyles with latent space manipulations. For real images, our approach involves an inversion process to infer latent codes and produces visually appealing hair strands, offering a streamlined alternative to complex multi-view data acquisition setups. Given the latent code, PanoHair generates a clean manifold mesh for the hair region in under 5 seconds, along with semantic and orientation maps, marking a significant improvement over existing methods, as demonstrated in our experiments.
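To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch in Python/NumPy of a PanoHair-style inference loop: a decoder maps a latent code and 3D query points to a signed distance, a semantic label, and a 3D orientation, and strands are then grown from scalp seeds by integrating the orientation field inside the predicted hair volume. The decoder here is a toy analytic stand-in (the abstract does not specify network architectures), and names such as `decode_head`, `grow_strands`, and the latent dimension are assumptions for illustration only; the mesh extraction and image-inversion steps mentioned in the abstract are omitted.
```python
# Hedged sketch (not the authors' code): a PanoHair-style inference loop.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64   # assumed size of the generative latent code
HAIR_LABEL = 1    # assumed semantic id for the hair region

def decode_head(z, pts):
    """Stand-in for the distilled decoder: maps a latent code and 3D query
    points to (signed distance, semantic label, 3D orientation).
    The latent code z is unused by this toy analytic stand-in."""
    # Toy head: a sphere of radius 0.5, with hair on the upper part.
    sdf = np.linalg.norm(pts, axis=-1) - 0.5
    sem = np.where(pts[:, 2] > 0.1, HAIR_LABEL, 0)
    # Toy orientation field: unit vectors pointing away from a point above the head.
    ori = pts - np.array([0.0, 0.0, 0.6])
    ori /= np.linalg.norm(ori, axis=-1, keepdims=True) + 1e-8
    return sdf, sem, ori

def grow_strands(z, seeds, n_steps=50, step=0.01):
    """Integrate the predicted orientation field outward from scalp seed points."""
    strands = []
    for seed in seeds:
        pts = [seed]
        for _ in range(n_steps):
            p = pts[-1][None, :]
            sdf, sem, ori = decode_head(z, p)
            # Stop once we leave the hair volume predicted by the decoder.
            if sdf[0] > 0.05 or sem[0] != HAIR_LABEL:
                break
            pts.append(pts[-1] + step * ori[0])
        strands.append(np.stack(pts))
    return strands

# Sample a latent code (for a real photo this would come from an inversion step).
z = rng.normal(size=LATENT_DIM)
# Seed points on the upper scalp (assumed seeding strategy).
theta = rng.uniform(0, 2 * np.pi, size=100)
phi = rng.uniform(0, 0.4 * np.pi, size=100)
seeds = 0.5 * np.stack([np.sin(phi) * np.cos(theta),
                        np.sin(phi) * np.sin(theta),
                        np.cos(phi)], axis=-1)
strands = grow_strands(z, seeds)
print(f"grew {len(strands)} strands, mean length "
      f"{np.mean([len(s) for s in strands]):.1f} points")
```
In the actual method, the decoder would be the distilled network, the hair surface would be meshed from the signed distance field, and the orientation and semantic maps would guide strand growth; this sketch only illustrates how those three predicted quantities fit together.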
Related papers
- HairGS: Hair Strand Reconstruction based on 3D Gaussian Splatting [50.93221272778306]
Human hair reconstruction is a challenging problem in computer vision. We extend the 3DGS framework to enable strand-level hair geometry reconstruction from multi-view images. Our method robustly handles a wide range of hairstyles and achieves efficient reconstruction, typically completing within one hour.
arXiv Detail & Related papers (2025-09-09T14:08:41Z) - Im2Haircut: Single-view Strand-based Hair Reconstruction for Human Avatars [60.99229760565975]
We present a novel approach for 3D hair reconstruction from single photographs based on a global hair prior combined with local optimization. We exploit this prior to create a Gaussian-splatting-based reconstruction method that creates hairstyles from one or more images.
arXiv Detail & Related papers (2025-09-01T13:38:08Z) - StrandDesigner: Towards Practical Strand Generation with Sketch Guidance [69.14408387191172]
We propose the first sketch-based strand generation model, which offers finer control while remaining user-friendly. Our framework tackles key challenges, such as modeling complex strand interactions and diverse sketch patterns. Experiments on several benchmark datasets show our method outperforms existing approaches in realism and precision.
arXiv Detail & Related papers (2025-08-03T08:17:50Z) - DiffLocks: Generating 3D Hair from a Single Image using Diffusion Models [53.08138861924767]
We propose DiffLocks, a novel framework that enables reconstruction of a wide variety of hairstyles directly from a single image. First, we address the lack of 3D hair data by automating the creation of the largest synthetic hair dataset to date, containing 40K hairstyles. By using a pretrained image backbone, our method generalizes to in-the-wild images despite being trained only on synthetic data.
arXiv Detail & Related papers (2025-05-09T16:16:42Z) - GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans [4.498049448460985]
We propose a novel method that reconstructs hair strands directly from colorless 3D scans by leveraging multi-modal hair orientation extraction. We demonstrate that this combination of supervision signals enables accurate reconstruction of both simple and intricate hairstyles without relying on color information.
arXiv Detail & Related papers (2025-05-08T16:11:09Z) - TANGLED: Generating 3D Hair Strands from Images with Arbitrary Styles and Viewpoints [38.95048174663582]
Existing text- or image-guided generation methods fail to handle the richness and complexity of diverse styles. We present TANGLED, a novel approach for 3D hair strand generation that accommodates diverse image inputs across styles, viewpoints, and quantities of input views.
arXiv Detail & Related papers (2025-02-10T12:26:02Z) - Synthetic Prior for Few-Shot Drivable Head Avatar Inversion [61.51887011274453]
We present SynShot, a novel method for the few-shot inversion of a drivable head avatar based on a synthetic prior. Inspired by machine learning models trained solely on synthetic data, we propose a method that learns a prior model from a large dataset of synthetic heads.
arXiv Detail & Related papers (2025-01-12T19:01:05Z) - Perm: A Parametric Representation for Multi-Style 3D Hair Modeling [22.790597419351528]
Perm is a learned parametric representation of human 3D hair designed to facilitate various hair-related applications. We leverage our strand representation to fit and decompose hair geometry textures into low- to high-frequency hair structures.
arXiv Detail & Related papers (2024-07-28T10:05:11Z) - Dr.Hair: Reconstructing Scalp-Connected Hair Strands without Pre-training via Differentiable Rendering of Line Segments [23.71057752711745]
In the film and gaming industries, achieving a realistic hair appearance typically involves the use of strands originating from the scalp.
In this study, we propose an optimization-based approach that eliminates the need for pre-training.
Our method exhibits robust and accurate inverse rendering, surpassing the quality of existing methods and significantly improving processing speed.
arXiv Detail & Related papers (2024-03-26T08:53:25Z) - Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z) - HairStep: Transfer Synthetic to Real Using Strand and Depth Maps for Single-View 3D Hair Modeling [55.57803336895614]
We tackle the challenging problem of learning-based single-view 3D hair modeling.
We first propose a novel intermediate representation, termed as HairStep, which consists of a strand map and a depth map.
It is found that HairStep not only provides sufficient information for accurate 3D hair modeling, but also is feasible to be inferred from real images.
arXiv Detail & Related papers (2023-03-05T15:28:13Z) - Neural Strands: Learning Hair Geometry and Appearance from Multi-View Images [40.91569888920849]
We present Neural Strands, a novel learning framework for modeling accurate hair geometry and appearance from multi-view image inputs.
The learned hair model can be rendered in real-time from any viewpoint with high-fidelity view-dependent effects.
arXiv Detail & Related papers (2022-07-28T13:08:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.