NeuralHDHair: Automatic High-fidelity Hair Modeling from a Single Image Using Implicit Neural Representations
- URL: http://arxiv.org/abs/2205.04175v1
- Date: Mon, 9 May 2022 10:39:39 GMT
- Title: NeuralHDHair: Automatic High-fidelity Hair Modeling from a Single Image Using Implicit Neural Representations
- Authors: Keyu Wu, Yifan Ye, Lingchen Yang, Hongbo Fu, Kun Zhou, Youyi Zheng
- Abstract summary: We introduce NeuralHDHair, a flexible, fully automatic system for modeling high-fidelity hair from a single image.
We propose a novel voxel-aligned implicit function (VIFu) to represent the global hair feature.
To improve the efficiency of a traditional hair growth algorithm, we adopt a local neural implicit function to grow strands based on the estimated 3D hair geometric features.
- Score: 40.14104266690989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Undoubtedly, high-fidelity 3D hair plays an indispensable role in
digital humans. However, existing monocular hair modeling methods are either
difficult to deploy in digital systems (e.g., due to their dependence on
complex user interactions or large databases) or can produce only a coarse
geometry. In this paper, we introduce NeuralHDHair, a flexible, fully
automatic system for modeling high-fidelity hair from a single image. The key
enablers of our system are two carefully designed neural networks: IRHairNet
(Implicit representation for hair using a neural network), which
hierarchically infers high-fidelity 3D hair geometric features (a 3D
orientation field and a 3D occupancy field), and GrowingNet (Growing hair
strands using a neural network), which efficiently generates 3D hair strands
in parallel. Specifically, we adopt a coarse-to-fine strategy and propose a
novel voxel-aligned implicit function (VIFu) to represent the global hair
features, which are further enhanced by local details extracted from a hair
luminance map. To improve the efficiency of traditional hair growth
algorithms, we adopt a local neural implicit function to grow strands based
on the estimated 3D hair geometric features. Extensive experiments show that
our method constructs a high-fidelity 3D hair model from a single image, both
efficiently and effectively, and achieves state-of-the-art performance.
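The pipeline the abstract describes decomposes into two queryable pieces: a voxel-aligned implicit function that maps any 3D point to (occupancy, orientation), and a growth procedure that marches strand points through that field in parallel. The PyTorch sketch below is not the authors' released code: the class and function names, network sizes, and the simplified global Euler march (the paper's GrowingNet grows strands with a local implicit function over latent voxel patches, not a single global field query) are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VoxelAlignedImplicit(nn.Module):
    """Decode (occupancy, 3D orientation) at arbitrary points from a feature volume.

    Hypothetical stand-in for the VIFu idea: features live on a voxel grid and
    are trilinearly interpolated at each query point before a small MLP decodes them.
    """

    def __init__(self, feat_dim: int = 32, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + 3),  # 1 occupancy logit + 3 orientation components
        )

    def forward(self, feat_volume: torch.Tensor, points: torch.Tensor):
        # feat_volume: (1, C, D, H, W); points: (P, 3), coordinates in [-1, 1]^3
        grid = points.view(1, 1, 1, -1, 3)                  # grid_sample layout
        feats = F.grid_sample(feat_volume, grid, align_corners=True)
        feats = feats.view(feat_volume.shape[1], -1).t()    # -> (P, C)
        out = self.mlp(torch.cat([feats, points], dim=-1))  # -> (P, 4)
        occupancy = torch.sigmoid(out[:, :1])               # in (0, 1)
        orientation = F.normalize(out[:, 1:], dim=-1)       # unit 3D direction
        return occupancy, orientation


@torch.no_grad()
def grow_strands(model, feat_volume, roots, step=0.01, max_steps=200, occ_thresh=0.5):
    """Grow all strands in parallel: march each scalp root along the predicted
    orientation field, freezing a strand once it leaves the predicted hair volume."""
    points = roots.clone()                                  # (P, 3) root positions
    alive = torch.ones(len(points), dtype=torch.bool)
    polyline = [points.clone()]
    for _ in range(max_steps):
        occupancy, orientation = model(feat_volume, points)
        alive &= occupancy.squeeze(-1) > occ_thresh         # exited the hair volume?
        points = points + step * orientation * alive.unsqueeze(-1).float()
        polyline.append(points.clone())
        if not alive.any():
            break
    return torch.stack(polyline, dim=1)                     # (P, T, 3) strand polylines
```

Because every strand advances by one batched network query per step, the growth loop costs a fixed number of forward passes regardless of strand count, which is the efficiency argument the abstract makes against traditional sequential hair growth algorithms.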
Related papers
- Human Hair Reconstruction with Strand-Aligned 3D Gaussians [39.32397354314153]
We introduce a new hair modeling method that uses a dual representation of classical hair strands and 3D Gaussians.
In contrast to recent approaches that leverage unstructured Gaussians to model human avatars, our method reconstructs the hair using 3D polylines, or strands.
Our method, named Gaussian Haircut, is evaluated on synthetic and real scenes and demonstrates state-of-the-art performance in the task of strand-based hair reconstruction.
arXiv Detail & Related papers (2024-09-23T07:49:46Z)
- ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling [96.87575334960258]
ID-to-3D is a method to generate identity- and text-guided 3D human heads with disentangled expressions.
Results achieve an unprecedented level of identity-consistent and high-quality texture and geometry generation.
arXiv Detail & Related papers (2024-05-26T13:36:45Z)
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- HHAvatar: Gaussian Head Avatar with Dynamic Hairs [27.20228210350169]
We propose HHAvatar, represented by controllable 3D Gaussians, for high-fidelity head avatars with dynamic hair modeling.
Our approach outperforms other state-of-the-art sparse-view methods, achieving ultra high-fidelity rendering quality at 2K resolution.
arXiv Detail & Related papers (2023-12-05T11:01:44Z)
- NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z)
- Free-HeadGAN: Neural Talking Head Synthesis with Explicit Gaze Control [54.079327030892244]
Free-HeadGAN is a person-generic neural talking head synthesis system.
We show that modeling faces with sparse 3D facial landmarks is sufficient for achieving state-of-the-art generative performance.
arXiv Detail & Related papers (2022-08-03T16:46:08Z)
- Neural Strands: Learning Hair Geometry and Appearance from Multi-View Images [40.91569888920849]
We present Neural Strands, a novel learning framework for modeling accurate hair geometry and appearance from multi-view image inputs.
The learned hair model can be rendered in real-time from any viewpoint with high-fidelity view-dependent effects.
arXiv Detail & Related papers (2022-07-28T13:08:46Z)
- HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture [11.645769995924548]
Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, complex physical interactions, and non-trivial visual appearance.
In this paper, we use a novel volumetric hair representation that is composed of thousands of primitives.
Our method can not only create realistic renders of recorded multi-view sequences, but also create renderings for new hair configurations by providing new control signals.
arXiv Detail & Related papers (2021-12-13T18:57:50Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works; a minimal sphere-tracing loop of the kind being accelerated is sketched below.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
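The last entry's speed claim concerns the inner loop of SDF rendering: sphere tracing, which steps along each ray by the queried distance value. The sketch below is illustrative only and not code from that paper: an analytic unit-sphere SDF stands in for the neural SDF, and all names and constants (sphere_trace, max_steps, eps) are hypothetical.

```python
import numpy as np


def sdf_sphere(p: np.ndarray, radius: float = 1.0) -> float:
    # Signed distance to a sphere at the origin. A neural SDF would evaluate
    # a learned network (in Neural Geometric LOD, multi-resolution octree
    # features plus a small decoder) at p instead of this closed form.
    return float(np.linalg.norm(p)) - radius


def sphere_trace(origin, direction, sdf=sdf_sphere, max_steps=64, eps=1e-4, t_max=10.0):
    """March along origin + t * direction, stepping by the SDF value.
    The SDF value bounds the distance to the nearest surface, so a step
    of exactly that size can never cross the surface."""
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:       # close enough to the surface: report a hit
            return p
        t += d            # safe step: the distance to the nearest surface
        if t > t_max:     # left the scene bounds: report a miss
            break
    return None


hit = sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(hit)  # ~[0, 0, -1], the near surface of the unit sphere
```

Renderers of the kind described above gain their speed by making each sdf(p) query cheap, since this loop runs per pixel per frame.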