GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians
- URL: http://arxiv.org/abs/2402.10483v1
- Date: Fri, 16 Feb 2024 07:13:24 GMT
- Title: GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians
- Authors: Haimin Luo, Min Ouyang, Zijun Zhao, Suyi Jiang, Longwen Zhang, Qixuan Zhang, Wei Yang, Lan Xu, Jingyi Yu
- Abstract summary: This paper presents GaussianHair, a novel explicit hair representation.
It enables comprehensive modeling of hair geometry and appearance from images, fostering innovative illumination effects and dynamic animation capabilities.
We further enhance this model with the "GaussianHair Scattering Model", adept at recreating the slender structure of hair strands and accurately capturing their local diffuse color in uniform lighting.
- Score: 41.52673678183542
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hairstyle reflects culture and ethnicity at first glance. In the digital era,
various realistic human hairstyles are also critical to high-fidelity digital
human assets for beauty and inclusivity. Yet, realistic hair modeling and
real-time rendering for animation remain formidable challenges due to the
sheer number of strands, the complicated geometric structure of hair, and its
sophisticated interaction with light. This paper presents GaussianHair, a novel explicit hair
representation. It enables comprehensive modeling of hair geometry and
appearance from images, fostering innovative illumination effects and dynamic
animation capabilities. At the heart of GaussianHair is the novel concept of
representing each hair strand as a sequence of connected cylindrical 3D
Gaussian primitives. This approach not only retains the hair's geometric
structure and appearance but also allows for efficient rasterization onto a 2D
image plane, facilitating differentiable volumetric rendering. We further
enhance this model with the "GaussianHair Scattering Model", adept at
recreating the slender structure of hair strands and accurately capturing their
local diffuse color in uniform lighting. Through extensive experiments, we
substantiate that GaussianHair achieves breakthroughs in both geometric and
appearance fidelity, transcending the limitations encountered in
state-of-the-art methods for hair reconstruction. Beyond representation,
GaussianHair extends to support editing, relighting, and dynamic rendering of
hair, offering seamless integration with conventional CG pipeline workflows.
Complementing these advancements, we have compiled an extensive dataset of real
human hair, each with meticulously detailed strand geometry, to propel further
research in this field.
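To make the core idea above concrete, the following minimal Python sketch shows one way a polyline strand could be approximated by a chain of elongated, cylinder-like 3D Gaussians, with each segment contributing a Gaussian whose mean is the segment midpoint and whose longest axis follows the segment direction. This is not the authors' code: the function name strand_to_gaussians, the helper-vector trick for building the local frame, and the fixed strand radius are illustrative assumptions.

```python
# Illustrative sketch (not the authors' implementation): approximate one hair
# strand, given as a 3D polyline, by a chain of elongated 3D Gaussians.
import numpy as np

def strand_to_gaussians(points: np.ndarray, radius: float = 1e-3):
    """points: (N, 3) polyline vertices of one strand; returns per-segment
    Gaussian parameters (means, rotation matrices, axis-aligned scales)."""
    starts, ends = points[:-1], points[1:]
    means = 0.5 * (starts + ends)                  # segment midpoints
    dirs = ends - starts
    lengths = np.linalg.norm(dirs, axis=1, keepdims=True)
    axes = dirs / np.clip(lengths, 1e-12, None)    # unit direction per segment

    rotations, scales = [], []
    for axis, length in zip(axes, lengths[:, 0]):
        # Build an orthonormal frame whose first column is the strand direction.
        helper = np.array([0.0, 0.0, 1.0])
        if abs(axis @ helper) > 0.9:               # avoid a degenerate cross product
            helper = np.array([0.0, 1.0, 0.0])
        u = np.cross(axis, helper)
        u /= np.linalg.norm(u)
        v = np.cross(axis, u)
        rotations.append(np.stack([axis, u, v], axis=1))
        # Elongated along the segment, thin across it -> a cylinder-like Gaussian.
        scales.append(np.array([0.5 * length, radius, radius]))
    return means, np.stack(rotations), np.stack(scales)

# Example: a straight 5 cm strand sampled at 1 cm intervals.
strand = np.stack([np.linspace(0.0, 0.05, 6), np.zeros(6), np.zeros(6)], axis=1)
means, rotations, scales = strand_to_gaussians(strand)
print(means.shape, rotations.shape, scales.shape)   # (5, 3) (5, 3, 3) (5, 3)
```

In a full pipeline such per-segment means, rotations, and scales would then be passed to a differentiable Gaussian rasterizer and optimized against the input images; that step is outside the scope of this sketch.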
Related papers
- Human Hair Reconstruction with Strand-Aligned 3D Gaussians [39.32397354314153]
We introduce a new hair modeling method that uses a dual representation of classical hair strands and 3D Gaussians.
In contrast to recent approaches that leverage unstructured Gaussians to model human avatars, our method reconstructs the hair using 3D polylines, or strands.
Our method, named Gaussian Haircut, is evaluated on synthetic and real scenes and demonstrates state-of-the-art performance in the task of strand-based hair reconstruction.
arXiv Detail & Related papers (2024-09-23T07:49:46Z)
- GaussianStyle: Gaussian Head Avatar via StyleGAN [64.85782838199427]
We propose a novel framework that integrates the volumetric strengths of 3DGS with the powerful implicit representation of StyleGAN.
We show that our method achieves state-of-the-art performance in reenactment, novel view synthesis, and animation.
arXiv Detail & Related papers (2024-02-01T18:14:42Z)
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- TriHuman: A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis [76.73338151115253]
TriHuman is a novel human-tailored, deformable, and efficient tri-plane representation.
We non-rigidly warp global ray samples into our undeformed tri-plane texture space.
We show how such a tri-plane feature representation can be conditioned on the skeletal motion to account for dynamic appearance and geometry changes.
arXiv Detail & Related papers (2023-12-08T16:40:38Z)
- GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians [51.46168990249278]
We present an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video.
GaussianAvatar is validated on both the public dataset and our collected dataset.
arXiv Detail & Related papers (2023-12-04T18:55:45Z)
- Text-Guided Generation and Editing of Compositional 3D Avatars [59.584042376006316]
Our goal is to create a realistic 3D facial avatar with hair and accessories using only a text description.
Existing methods either lack realism, produce unrealistic shapes, or do not support editing.
arXiv Detail & Related papers (2023-09-13T17:59:56Z)
- Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction [4.714310894654027]
This work proposes an approach capable of accurate hair geometry reconstruction at a strand level from a monocular video or multi-view images captured in uncontrolled conditions.
The combined system, named Neural Haircut, achieves high realism and personalization of the reconstructed hairstyles.
arXiv Detail & Related papers (2023-06-09T13:08:34Z)
- Neural Strands: Learning Hair Geometry and Appearance from Multi-View Images [40.91569888920849]
We present Neural Strands, a novel learning framework for modeling accurate hair geometry and appearance from multi-view image inputs.
The learned hair model can be rendered in real-time from any viewpoint with high-fidelity view-dependent effects.
arXiv Detail & Related papers (2022-07-28T13:08:46Z)
- NeuralHDHair: Automatic High-fidelity Hair Modeling from a Single Image Using Implicit Neural Representations [40.14104266690989]
We introduce NeuralHDHair, a flexible, fully automatic system for modeling high-fidelity hair from a single image.
We propose a novel voxel-aligned implicit function (VIFu) to represent the global hair feature.
To improve the efficiency of a traditional hair growth algorithm, we adopt a local neural implicit function to grow strands based on the estimated 3D hair geometric features.
arXiv Detail & Related papers (2022-05-09T10:39:39Z)
- HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture [11.645769995924548]
Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, complex physical interactions, and non-trivial visual appearance.
In this paper, we use a novel volumetric hair representation composed of thousands of primitives.
Our method can not only create realistic renders of recorded multi-view sequences, but also create renderings for new hair configurations by providing new control signals.
arXiv Detail & Related papers (2021-12-13T18:57:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.