HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair
Performance Capture
- URL: http://arxiv.org/abs/2112.06904v2
- Date: Wed, 15 Dec 2021 00:46:24 GMT
- Title: HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair
Performance Capture
- Authors: Ziyan Wang, Giljoo Nam, Tuur Stuyck, Stephen Lombardi, Michael
Zollhoefer, Jessica Hodgins, Christoph Lassner
- Abstract summary: Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, complex physical interactions, and non-trivial visual appearance.
In this paper, we use a novel volumetric hair representation that is composed of thousands of primitives.
Our method can not only create realistic renders of recorded multi-view sequences, but also create renderings for new hair configurations when provided with new control signals.
- Score: 11.645769995924548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Capturing and rendering life-like hair is particularly challenging due to its
fine geometric structure, complex physical interactions, and non-trivial
visual appearance. Yet, hair is a critical component for believable avatars. In
this paper, we address the aforementioned problems: 1) we use a novel
volumetric hair representation that is composed of thousands of primitives.
Each primitive can be rendered efficiently, yet realistically, by building on
the latest advances in neural rendering. 2) To have a reliable control signal,
we present a novel way of tracking hair on the strand level. To keep the
computational effort manageable, we use guide hairs and classic techniques to
expand those into a dense hood of hair. 3) To better enforce temporal
consistency and generalization ability of our model, we further optimize the 3D
scene flow of our representation with multi-view optical flow, using volumetric
ray marching. Our method can not only create realistic renders of recorded
multi-view sequences, but also create renderings for new hair configurations
when provided with new control signals. We compare our method with existing work on
viewpoint synthesis and drivable animation and achieve state-of-the-art
results. Please check out our project website at
https://ziyanw1.github.io/hvh/.
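To make ingredient 1) concrete, the following is a minimal Python/NumPy sketch of rendering a set of volumetric primitives by ray marching. It is illustrative only: the payloads are plain arrays rather than the output of a neural decoder, sampling is nearest-voxel instead of trilinear, and all names are invented, not the authors' code.

import numpy as np

class Primitive:
    """One volumetric primitive: a small RGBA voxel grid placed in world space."""
    def __init__(self, center, half_size, rgba):
        self.center = center        # (3,) world-space position
        self.half_size = half_size  # scalar half-extent of the cube
        self.rgba = rgba            # (R, R, R, 4) grid: RGB plus density

    def sample(self, p):
        """Nearest-voxel lookup at world point p, or None if p is outside."""
        local = (p - self.center) / self.half_size  # map into [-1, 1]^3
        if np.any(np.abs(local) > 1.0):
            return None
        res = self.rgba.shape[0]
        idx = np.clip(((local + 1) / 2 * (res - 1)).round().astype(int), 0, res - 1)
        v = self.rgba[idx[0], idx[1], idx[2]]
        return v[:3], v[3]

def march_ray(origin, direction, primitives, t_near=0.0, t_far=4.0, n_steps=128):
    """Front-to-back alpha compositing along one ray (direction assumed unit length)."""
    dt = (t_far - t_near) / n_steps
    color, transmittance = np.zeros(3), 1.0
    for i in range(n_steps):
        p = origin + (t_near + (i + 0.5) * dt) * direction
        for prim in primitives:  # a real system would cull primitives via a BVH here
            out = prim.sample(p)
            if out is None:
                continue
            rgb, sigma = out
            alpha = 1.0 - np.exp(-sigma * dt)
            color += transmittance * alpha * rgb
            transmittance *= 1.0 - alpha
        if transmittance < 1e-3:  # early ray termination
            break
    return color

In the actual method the per-primitive payloads would be decoded by a network each frame and sampled trilinearly; the structure of the compositing loop is the transferable part.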
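Ingredient 2), expanding guide hairs into a dense hood of hair, is classically done by interpolating strand shapes from nearby guides. A hedged sketch, assuming simple inverse-distance weights over the k nearest guide roots (the paper does not specify this exact scheme):

import numpy as np

def densify(guide_roots, guide_strands, dense_roots, k=3):
    """Interpolate dense strands from guide strands.
    guide_roots:   (G, 3) scalp positions of the guide strands
    guide_strands: (G, V, 3) polyline vertices per guide strand
    dense_roots:   (D, 3) scalp positions where dense strands grow
    Returns a (D, V, 3) array of interpolated dense strands."""
    dense = np.zeros((len(dense_roots),) + guide_strands.shape[1:])
    # Blend strand *shapes* (root-relative offsets), then re-attach at each root.
    offsets = guide_strands - guide_roots[:, None, :]
    for d, root in enumerate(dense_roots):
        dist = np.linalg.norm(guide_roots - root, axis=1)
        nn = np.argsort(dist)[:k]
        w = 1.0 / (dist[nn] + 1e-6)  # inverse-distance weights
        w /= w.sum()
        dense[d] = root + np.einsum('g,gvc->vc', w, offsets[nn])
    return dense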
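For ingredient 3), the 3D scene flow of the representation can be supervised by rendering it into image space and comparing against multi-view optical flow. Below is a simplified sketch assuming a pinhole camera at the origin and precomputed compositing weights; the tensor shapes and the loss are stand-ins, not the paper's exact formulation.

import numpy as np

def project(K, p):
    """Pinhole projection of world points p (N, 3) with intrinsics K (3, 3);
    the camera sits at the origin for brevity."""
    uv = (K @ p.T).T
    return uv[:, :2] / uv[:, 2:3]

def flow_loss(K, points, scene_flow, weights, observed_flow):
    """points:        (N, S, 3) ray samples (N rays, S samples per ray)
    scene_flow:    (N, S, 3) predicted 3D motion of each sample
    weights:       (N, S)    compositing weights from volumetric ray marching
    observed_flow: (N, 2)    optical flow measured at each ray's pixel"""
    # Expected surface point per ray, and the same point advected by the flow.
    p = np.einsum('ns,nsc->nc', weights, points)
    pf = np.einsum('ns,nsc->nc', weights, points + scene_flow)
    rendered_flow = project(K, pf) - project(K, p)
    return np.mean(np.sum((rendered_flow - observed_flow) ** 2, axis=-1))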
Related papers
- MonoHair: High-Fidelity Hair Modeling from a Monocular Video [40.27026803872373]
MonoHair is a generic framework to achieve high-fidelity hair reconstruction from a monocular video.
Our approach bifurcates the hair modeling process into two main stages: precise exterior reconstruction and interior structure inference.
Our experiments demonstrate that our method exhibits robustness across diverse hairstyles and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-03-27T08:48:47Z)
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn, from videos, personalized animatable 3D head avatars that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Text-Guided Generation and Editing of Compositional 3D Avatars [59.584042376006316]
Our goal is to create a realistic 3D facial avatar with hair and accessories using only a text description.
Existing methods either lack realism, produce unrealistic shapes, or do not support editing.
arXiv Detail & Related papers (2023-09-13T17:59:56Z)
- HQ3DAvatar: High Quality Controllable 3D Head Avatar [65.70885416855782]
This paper presents a novel approach to building highly photorealistic digital head avatars.
Our method learns a canonical space via an implicit function parameterized by a neural network.
At test time, our method is driven by a monocular RGB video.
arXiv Detail & Related papers (2023-03-25T13:56:33Z)
- Efficient Meshy Neural Fields for Animatable Human Avatars [87.68529918184494]
Efficiently digitizing high-fidelity animatable human avatars from videos is a challenging and active research topic.
Recent rendering-based neural representations open a new way for human digitization with their friendly usability and photo-realistic reconstruction quality.
We present EMA, a method that Efficiently learns Meshy neural fields to reconstruct animatable human Avatars.
arXiv Detail & Related papers (2023-03-23T00:15:34Z)
- NeuWigs: A Neural Dynamic Model for Volumetric Hair Capture and Animation [23.625243364572867]
The capture and animation of human hair are two of the major challenges in the creation of realistic avatars for virtual reality.
We present a two-stage approach that models hair independently from the head to address these challenges in a data-driven manner.
Our model outperforms the state of the art in novel view synthesis and is capable of creating novel hair animations without having to rely on hair observations as a driving signal.
arXiv Detail & Related papers (2022-12-01T16:09:54Z)
- Neural Strands: Learning Hair Geometry and Appearance from Multi-View Images [40.91569888920849]
We present Neural Strands, a novel learning framework for modeling accurate hair geometry and appearance from multi-view image inputs.
The learned hair model can be rendered in real-time from any viewpoint with high-fidelity view-dependent effects.
arXiv Detail & Related papers (2022-07-28T13:08:46Z)
- NeuralHDHair: Automatic High-fidelity Hair Modeling from a Single Image Using Implicit Neural Representations [40.14104266690989]
We introduce NeuralHDHair, a flexible, fully automatic system for modeling high-fidelity hair from a single image.
We propose a novel voxel-aligned implicit function (VIFu) to represent the global hair feature.
To improve the efficiency of a traditional hair growth algorithm, we adopt a local neural implicit function to grow strands based on the estimated 3D hair geometric features (a sketch of such a growth loop follows this entry).
arXiv Detail & Related papers (2022-05-09T10:39:39Z)
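For context on the growth step NeuralHDHair accelerates, here is a toy version of the classic strand-growth loop: Euler integration of an orientation field from a scalp root. `orientation` is a hypothetical callable; in the paper the growth direction comes from a local neural implicit function over learned 3D hair features.

import numpy as np

def grow_strand(root, orientation, step=0.005, n_steps=200):
    """Trace one strand from a scalp root by Euler-integrating an orientation field.
    `orientation` maps a 3D point to a growth direction."""
    pts = [np.asarray(root, dtype=float)]
    for _ in range(n_steps):
        d = orientation(pts[-1])
        d = d / (np.linalg.norm(d) + 1e-8)  # unit step direction
        pts.append(pts[-1] + step * d)
    return np.stack(pts)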
- HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars [65.82222842213577]
We propose a novel neural rendering pipeline, which synthesizes virtual human avatars from arbitrary poses efficiently and at high quality.
First, we learn to encode articulated human motions on a dense UV manifold of the human body surface.
We then leverage the encoded information on the UV manifold to construct a 3D volumetric representation (see the sketch after this list).
arXiv Detail & Related papers (2021-12-19T17:34:15Z)
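A rough, assumption-laden sketch of the HVTR pipeline shape described in the last entry: per-vertex pose features are scattered onto the body's UV manifold, then lifted into a coarse 3D feature volume for volumetric rendering. The resolutions, the nearest-texel scatter, and all names are illustrative, not HVTR's actual architecture.

import numpy as np

def encode_on_uv(vertex_feats, vertex_uvs, uv_res=128):
    """Scatter per-vertex features (V, C) into a (uv_res, uv_res, C) UV map
    using UV coordinates (V, 2) in [0, 1]. Last write wins; a real encoder
    would rasterize and run a CNN over the map."""
    uv_map = np.zeros((uv_res, uv_res, vertex_feats.shape[1]))
    ij = np.clip((vertex_uvs * (uv_res - 1)).astype(int), 0, uv_res - 1)
    uv_map[ij[:, 1], ij[:, 0]] = vertex_feats
    return uv_map

def lift_to_volume(uv_map, vertex_uvs, vertex_pos, vol_res=32):
    """Splat UV features into a coarse 3D feature volume at the posed vertex
    positions (V, 3), assumed normalized to [0, 1]^3."""
    uv_res = uv_map.shape[0]
    vol = np.zeros((vol_res, vol_res, vol_res, uv_map.shape[-1]))
    ij = np.clip((vertex_uvs * (uv_res - 1)).astype(int), 0, uv_res - 1)
    xyz = np.clip((vertex_pos * (vol_res - 1)).astype(int), 0, vol_res - 1)
    vol[xyz[:, 0], xyz[:, 1], xyz[:, 2]] = uv_map[ij[:, 1], ij[:, 0]]
    return vol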