NeuWigs: A Neural Dynamic Model for Volumetric Hair Capture and
Animation
- URL: http://arxiv.org/abs/2212.00613v3
- Date: Thu, 12 Oct 2023 00:27:09 GMT
- Title: NeuWigs: A Neural Dynamic Model for Volumetric Hair Capture and
Animation
- Authors: Ziyan Wang, Giljoo Nam, Tuur Stuyck, Stephen Lombardi, Chen Cao, Jason
Saragih, Michael Zollhoefer, Jessica Hodgins and Christoph Lassner
- Abstract summary: The capture and animation of human hair are two of the major challenges in the creation of realistic avatars for virtual reality.
We present a two-stage approach that models hair independently from the head to address these challenges in a data-driven manner.
Our model outperforms the state of the art in novel view synthesis and is capable of creating novel hair animations without having to rely on hair observations as a driving signal.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The capture and animation of human hair are two of the major challenges in
the creation of realistic avatars for virtual reality. Both problems are
highly challenging because hair has complex geometry and appearance and
exhibits intricate motion. In this paper, we present a two-stage approach
that models hair independently from the head to address these challenges in a
data-driven manner. The first stage, state compression, learns a
low-dimensional latent space of 3D hair states containing motion and
appearance, via a novel autoencoder-as-a-tracker strategy. To better
disentangle the hair and head in appearance learning, we employ multi-view hair
segmentation masks in combination with a differentiable volumetric renderer.
The second stage learns a novel hair dynamics model that performs temporal hair
transfer based on the discovered latent codes. To improve stability while
driving our dynamics model, we employ the 3D point-cloud autoencoder from
the compression stage to de-noise the hair state. Our model outperforms
the state of the art in novel view synthesis and is capable of creating novel
hair animations without relying on hair observations as a driving signal.
Project page: https://ziyanw1.github.io/neuwigs/.
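To make the two-stage design concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the module names, network sizes, and head-motion conditioning signal are illustrative assumptions, and the differentiable volumetric rendering and segmentation-mask supervision are omitted. Stage 1 compresses a 3D hair point cloud into a low-dimensional latent code with an autoencoder; stage 2 predicts the next latent code from previous codes and a driving signal, reusing the stage-1 autoencoder as a de-noiser by decoding and re-encoding each predicted state.

```python
# Hypothetical sketch of a NeuWigs-style two-stage pipeline.
# All names, dimensions, and conditioning signals are illustrative assumptions.
import torch
import torch.nn as nn


class HairStateAutoencoder(nn.Module):
    """Stage 1: compress a 3D hair state (point cloud) into a latent code."""

    def __init__(self, num_points: int = 4096, latent_dim: int = 128):
        super().__init__()
        in_dim = num_points * 3
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, in_dim),
        )

    def encode(self, points: torch.Tensor) -> torch.Tensor:
        return self.encoder(points.flatten(1))          # (B, latent_dim)

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.decoder(z).view(z.shape[0], -1, 3)  # (B, N, 3)


class HairDynamicsModel(nn.Module):
    """Stage 2: predict the next latent hair state from past states + head motion."""

    def __init__(self, latent_dim: int = 128, drive_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim + drive_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z_prev: torch.Tensor, z_curr: torch.Tensor,
                head_motion: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z_prev, z_curr, head_motion], dim=-1))


def rollout(ae, dyn, z_prev, z_curr, head_motions):
    """Autoregressive animation: each predicted latent state is de-noised by a
    decode -> encode round trip through the stage-1 autoencoder."""
    states = []
    for motion in head_motions:
        z_next = dyn(z_prev, z_curr, motion)
        points = ae.decode(z_next)   # decode to an explicit hair point cloud
        z_next = ae.encode(points)   # re-encode to project back onto the
                                     # learned manifold (de-noising)
        states.append(points)
        z_prev, z_curr = z_curr, z_next
    return states
```

The point mirrored here is that, at animation time, the dynamics model never consumes hair observations: it runs purely in latent space, and the autoencoder round trip keeps predicted states close to the learned manifold of plausible hair configurations.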
Related papers
- Synthetic Prior for Few-Shot Drivable Head Avatar Inversion [61.51887011274453]
We present SynShot, a novel method for the few-shot inversion of a drivable head avatar based on a synthetic prior.
Inspired by machine learning models trained solely on synthetic data, we propose a method that learns a prior model from a large dataset of synthetic heads.
We model the head avatar using 3D Gaussian splatting and a convolutional encoder-decoder that outputs Gaussian parameters in UV texture space.
arXiv Detail & Related papers (2025-01-12T19:01:05Z)
- StrandHead: Text to Strand-Disentangled 3D Head Avatars Using Hair Geometric Priors [33.00657081996672]
StrandHead is a novel text-to-3D head avatar generation method capable of generating disentangled 3D hair with a strand-based representation.
We show that StrandHead achieves state-of-the-art realism and diversity in the generated 3D heads and hair.
The generated 3D hair can also be easily implemented in the Unreal Engine for physical simulation and other applications.
arXiv Detail & Related papers (2024-12-16T09:17:36Z)
- SimAvatar: Simulation-Ready Avatars with Layered Hair and Clothing [59.44721317364197]
We introduce SimAvatar, a framework designed to generate simulation-ready clothed 3D human avatars from a text prompt.
Our method is the first to produce highly realistic, fully simulation-ready 3D avatars, surpassing the capabilities of current approaches.
arXiv Detail & Related papers (2024-12-12T18:35:26Z)
- MonoHair: High-Fidelity Hair Modeling from a Monocular Video [40.27026803872373]
MonoHair is a generic framework to achieve high-fidelity hair reconstruction from a monocular video.
Our approach bifurcates the hair modeling process into two main stages: precise exterior reconstruction and interior structure inference.
Our experiments demonstrate that our method exhibits robustness across diverse hairstyles and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-03-27T08:48:47Z)
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- HairStep: Transfer Synthetic to Real Using Strand and Depth Maps for Single-View 3D Hair Modeling [55.57803336895614]
We tackle the challenging problem of learning-based single-view 3D hair modeling.
We first propose a novel intermediate representation, termed as HairStep, which consists of a strand map and a depth map.
It is found that HairStep not only provides sufficient information for accurate 3D hair modeling, but is also feasible to infer from real images.
arXiv Detail & Related papers (2023-03-05T15:28:13Z)
- Controllable Radiance Fields for Dynamic Face Synthesis [125.48602100893845]
We study how to explicitly control the generative synthesis of face dynamics exhibiting non-rigid motion, and propose the Controllable Radiance Field (CoRF).
On head image/video data we show that CoRFs are 3D-aware while enabling editing of identity, viewing directions, and motion.
arXiv Detail & Related papers (2022-10-11T23:17:31Z)
- HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture [11.645769995924548]
Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, the complex physical interaction and its non-trivial visual appearance.
In this paper, we use a novel volumetric hair representation that is composed of thousands of primitives.
Our method can not only create realistic renders of recorded multi-view sequences, but also create renderings for new hair configurations by providing new control signals.
arXiv Detail & Related papers (2021-12-13T18:57:50Z)
- Real-time Deep Dynamic Characters [95.5592405831368]
We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state-of-the-art approaches.
arXiv Detail & Related papers (2021-05-04T23:28:55Z)