Structured Local Radiance Fields for Human Avatar Modeling
- URL: http://arxiv.org/abs/2203.14478v1
- Date: Mon, 28 Mar 2022 03:43:52 GMT
- Title: Structured Local Radiance Fields for Human Avatar Modeling
- Authors: Zerong Zheng, Han Huang, Tao Yu, Hongwen Zhang, Yandong Guo, Yebin Liu
- Abstract summary: We introduce a novel representation on the basis of recent neural scene rendering techniques.
The core of our representation is a set of structured local radiance fields, anchored to the pre-defined nodes sampled on a statistical human body template.
Our method enables automatic construction of animatable human avatars for various types of clothes without the need for scanning subject-specific templates.
- Score: 40.123537202191564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is extremely challenging to create an animatable clothed human avatar from
RGB videos, especially for loose clothes due to the difficulties in motion
modeling. To address this problem, we introduce a novel representation on the
basis of recent neural scene rendering techniques. The core of our
representation is a set of structured local radiance fields, which are anchored
to the pre-defined nodes sampled on a statistical human body template. These
local radiance fields not only leverage the flexibility of implicit
representation in shape and appearance modeling, but also factorize cloth
deformations into skeleton motions, node residual translations and the dynamic
detail variations inside each individual radiance field. To learn our
representation from RGB data and facilitate pose generalization, we propose to
learn the node translations and the detail variations in a conditional
generative latent space. Overall, our method enables automatic construction of
animatable human avatars for various types of clothes without the need for
scanning subject-specific templates, and can generate realistic images with
dynamic details for novel poses. Experiments show that our method outperforms
state-of-the-art methods both qualitatively and quantitatively.
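To make the representation more concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' code) of how a set of local radiance fields anchored to body-template nodes could be queried: each 3D sample point is expressed in the local frames of its nearest nodes, a small shared MLP predicts color and density conditioned on a per-frame latent code, and the per-node predictions are blended by distance-based weights. All class names, parameters, and shapes here are illustrative assumptions.

```python
# Hypothetical sketch of querying structured local radiance fields.
# Assumptions: node positions come from a posed body template (e.g. SMPL),
# and a per-frame latent code conditions the dynamic detail variations.
import torch
import torch.nn as nn


class LocalRadianceFields(nn.Module):
    def __init__(self, num_nodes=128, latent_dim=32, hidden=64, radius=0.1):
        super().__init__()
        self.radius = radius
        # One shared small MLP; the node identity is injected via a learned embedding.
        self.node_embed = nn.Embedding(num_nodes, 16)
        self.mlp = nn.Sequential(
            nn.Linear(3 + 16 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (r, g, b, sigma)
        )

    def forward(self, points, node_pos, latent, k=4):
        """points: (P, 3) sample points in the posed space,
        node_pos: (N, 3) posed node positions (skeleton motion + residual translation),
        latent: (latent_dim,) per-frame code for dynamic detail variations."""
        # Find the k nearest nodes for every sample point.
        dists = torch.cdist(points, node_pos)                  # (P, N)
        d_k, idx = dists.topk(k, largest=False)                # (P, k)
        # Express each point in the local frame of each selected node.
        local = points[:, None, :] - node_pos[idx]             # (P, k, 3)
        feat = torch.cat([
            local,
            self.node_embed(idx),                              # (P, k, 16)
            latent.expand(*idx.shape, -1),                     # (P, k, latent_dim)
        ], dim=-1)
        out = self.mlp(feat)                                   # (P, k, 4)
        # Blend per-node predictions with Gaussian weights on node distance.
        w = torch.exp(-(d_k / self.radius) ** 2)
        w = w / (w.sum(dim=-1, keepdim=True) + 1e-8)
        rgb = torch.sigmoid((w[..., None] * out[..., :3]).sum(dim=1))
        sigma = torch.relu((w * out[..., 3]).sum(dim=1))
        return rgb, sigma
```

In a full pipeline, `node_pos` would be driven by the skeleton pose plus the learned residual translations, `latent` would be drawn from the conditional generative latent space described above, and standard volume rendering along camera rays would accumulate the blended (rgb, sigma) samples into images.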
Related papers
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method is able to render natural garment dynamics that deviate strongly from the body, and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- Relightable Neural Actor with Intrinsic Decomposition and Pose Control [80.06094206522668]
We propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted.
For training, our method solely requires a multi-view recording of the human under a known, but static lighting condition.
To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors.
arXiv Detail & Related papers (2023-12-18T14:30:13Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- X-Avatar: Expressive Human Avatars [33.24502928725897]
We present X-Avatar, a novel avatar model that captures the full expressiveness of digital humans to bring about life-like experiences in telepresence, AR/VR and beyond.
Our method models bodies, hands, facial expressions and appearance in a holistic fashion and can be learned from either full 3D scans or RGB-D data.
arXiv Detail & Related papers (2023-03-08T18:59:39Z)
- ARAH: Animatable Volume Rendering of Articulated Human SDFs [37.48271522183636]
We propose a model to create animatable clothed human avatars with detailed geometry that generalize well to out-of-distribution poses.
Our algorithm enables efficient point sampling and accurate point canonicalization while generalizing well to unseen poses.
Our method achieves state-of-the-art performance on geometry and appearance reconstruction while creating animatable avatars.
arXiv Detail & Related papers (2022-10-18T17:56:59Z)
- Neural-GIF: Neural Generalized Implicit Functions for Animating People in Clothing [49.32522765356914]
We learn to animate people in clothing as a function of the body pose.
We learn to map every point in the space to a canonical space, where a learned deformation field is applied to model non-rigid effects (see the sketch after this list).
Neural-GIF can be trained on raw 3D scans and reconstructs detailed complex surface geometry and deformations.
arXiv Detail & Related papers (2021-08-19T17:25:16Z)
- Animatable Neural Radiance Fields from Monocular RGB Video [72.6101766407013]
We present animatable neural radiance fields for detailed human avatar creation from monocular videos.
Our approach extends neural radiance fields to the dynamic scenes with human movements via introducing explicit pose-guided deformation.
In experiments we show that the proposed approach achieves 1) implicit human geometry and appearance reconstruction with high-quality details, 2) photo-realistic rendering of the human from arbitrary views, and 3) animation of the human with arbitrary poses.
arXiv Detail & Related papers (2021-06-25T13:32:23Z)
- Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction [9.747648609960185]
We present dynamic neural radiance fields for modeling the appearance and dynamics of a human face.
In particular, for telepresence applications in AR or VR, a faithful reproduction of the appearance, including novel viewpoints and head poses, is required.
arXiv Detail & Related papers (2020-12-05T16:01:16Z)
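Several of the papers above (notably Neural-GIF and the animatable radiance field works) share the idea of warping sample points from the posed observation space back to a canonical space before querying a template field. The sketch below is a minimal, hypothetical illustration of that pattern, assuming precomputed blend-skinning weights and a small learned residual deformation network; the names and shapes do not correspond to any specific paper's code.

```python
# Hypothetical sketch of posed-to-canonical warping:
# inverse linear blend skinning followed by a learned non-rigid residual.
import torch
import torch.nn as nn


class CanonicalWarp(nn.Module):
    def __init__(self, num_joints=24, pose_dim=72, hidden=64):
        super().__init__()
        # Small MLP predicting a pose-dependent non-rigid offset in canonical space.
        self.residual = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, points, skin_weights, bone_transforms, pose):
        """points: (P, 3) samples in the posed space,
        skin_weights: (P, J) blend-skinning weights (e.g. from the nearest template vertex),
        bone_transforms: (J, 4, 4) canonical-to-posed bone transforms,
        pose: (pose_dim,) pose parameters conditioning the residual field."""
        # Blend the bone transforms per point and invert them (inverse LBS).
        blended = torch.einsum('pj,jmn->pmn', skin_weights, bone_transforms)  # (P, 4, 4)
        inv = torch.inverse(blended)
        homo = torch.cat([points, torch.ones_like(points[:, :1])], dim=-1)    # (P, 4)
        canonical = torch.einsum('pmn,pn->pm', inv, homo)[:, :3]              # (P, 3)
        # Add a learned pose-conditioned offset for non-rigid cloth effects.
        cond = torch.cat([canonical, pose.expand(points.shape[0], -1)], dim=-1)
        return canonical + self.residual(cond)
```

The design choice common to these methods is that rigid articulation is handled analytically by the skinning transforms, while only the remaining pose-dependent cloth deformation is left to the learned residual field, which keeps the network small and improves generalization to unseen poses.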
This list is automatically generated from the titles and abstracts of the papers on this site.