ARAH: Animatable Volume Rendering of Articulated Human SDFs
- URL: http://arxiv.org/abs/2210.10036v1
- Date: Tue, 18 Oct 2022 17:56:59 GMT
- Title: ARAH: Animatable Volume Rendering of Articulated Human SDFs
- Authors: Shaofei Wang and Katja Schwarz and Andreas Geiger and Siyu Tang
- Abstract summary: We propose a model to create animatable clothed human avatars with detailed geometry that generalize well to out-of-distribution poses.
Our algorithm enables efficient point sampling and accurate point canonicalization while generalizing well to unseen poses.
Our method achieves state-of-the-art performance on geometry and appearance reconstruction while creating animatable avatars.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Combining human body models with differentiable rendering has recently
enabled animatable avatars of clothed humans from sparse sets of multi-view RGB
videos. While state-of-the-art approaches achieve realistic appearance with
neural radiance fields (NeRF), the inferred geometry often lacks detail due to
missing geometric constraints. Further, animating avatars in
out-of-distribution poses is not yet possible because the mapping from
observation space to canonical space does not generalize faithfully to unseen
poses. In this work, we address these shortcomings and propose a model to
create animatable clothed human avatars with detailed geometry that generalize
well to out-of-distribution poses. To achieve detailed geometry, we combine an
articulated implicit surface representation with volume rendering. For
generalization, we propose a novel joint root-finding algorithm for
simultaneous ray-surface intersection search and correspondence search. Our
algorithm enables efficient point sampling and accurate point canonicalization
while generalizing well to unseen poses. We demonstrate that our proposed
pipeline can generate clothed avatars with high-quality pose-dependent geometry
and appearance from a sparse set of multi-view RGB videos. Our method achieves
state-of-the-art performance on geometry and appearance reconstruction while
creating animatable avatars that generalize well to out-of-distribution poses
beyond the small number of training poses.
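To make the proposed joint root-finding concrete, the sketch below shows the idea in NumPy under strong simplifying assumptions: `sdf` is a hand-coded unit sphere standing in for the learned canonical SDF, `forward_skin` is a single rigid transform standing in for learned forward skinning, and the Jacobian is finite-differenced rather than obtained through the networks. All names are illustrative, not the paper's API.

```python
# Minimal sketch of joint root-finding: solve simultaneously for the ray
# depth t and the canonical correspondence x_c of the ray-surface hit.
import numpy as np

def sdf(x_c):
    # Toy canonical-space SDF: a unit sphere stands in for the learned body SDF.
    return np.linalg.norm(x_c) - 1.0

def forward_skin(x_c, pose):
    # Toy forward map: one rigid transform stands in for pose-conditioned
    # linear blend skinning into observation space.
    R, t = pose
    return R @ x_c + t

def joint_root_find(ray_o, ray_d, pose, x_init, t_init, iters=20, tol=1e-8):
    """Newton iteration on the 4D system
        F(x_c, t) = [forward_skin(x_c) - (ray_o + t * ray_d); sdf(x_c)] = 0,
    i.e. x_c lies on the canonical surface AND maps onto the ray at depth t,
    solving intersection and correspondence simultaneously."""
    z = np.concatenate([x_init, [t_init]])  # stacked unknowns (x_c, t)

    def F(z):
        x_c, t = z[:3], z[3]
        res_skin = forward_skin(x_c, pose) - (ray_o + t * ray_d)  # 3 equations
        return np.append(res_skin, sdf(x_c))                      # +1 equation

    for _ in range(iters):
        f = F(z)
        if np.linalg.norm(f) < tol:
            break
        # Finite-difference Jacobian (4x4); a real implementation would
        # differentiate through the learned networks instead.
        eps = 1e-6
        J = np.empty((4, 4))
        for j in range(4):
            dz = np.zeros(4)
            dz[j] = eps
            J[:, j] = (F(z + dz) - f) / eps
        z = z - np.linalg.solve(J, f)
    return z[:3], z[3]  # canonical intersection point, ray depth

# Usage: a ray hits the sphere that was rigidly "posed" to be centered at z=2.
pose = (np.eye(3), np.array([0.0, 0.0, 2.0]))
ray_o, ray_d = np.zeros(3), np.array([0.0, 0.0, 1.0])
x_c, t = joint_root_find(ray_o, ray_d, pose,
                         x_init=np.array([0.1, 0.0, -0.9]), t_init=0.5)
print(x_c, t)  # converges to x_c ~ [0, 0, -1] on the canonical sphere, t ~ 1
```

Solving the 4D system couples the two searches: the same Newton step updates the ray depth and the canonical correspondence, which is what keeps point sampling and canonicalization consistent for unseen poses.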
Related papers
- HR Human: Modeling Human Avatars with Triangular Mesh and High-Resolution Textures from Videos [52.23323966700072]
We present a framework for acquiring human avatars with high-resolution physically-based material textures and meshes from monocular video.
Our method introduces a novel information fusion strategy to combine the information from the monocular video and synthesize virtual multi-view images.
Experiments show that our approach outperforms previous representations in terms of fidelity, and the explicit output supports deployment on common triangular-mesh renderers.
arXiv Detail & Related papers (2024-05-18T11:49:09Z)
- Neural Image-based Avatars: Generalizable Radiance Fields for Human Avatar Modeling [28.242591786838936]
We present a method that enables novel-view and novel-pose synthesis of arbitrary human performers from sparse multi-view images.
A key ingredient of our method is a hybrid appearance blending module that combines the advantages of the implicit body NeRF representation and image-based rendering.
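The blending idea can be sketched in a few lines. A plausible reading is that a learned module predicts a per-sample blend weight; in this sketch the weight is a fixed scalar and all names are illustrative, not the paper's actual module.

```python
# Minimal sketch of hybrid appearance blending: the output color is a
# convex combination of the color predicted by a body NeRF and the color
# aggregated from source images (image-based rendering, IBR).
import numpy as np

def blend_appearance(c_nerf, c_ibr, w):
    """Blend NeRF and IBR colors with a per-sample weight w in [0, 1]."""
    return w * c_nerf + (1.0 - w) * c_ibr

c_nerf = np.array([0.9, 0.5, 0.4])  # color from the implicit body NeRF
c_ibr = np.array([0.7, 0.6, 0.5])   # color reprojected from source views
print(blend_appearance(c_nerf, c_ibr, w=0.3))  # leans toward the IBR color
```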
arXiv Detail & Related papers (2023-04-10T23:53:28Z)
- AniPixel: Towards Animatable Pixel-Aligned Human Avatar [65.7175527782209]
AniPixel is a novel animatable and generalizable human avatar reconstruction method.
We propose a neural skinning field based on skeleton-driven deformation to establish the target-to-canonical and canonical-to-observation correspondences.
Experiments show that AniPixel renders comparable novel views while delivering better novel pose animation results than state-of-the-art methods.
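For context, the sketch below shows plain skeleton-driven linear blend skinning (LBS), the deformation model such skinning fields build on; the hand-coded per-point weights stand in for what a neural skinning field would predict. Names are illustrative, not AniPixel's code.

```python
# Minimal sketch of linear blend skinning: a canonical point is mapped to
# observation space by a weight-blended combination of per-bone transforms.
import numpy as np

def lbs(x_c, weights, bone_transforms):
    """Map a canonical point through a blend of per-bone 4x4 rigid transforms."""
    x_h = np.append(x_c, 1.0)  # homogeneous coordinates
    blended = sum(w * T for w, T in zip(weights, bone_transforms))
    return (blended @ x_h)[:3]

# Two bones: identity and a 90-degree rotation about the z axis.
T0 = np.eye(4)
T1 = np.eye(4)
T1[:3, :3] = np.array([[0.0, -1.0, 0.0],
                       [1.0,  0.0, 0.0],
                       [0.0,  0.0, 1.0]])

x_c = np.array([1.0, 0.0, 0.0])
print(lbs(x_c, [0.5, 0.5], [T0, T1]))  # halfway blend: [0.5, 0.5, 0.0]
```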
arXiv Detail & Related papers (2023-02-07T11:04:14Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis [52.720314035084215]
This work targets synthesizing free-viewpoint images of arbitrary human performers using a general deep learning framework.
We present a simple yet powerful framework, named Generalizable Neural Performer (GNR), that learns a generalizable and robust neural body representation.
Experiments on GeneBody-1.0 and ZJU-Mocap show that our method is more robust than recent state-of-the-art generalizable methods.
arXiv Detail & Related papers (2022-04-25T17:14:22Z)
- Structured Local Radiance Fields for Human Avatar Modeling [40.123537202191564]
We introduce a novel representation on the basis of recent neural scene rendering techniques.
The core of our representation is a set of structured local radiance fields, anchored to the pre-defined nodes sampled on a statistical human body template.
Our method enables automatic construction of animatable human avatars for various types of clothes without the need for scanning subject-specific templates.
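A minimal sketch of how such a structured set of local fields might be queried, assuming distance-weighted blending of the nearest nodes' fields; the node layout, falloff, and blending rule here are illustrative placeholders, not the paper's actual design.

```python
# Minimal sketch of structured local radiance fields: each node on a body
# template carries its own small field; a sample point is evaluated by
# blending the fields of its nearest nodes.
import numpy as np

nodes = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.5, 0.0],
                  [0.0, 1.0, 0.0]])  # nodes sampled on a body template

def local_field(i, x_local):
    # Hypothetical per-node field: density falls off from node i's center.
    return np.exp(-np.linalg.norm(x_local) ** 2 / 0.05)

def query_density(x, k=2):
    """Blend the k nearest nodes' local fields, weighted by inverse distance."""
    d = np.linalg.norm(nodes - x, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-8)
    w /= w.sum()
    return sum(wi * local_field(i, x - nodes[i]) for wi, i in zip(w, nearest))

print(query_density(np.array([0.0, 0.25, 0.0])))  # between the first two nodes
```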
arXiv Detail & Related papers (2022-03-28T03:43:52Z)
- Animatable Neural Radiance Fields from Monocular RGB Video [72.6101766407013]
We present animatable neural radiance fields for detailed human avatar creation from monocular videos.
Our approach extends neural radiance fields to dynamic scenes with human motion by introducing explicit pose-guided deformation.
In experiments we show that the proposed approach achieves 1) implicit human geometry and appearance reconstruction with high-quality details, 2) photo-realistic rendering of the human from arbitrary views, and 3) animation of the human with arbitrary poses.
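A minimal sketch of the canonicalize-then-query pattern behind pose-guided deformation, with a rigid transform standing in for the learned deformation and a toy density in place of the radiance field; all names are illustrative.

```python
# Minimal sketch of pose-guided deformation: observation-space samples are
# warped into a canonical space before querying the radiance field.
import numpy as np

def deform(x_obs, pose):
    # Hypothetical deformation: a learned, pose-conditioned warp would go
    # here; we invert a single rigid pose transform instead.
    R, t = pose
    return R.T @ (x_obs - t)

def radiance_field(x_c):
    # Hypothetical canonical radiance field: toy density plus constant color.
    density = np.exp(-np.linalg.norm(x_c) ** 2)
    color = np.array([0.8, 0.6, 0.5])
    return density, color

def query(x_obs, pose):
    """Canonicalize an observation-space sample, then query the field."""
    return radiance_field(deform(x_obs, pose))

pose = (np.eye(3), np.array([0.0, 0.0, 2.0]))
print(query(np.array([0.0, 0.0, 2.0]), pose))  # canonical origin: density 1.0
```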
arXiv Detail & Related papers (2021-06-25T13:32:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.