H-NeRF: Neural Radiance Fields for Rendering and Temporal Reconstruction
of Humans in Motion
- URL: http://arxiv.org/abs/2110.13746v1
- Date: Tue, 26 Oct 2021 14:51:36 GMT
- Title: H-NeRF: Neural Radiance Fields for Rendering and Temporal Reconstruction
of Humans in Motion
- Authors: Hongyi Xu, Thiemo Alldieck, Cristian Sminchisescu
- Abstract summary: We present H-NeRF, neural radiance fields for rendering and temporal (4D) reconstruction of a human in motion as captured by a sparse set of cameras or even from a monocular video.
Our NeRF-inspired approach combines ideas from neural scene representation, novel-view synthesis, and implicit statistical geometric human representations.
- Score: 42.4185273307021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present H-NeRF, neural radiance fields for rendering and temporal (4D)
reconstruction of a human in motion as captured by a sparse set of cameras or
even from a monocular video. Our NeRF-inspired approach combines ideas from
neural scene representation, novel-view synthesis, and implicit statistical
geometric human representations. H-NeRF allows us to accurately synthesize
images of the observed subject under novel camera views and human poses. Instead of
learning a radiance field in empty space, we attach it to a structured implicit
human body model, represented using signed distance functions. This allows us
to robustly fuse information from sparse views and, at test time, to
extrapolate beyond the observed poses or views. Moreover, we apply geometric
constraints to co-learn the structure of the observed subject (including both
body and clothing) and to regularize the radiance field toward geometrically
plausible solutions. Extensive experiments on multiple datasets demonstrate the
robustness and accuracy of our approach and its generalization capabilities
beyond the sparse training set of poses and views.
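The core idea in the abstract, anchoring volume density to a signed distance function (SDF) of an implicit body model rather than learning it in empty space, can be illustrated with a minimal sketch. This is an assumption-laden toy (a sphere stands in for the body model, and `sdf_to_density` uses a generic SDF-to-density mapping), not H-NeRF's actual implementation:

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=0.5):
    """Toy stand-in for the structured implicit body model: SDF of a sphere."""
    return np.linalg.norm(points - center, axis=-1) - radius

def sdf_to_density(sdf, beta=0.05):
    """Map signed distance to volume density: high inside the surface,
    decaying smoothly outside. The geometry thus regularizes where the
    radiance field can place mass."""
    return (1.0 / beta) * np.where(
        sdf <= 0,
        1.0 - 0.5 * np.exp(sdf / beta),
        0.5 * np.exp(-sdf / beta),
    )

def render_ray(origin, direction, n_samples=64, t_near=0.0, t_far=2.0):
    """Volume-render accumulated opacity along one ray, with densities
    taken from the SDF instead of a free-space MLP."""
    t = np.linspace(t_near, t_far, n_samples)
    points = origin + t[:, None] * direction[None, :]
    density = sdf_to_density(sphere_sdf(points))
    delta = (t_far - t_near) / n_samples
    alpha = 1.0 - np.exp(-density * delta)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = transmittance * alpha
    return weights.sum()  # accumulated opacity in [0, 1]

# A ray passing through the sphere saturates; a ray missing it stays transparent.
hit = render_ray(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]))
miss = render_ray(np.array([2.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]))
```

Because density is tied to the body-model SDF, rays only accumulate opacity near the modeled surface, which is what lets such an approach fuse sparse views robustly and extrapolate to unseen poses.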
Related papers
- NeRF-VO: Real-Time Sparse Visual Odometry with Neural Radiance Fields [13.178099653374945]
NeRF-VO integrates learning-based sparse visual odometry for low-latency camera tracking with a neural radiance scene representation.
We surpass SOTA methods in pose estimation accuracy, novel view fidelity, and dense reconstruction quality across a variety of synthetic and real-world datasets.
arXiv Detail & Related papers (2023-12-20T22:42:17Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework Generalizable Model-based Neural Radiance Fields to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis [52.720314035084215]
This work targets using a general deep learning framework to synthesize free-viewpoint images of arbitrary human performers.
We present a simple yet powerful framework, named Generalizable Neural Performer (GNR), that learns a generalizable and robust neural body representation.
Experiments on GeneBody-1.0 and ZJU-Mocap show better robustness of our methods than recent state-of-the-art generalizable methods.
arXiv Detail & Related papers (2022-04-25T17:14:22Z)
- Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering [34.80975358673563]
We propose a novel approach that learns generalizable neural radiance fields based on a parametric human body model for robust performance capture.
Experiments on the ZJU-MoCap and AIST datasets show that our method significantly outperforms recent generalizable NeRF methods on unseen identities and poses.
arXiv Detail & Related papers (2021-09-15T17:32:46Z)
- Animatable Neural Radiance Fields from Monocular RGB Video [72.6101766407013]
We present animatable neural radiance fields for detailed human avatar creation from monocular videos.
Our approach extends neural radiance fields to dynamic scenes with human movements by introducing explicit pose-guided deformation.
In experiments we show that the proposed approach achieves 1) implicit human geometry and appearance reconstruction with high-quality details, 2) photo-realistic rendering of the human from arbitrary views, and 3) animation of the human with arbitrary poses.
arXiv Detail & Related papers (2021-06-25T13:32:23Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
arXiv Detail & Related papers (2020-11-27T19:06:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.