Animatable Neural Radiance Fields from Monocular RGB Video
- URL: http://arxiv.org/abs/2106.13629v1
- Date: Fri, 25 Jun 2021 13:32:23 GMT
- Title: Animatable Neural Radiance Fields from Monocular RGB Video
- Authors: Jianchuan Chen, Ying Zhang, Di Kang, Xuefei Zhe, Linchao Bao, Huchuan Lu
- Abstract summary: We present animatable neural radiance fields for detailed human avatar creation from monocular videos.
Our approach extends neural radiance fields to dynamic scenes with human movement by introducing an explicit pose-guided deformation.
In experiments we show that the proposed approach achieves 1) implicit human geometry and appearance reconstruction with high-quality details, 2) photo-realistic rendering of the human from arbitrary views, and 3) animation of the human with arbitrary poses.
- Score: 72.6101766407013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present animatable neural radiance fields for detailed human avatar creation from monocular videos. Our approach extends neural radiance fields (NeRF) to dynamic scenes with human movement by introducing an explicit pose-guided deformation while learning the scene representation network. In particular, we estimate the human pose for each frame and learn a constant canonical space for the detailed human template, which enables natural shape deformation from the observation space to the canonical space under the explicit control of the pose parameters. To compensate for inaccurate pose estimation, we introduce a pose refinement strategy that updates the initial pose during the learning process, which not only helps to learn a more accurate human reconstruction but also accelerates convergence. In experiments, we show that the proposed approach achieves 1) implicit human geometry and appearance reconstruction with high-quality details, 2) photo-realistic rendering of the human from arbitrary views, and 3) animation of the human with arbitrary poses.
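To make the abstract's two key ideas concrete, below is a minimal, hypothetical sketch (PyTorch) of pose-guided deformation with pose refinement: sample points in observation space are warped into the constant canonical space by blending inverse per-bone rigid transforms derived from the frame's pose, and the per-frame poses are learnable parameters so the rendering loss can refine the initial estimates. The class name, the SMPL-style 72-dimensional axis-angle pose, and the LBS-style skinning are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of explicit pose-guided deformation with pose refinement.
import torch
import torch.nn as nn


class PoseGuidedDeformation(nn.Module):
    """Warps observation-space points into a constant canonical space using
    per-bone rigid transforms derived from a (refined) per-frame pose."""

    def __init__(self, num_frames: int, num_bones: int, pose_dim: int = 72):
        super().__init__()
        # Initial per-frame pose estimates (assumed SMPL-style axis-angle).
        # Registering them as a Parameter means gradients from the photometric
        # loss update them during training -- the "pose refinement" step.
        self.pose = nn.Parameter(torch.zeros(num_frames, pose_dim))
        self.num_bones = num_bones

    def bone_transforms(self, pose: torch.Tensor) -> torch.Tensor:
        # Placeholder: map a pose vector to per-bone 4x4 rigid transforms.
        # A real system would run forward kinematics over a skeleton here.
        eye = torch.eye(4).expand(self.num_bones, 4, 4)
        return eye + 0.0 * pose.sum()  # keeps the autograd graph connected

    def forward(self, x_obs, frame_idx, skinning_weights):
        # x_obs: (N, 3) points sampled along camera rays in observation space
        # skinning_weights: (N, B) soft assignment of each point to bones
        T = self.bone_transforms(self.pose[frame_idx])  # (B, 4, 4)
        T_inv = torch.inverse(T)                        # undo the posed motion
        x_h = torch.cat([x_obs, torch.ones_like(x_obs[:, :1])], dim=-1)
        # Blend the inverse bone transforms per point (LBS-style), then apply.
        T_pt = torch.einsum("nb,bij->nij", skinning_weights, T_inv)
        return torch.einsum("nij,nj->ni", T_pt, x_h)[:, :3]  # canonical points
```

In a full pipeline, the returned canonical points would be fed to the canonical NeRF, and one optimizer would update the radiance field weights and the per-frame poses jointly; it is this joint optimization that, per the abstract, both improves reconstruction accuracy and accelerates convergence.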
Related papers
- Deformable 3D Gaussian Splatting for Animatable Human Avatars [50.61374254699761]
We propose a fully explicit approach to construct a digital avatar from as little as a single monocular sequence.
ParDy-Human is an explicit model for realistic dynamic human avatars that requires significantly fewer training views and images.
Our avatar learning is free of additional annotations such as Splat masks and can be trained with variable backgrounds, while inferring full-resolution images efficiently even on consumer hardware.
arXiv Detail & Related papers (2023-12-22T20:56:46Z)
- Relightable Neural Actor with Intrinsic Decomposition and Pose Control [80.06094206522668]
We propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted.
For training, our method requires only a multi-view recording of the human under a known but static lighting condition.
To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors.
arXiv Detail & Related papers (2023-12-18T14:30:13Z)
- Towards 4D Human Video Stylization [56.33756124829298]
We present a first step towards 4D (3D and time) human video stylization, which addresses style transfer, novel view synthesis and human animation.
We leverage Neural Radiance Fields (NeRFs) to represent videos, conducting stylization in the rendered feature space.
Our framework uniquely extends its capabilities to accommodate novel poses and viewpoints, making it a versatile tool for creative human video stylization.
arXiv Detail & Related papers (2023-12-07T08:58:33Z)
- Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition [40.46674919612935]
We present Vid2Avatar, a method to learn human avatars from monocular in-the-wild videos.
Our method does not require any ground-truth supervision or priors extracted from large datasets of clothed human scans.
It solves the tasks of scene decomposition and surface reconstruction directly in 3D by modeling both the human and the background in the scene jointly.
arXiv Detail & Related papers (2023-02-22T18:59:17Z)
- Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors [98.24047528960406]
We propose a new method for learning a generalized animatable neural representation from a sparse set of multi-view imagery of multiple persons.
The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and further animate them with the user's pose control.
arXiv Detail & Related papers (2022-08-25T07:36:46Z)
- Structured Local Radiance Fields for Human Avatar Modeling [40.123537202191564]
We introduce a novel representation built on recent neural scene rendering techniques.
The core of our representation is a set of structured local radiance fields, anchored to pre-defined nodes sampled on a statistical human body template.
Our method enables automatic construction of animatable human avatars for various types of clothes without the need for scanning subject-specific templates.
arXiv Detail & Related papers (2022-03-28T03:43:52Z)
- H-NeRF: Neural Radiance Fields for Rendering and Temporal Reconstruction of Humans in Motion [42.4185273307021]
We present H-NeRF, neural radiance fields for rendering and temporal (4D) reconstruction of a human in motion as captured by a sparse set of cameras or even from a monocular video.
Our NeRF-inspired approach combines ideas from neural scene representation, novel-view synthesis, and implicit statistical geometric human representations.
arXiv Detail & Related papers (2021-10-26T14:51:36Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as on novel pose synthesis, and can even generalize well to new poses that differ starkly from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction [9.747648609960185]
We present dynamic neural radiance fields for modeling the appearance and dynamics of a human face.
In particular, for telepresence applications in AR or VR, a faithful reproduction of the appearance, including novel viewpoints and head poses, is required.
arXiv Detail & Related papers (2020-12-05T16:01:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.