Animatable Neural Radiance Fields for Human Body Modeling
- URL: http://arxiv.org/abs/2105.02872v1
- Date: Thu, 6 May 2021 17:58:13 GMT
- Title: Animatable Neural Radiance Fields for Human Body Modeling
- Authors: Sida Peng, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai,
Hujun Bao, Xiaowei Zhou
- Abstract summary: This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce neural blend weight fields to produce the deformation fields.
Experiments show that our approach significantly outperforms recent human synthesis methods.
- Score: 54.41477114385557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the challenge of reconstructing an animatable human
model from a multi-view video. Some recent works have proposed to decompose a
dynamic scene into a canonical neural radiance field and a set of deformation
fields that map observation-space points to the canonical space, thereby
enabling them to learn the dynamic scene from images. However, they represent
the deformation field as a translational vector field or an SE(3) field, which makes
the optimization highly under-constrained. Moreover, these representations
cannot be explicitly controlled by input motions. Instead, we introduce neural
blend weight fields to produce the deformation fields. Based on the
skeleton-driven deformation, blend weight fields are used with 3D human
skeletons to generate observation-to-canonical and canonical-to-observation
correspondences. Since 3D human skeletons are more observable, they can
regularize the learning of deformation fields. Moreover, the learned blend
weight fields can be combined with input skeletal motions to generate new
deformation fields to animate the human model. Experiments show that our
approach significantly outperforms recent human synthesis methods. The code
will be available at https://zju3dv.github.io/animatable_nerf/.
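To make the skeleton-driven deformation concrete, here is a minimal NumPy sketch of linear blend skinning with a blend weight field. The learned neural blend weight field is replaced by a hypothetical distance-based softmax stub (`blend_weights`); `bone_transforms` are the per-bone canonical-to-observation rigid transforms derived from the 3D skeleton. This is an illustrative sketch under those assumptions, not the paper's implementation.

```python
import numpy as np

def blend_weights(x, joints, sharpness=5.0):
    """Stand-in for the learned neural blend weight field: a softmax over
    negative distances to the skeleton joints (hypothetical heuristic)."""
    d = np.linalg.norm(joints - x, axis=1)   # (K,) distance to each joint
    w = np.exp(-sharpness * d)
    return w / w.sum()

def skin_point(x_canonical, bone_transforms, joints):
    """Canonical-to-observation mapping: weighted sum of per-bone 4x4
    rigid transforms (linear blend skinning)."""
    w = blend_weights(x_canonical, joints)              # (K,)
    blended = np.tensordot(w, bone_transforms, axes=1)  # (4, 4)
    x_h = np.append(x_canonical, 1.0)                   # homogeneous coords
    return (blended @ x_h)[:3]

def unskin_point(x_observation, bone_transforms, joints_posed):
    """Observation-to-canonical mapping: evaluate blend weights at the
    observation-space point, then invert the blended transform."""
    w = blend_weights(x_observation, joints_posed)
    blended = np.tensordot(w, bone_transforms, axes=1)
    x_h = np.append(x_observation, 1.0)
    return (np.linalg.inv(blended) @ x_h)[:3]
```

In the paper the weights come from a learned field rather than this distance heuristic; the point of the design is that the weights, once learned, can be recombined with new skeletal motions to animate the model.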
Related papers
- Deformable 3D Gaussian Splatting for Animatable Human Avatars [50.61374254699761]
We propose a fully explicit approach to construct a digital avatar from as little as a single monocular sequence.
ParDy-Human constitutes an explicit model for realistic dynamic human avatars which requires significantly fewer training views and images.
Avatar learning is free of additional annotations such as Splat masks, can be trained with variable backgrounds, and infers full-resolution images efficiently even on consumer hardware.
arXiv Detail & Related papers (2023-12-22T20:56:46Z)
- Point-Based Radiance Fields for Controllable Human Motion Synthesis [7.322100850632633]
This paper proposes a controllable human motion synthesis method for fine-level deformation based on static point-based radiance fields.
Our method exploits the explicit point cloud to train the static 3D scene and applies the deformation by encoding point cloud translations.
Our approach significantly outperforms the state of the art on fine-level complex deformation and generalizes to 3D characters other than humans.
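As a toy illustration of this deformation-by-translation idea, here is a minimal NumPy sketch; `predict_translation` is a hypothetical stand-in for the trained deformation network, and the toy predictor below simply shifts every point upward.

```python
import numpy as np

def deform_point_cloud(points, features, predict_translation):
    """Apply a learned per-point translation to a static point cloud.
    `predict_translation` maps (point, feature) -> 3D offset
    (hypothetical interface for the trained deformation network)."""
    offsets = np.stack([predict_translation(p, f)
                        for p, f in zip(points, features)])
    return points + offsets

# Toy stand-in for the learned predictor: a fixed upward shift.
toy_predictor = lambda p, f: np.array([0.0, 0.01, 0.0])
cloud = np.random.rand(1000, 3)
feats = np.random.rand(1000, 32)
deformed = deform_point_cloud(cloud, feats, toy_predictor)
```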
arXiv Detail & Related papers (2023-10-05T08:27:33Z)
- MoDA: Modeling Deformable 3D Objects from Casual Videos [84.29654142118018]
We propose neural dual quaternion blend skinning (NeuDBS) to achieve 3D point deformation without skin-collapsing artifacts.
To register 2D pixels across different frames, we establish a correspondence with canonical feature embeddings that encode 3D points within the canonical space.
Our approach can reconstruct 3D models for humans and animals with better qualitative and quantitative performance than state-of-the-art methods.
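NeuDBS builds on dual quaternion blend skinning. The following NumPy sketch shows standard dual quaternion skinning (in the spirit of Kavan et al.), not the paper's exact NeuDBS formulation, assuming each bone is given as a unit quaternion plus a translation.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def bone_to_dual_quat(q, t):
    """Rigid transform (unit quaternion q, translation t) -> dual quaternion."""
    t_quat = np.array([0.0, *t])
    return q, 0.5 * qmul(t_quat, q)   # (real part, dual part)

def dqs(point, bones, weights):
    """Dual quaternion skinning: blend bone dual quaternions, normalize,
    then apply the blended rigid transform to the point. Unlike LBS, the
    blend stays on the rigid-motion manifold, avoiding skin collapse."""
    real, dual = np.zeros(4), np.zeros(4)
    pivot = bones[0][0]
    for (q, t), w in zip(bones, weights):
        r, d = bone_to_dual_quat(q, t)
        if np.dot(r, pivot) < 0:       # keep quaternions in one hemisphere
            r, d = -r, -d
        real += w * r
        dual += w * d
    n = np.linalg.norm(real)
    real, dual = real / n, dual / n
    rw, rv = real[0], real[1:]
    dw, dv = dual[0], dual[1:]
    rotated = point + 2.0 * np.cross(rv, np.cross(rv, point) + rw * point)
    translation = 2.0 * (rw * dv - dw * rv + np.cross(rv, dv))
    return rotated + translation
```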
arXiv Detail & Related papers (2023-04-17T13:49:04Z)
- NDF: Neural Deformable Fields for Dynamic Human Modelling [5.029703921995977]
We propose Neural Deformable Fields (NDF), a new representation for dynamic human digitization from a multi-view video.
Recent works have proposed to represent a dynamic human body with a shared canonical neural radiance field that is linked to the observation space by estimated deformation fields.
In this paper, we propose to learn a neural deformable field wrapped around a fitted parametric body model to represent the dynamic human.
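One plausible way to "wrap" a field around a fitted body model is to re-express query points in body-anchored coordinates. The sketch below is a hypothetical parameterization (not NDF's exact one): each query point is described by its nearest body vertex and a signed offset along that vertex's normal.

```python
import numpy as np
from scipy.spatial import cKDTree

def body_anchored_coords(x, body_vertices, body_normals):
    """Express a query point relative to a fitted body surface: the index
    of the nearest body vertex plus a signed distance along its normal
    (positive outside the body). Hypothetical parameterization."""
    tree = cKDTree(body_vertices)
    dist, idx = tree.query(x)
    offset = x - body_vertices[idx]
    signed = float(np.dot(offset, body_normals[idx]))
    return idx, signed
```

A deformable field conditioned on such surface-relative coordinates moves with the posed body for free, instead of having to learn the articulation from raw 3D positions.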
arXiv Detail & Related papers (2022-07-19T10:55:41Z)
- Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos [63.16888987770885]
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm.
We show that our approach significantly outperforms recent human modeling methods.
arXiv Detail & Related papers (2022-03-15T17:56:59Z)
- Learning Multi-Object Dynamics with Compositional Neural Radiance Fields [63.424469458529906]
We present a method to learn compositional predictive models from image observations based on implicit object encoders, Neural Radiance Fields (NeRFs), and graph neural networks.
NeRFs have become a popular choice for representing scenes due to their strong 3D prior.
For planning, we utilize RRTs in the learned latent space, where we can exploit our model and the implicit object encoder to make sampling the latent space informative and more efficient.
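A minimal latent-space RRT can be sketched as follows; `sample_latent` and `steer` are hypothetical stand-ins for sampling states via the object encoder and stepping toward a target under the learned dynamics model.

```python
import numpy as np

def latent_rrt(z_start, z_goal, sample_latent, steer,
               n_iters=2000, step=0.1, tol=0.05):
    """Minimal RRT over a learned latent space: grow a tree from z_start,
    biased occasionally toward z_goal, and return a path on success."""
    nodes = [z_start]
    parents = {0: None}
    for _ in range(n_iters):
        # 10% goal bias, otherwise sample a random latent state.
        z_rand = z_goal if np.random.rand() < 0.1 else sample_latent()
        i_near = min(range(len(nodes)),
                     key=lambda i: np.linalg.norm(nodes[i] - z_rand))
        z_new = steer(nodes[i_near], z_rand, step)
        nodes.append(z_new)
        parents[len(nodes) - 1] = i_near
        if np.linalg.norm(z_new - z_goal) < tol:   # reached the goal region
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None
```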
arXiv Detail & Related papers (2022-02-24T01:31:29Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
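The root-finding step solves forward_skin(x_c) = x_d for the canonical point x_c. The sketch below uses plain Newton iteration with a finite-difference Jacobian as a stand-in; SNARF itself uses Broyden's method with multiple skeleton-derived initializations to recover all correspondences.

```python
import numpy as np

def find_canonical(x_deformed, forward_skin, x_init, n_steps=20, eps=1e-4):
    """Solve forward_skin(x_c) = x_deformed for the canonical point x_c
    by Newton iteration with a central-difference Jacobian."""
    x = x_init.copy()
    for _ in range(n_steps):
        residual = forward_skin(x) - x_deformed
        if np.linalg.norm(residual) < 1e-6:
            break
        # Finite-difference 3x3 Jacobian of the skinning function at x.
        J = np.zeros((3, 3))
        for k in range(3):
            dx = np.zeros(3)
            dx[k] = eps
            J[:, k] = (forward_skin(x + dx) - forward_skin(x - dx)) / (2 * eps)
        x = x - np.linalg.solve(J, residual)
    return x
```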
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.