Animatable Implicit Neural Representations for Creating Realistic
Avatars from Videos
- URL: http://arxiv.org/abs/2203.08133v4
- Date: Thu, 4 May 2023 07:59:50 GMT
- Title: Animatable Implicit Neural Representations for Creating Realistic
Avatars from Videos
- Authors: Sida Peng, Zhen Xu, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Hujun Bao, Xiaowei Zhou
- Abstract summary: This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce a pose-driven deformation field based on the linear blend skinning algorithm.
We show that our approach significantly outperforms recent human modeling methods.
- Score: 63.16888987770885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the challenge of reconstructing an animatable human
model from a multi-view video. Some recent works have proposed to decompose a
non-rigidly deforming scene into a canonical neural radiance field and a set of
deformation fields that map observation-space points to the canonical space,
thereby enabling them to learn the dynamic scene from images. However, they
represent the deformation field as a translational vector field or an SE(3) field,
which makes the optimization highly under-constrained. Moreover, these
representations cannot be explicitly controlled by input motions. Instead, we
introduce a pose-driven deformation field based on the linear blend skinning
algorithm, which combines the blend weight field and the 3D human skeleton to
produce observation-to-canonical correspondences. Since 3D human skeletons are
more observable, they can regularize the learning of the deformation field.
Moreover, the pose-driven deformation field can be controlled by input skeletal
motions to generate new deformation fields to animate the canonical human
model. Experiments show that our approach significantly outperforms recent
human modeling methods. The code is available at
https://zju3dv.github.io/animatable_nerf/.
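For concreteness, the deformation step described above can be sketched as follows. This is a minimal NumPy illustration, not the released code: it assumes per-bone rigid transforms from canonical to observation space and blend weights that sum to one, and maps an observation-space point back to the canonical space by inverting the blended transform.

import numpy as np

def lbs_to_canonical(x_obs, bone_transforms, blend_weights):
    """Map an observation-space point to canonical space via inverse LBS.

    x_obs:            (3,) point in observation space.
    bone_transforms:  (K, 4, 4) canonical-to-observation rigid transforms,
                      e.g. derived from the posed 3D human skeleton.
    blend_weights:    (K,) skinning weights at x_obs (the "blend weight
                      field" would be a learned function of position).
    """
    # Blend the per-bone transforms with the skinning weights.
    blended = np.einsum('k,kij->ij', blend_weights, bone_transforms)
    # Invert the blended transform and apply it to the homogeneous point.
    x_h = np.append(x_obs, 1.0)
    x_can = np.linalg.inv(blended) @ x_h
    return x_can[:3]

# Toy usage: two bones, one rotated 90 degrees about the z-axis.
R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
G0, G1 = np.eye(4), np.eye(4)
G1[:3, :3] = R
w = np.array([0.3, 0.7])
print(lbs_to_canonical(np.array([0.5, 0.2, 0.0]), np.stack([G0, G1]), w))

Because the blended transform is driven entirely by the skeleton pose and the weight field, supplying a new skeletal motion yields a new deformation field, which is what makes the canonical model animatable.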
Related papers
- Deformable 3D Gaussian Splatting for Animatable Human Avatars [50.61374254699761]
We propose a fully explicit approach to construct a digital avatar from as little as a single monocular sequence.
ParDy-Human constitutes an explicit model for realistic dynamic human avatars which requires significantly fewer training views and images.
Avatar learning is free of additional annotations such as Splat masks, and the model can be trained with variable backgrounds while inferring full-resolution images efficiently even on consumer hardware.
arXiv Detail & Related papers (2023-12-22T20:56:46Z)
- Point-Based Radiance Fields for Controllable Human Motion Synthesis [7.322100850632633]
This paper proposes a controllable human motion synthesis method for fine-level deformation based on static point-based radiance fields.
Our method exploits an explicit point cloud to train the static 3D scene and applies the deformation by encoding point cloud translations.
Our approach significantly outperforms the state of the art on fine-level complex deformations and generalizes to 3D characters other than humans.
arXiv Detail & Related papers (2023-10-05T08:27:33Z)
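The encoded-translation idea in the entry above can be sketched as a small network that predicts a per-point offset from the static point position and a deformation code. This is a minimal PyTorch illustration under assumed names and dimensions, not the paper's implementation.

import torch
import torch.nn as nn

class PointTranslationField(nn.Module):
    def __init__(self, code_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # per-point translation offset
        )

    def forward(self, points, code):
        # points: (N, 3) static point cloud; code: (code_dim,) deformation code.
        code = code.expand(points.shape[0], -1)
        return points + self.mlp(torch.cat([points, code], dim=-1))

field = PointTranslationField()
deformed = field(torch.rand(1000, 3), torch.zeros(32))
print(deformed.shape)  # torch.Size([1000, 3])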
- MoDA: Modeling Deformable 3D Objects from Casual Videos [84.29654142118018]
We propose neural dual quaternion blend skinning (NeuDBS) to achieve 3D point deformation without skin-collapsing artifacts.
To register 2D pixels across different frames, we establish correspondences between canonical feature embeddings that encode 3D points in the canonical space.
Our approach can reconstruct 3D models for humans and animals with better qualitative and quantitative performance than state-of-the-art methods.
arXiv Detail & Related papers (2023-04-17T13:49:04Z)
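Dual quaternion blend skinning, the building block behind the NeuDBS entry above, blends rigid transforms as unit dual quaternions instead of matrices, so the blended result stays a rigid motion and avoids the volume-loss ("skin-collapsing") artifacts of linear blending. The following is a minimal NumPy sketch (quaternions stored as (w, x, y, z); illustrative, not the paper's implementation).

import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dq_from_rt(q_rot, t):
    """Build a unit dual quaternion from a rotation quaternion and a translation."""
    q_dual = 0.5 * qmul(np.array([0.0, *t]), q_rot)
    return q_rot, q_dual

def dqb(weights, dqs):
    """Blend dual quaternions with skinning weights, then renormalize.
    (Assumes all rotation quaternions lie in the same hemisphere; real
    implementations flip signs before blending.)"""
    q_r = sum(w * dq[0] for w, dq in zip(weights, dqs))
    q_d = sum(w * dq[1] for w, dq in zip(weights, dqs))
    n = np.linalg.norm(q_r)
    return q_r / n, q_d / n

def dq_apply(q_r, q_d, x):
    """Transform a 3D point by a unit dual quaternion."""
    conj = q_r * np.array([1.0, -1.0, -1.0, -1.0])
    rotated = qmul(qmul(q_r, np.array([0.0, *x])), conj)[1:]
    translation = 2.0 * qmul(q_d, conj)[1:]
    return rotated + translation

# Toy usage: blend identity with a 90-degree rotation about z plus a shift.
identity = dq_from_rt(np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(3))
half = np.sqrt(0.5)
rot_z = dq_from_rt(np.array([half, 0.0, 0.0, half]), np.array([0.0, 0.0, 0.3]))
q_r, q_d = dqb([0.5, 0.5], [identity, rot_z])
print(dq_apply(q_r, q_d, np.array([1.0, 0.0, 0.0])))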
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
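The disentanglement described in the LatentHuman entry above can be pictured as an implicit function conditioned on two separate latent codes, so shape and pose can be optimized independently. A minimal PyTorch sketch (illustrative only; the actual architecture and losses differ):

import torch
import torch.nn as nn

class DisentangledImplicitBody(nn.Module):
    def __init__(self, shape_dim=64, pose_dim=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + shape_dim + pose_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 1),  # signed distance to the body surface
        )

    def forward(self, x, z_shape, z_pose):
        # x: (N, 3) query points; z_shape / z_pose: latent codes that are
        # broadcast to every query point and optimized alongside the network.
        codes = torch.cat([z_shape, z_pose]).expand(x.shape[0], -1)
        return self.mlp(torch.cat([x, codes], dim=-1))

model = DisentangledImplicitBody()
sdf = model(torch.rand(8, 3), torch.zeros(64), torch.zeros(64))
print(sdf.shape)  # torch.Size([8, 1])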
- Animatable Neural Radiance Fields for Human Body Modeling [54.41477114385557]
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video.
We introduce neural blend weight fields to produce the deformation fields.
Experiments show that our approach significantly outperforms recent human modeling methods.
arXiv Detail & Related papers (2021-05-06T17:58:13Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
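The correspondence search in the SNARF entry above can be sketched as follows: forward LBS maps canonical points to deformed space, and a canonical correspondence of a deformed point is recovered as a root of the residual d(x_c) - x_d = 0. This minimal NumPy sketch uses a damped fixed-point update that converges for mild deformations; the paper uses Broyden's method with multiple initializations to find all roots.

import numpy as np

def forward_lbs(x_c, bone_transforms, weight_fn):
    """Forward skinning: deform a canonical point with pose transforms."""
    w = weight_fn(x_c)  # (K,) skinning weights queried in canonical space
    blended = np.einsum('k,kij->ij', w, bone_transforms)
    return (blended @ np.append(x_c, 1.0))[:3]

def find_canonical(x_d, bone_transforms, weight_fn, init, iters=50, tol=1e-8):
    """Solve forward_lbs(x_c) = x_d for x_c by damped fixed-point iteration."""
    x_c = init.copy()
    for _ in range(iters):
        residual = forward_lbs(x_c, bone_transforms, weight_fn) - x_d
        if np.linalg.norm(residual) < tol:
            break
        x_c -= 0.5 * residual  # damped update; Newton/Broyden converges faster
    return x_c

# Toy usage: two bones, a mild rotation, constant skinning weights.
theta = np.deg2rad(20.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.],
              [np.sin(theta),  np.cos(theta), 0.],
              [0., 0., 1.]])
G0, G1 = np.eye(4), np.eye(4)
G1[:3, :3] = R
bones = np.stack([G0, G1])
weights = lambda x_c: np.array([0.4, 0.6])
x_c_true = np.array([0.5, 0.2, 0.0])
x_d = forward_lbs(x_c_true, bones, weights)
print(find_canonical(x_d, bones, weights, init=x_d))  # approximately x_c_true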
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.