LEAP: Learning Articulated Occupancy of People
- URL: http://arxiv.org/abs/2104.06849v1
- Date: Wed, 14 Apr 2021 13:41:56 GMT
- Title: LEAP: Learning Articulated Occupancy of People
- Authors: Marko Mihajlovic, Yan Zhang, Michael J. Black, Siyu Tang
- Abstract summary: We introduce LEAP (LEarning Articulated occupancy of People), a novel neural occupancy representation of the human body.
Given a set of bone transformations and a query point in space, LEAP first maps the query point to a canonical space via learned linear blend skinning (LBS) functions.
LEAP efficiently queries the occupancy value via an occupancy network that models accurate identity- and pose-dependent deformations in the canonical space.
- Score: 56.35797895609303
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Substantial progress has been made on modeling rigid 3D objects using deep
implicit representations. Yet, extending these methods to learn neural models
of human shape is still in its infancy. Human bodies are complex and the key
challenge is to learn a representation that generalizes such that it can
express body shape deformations for unseen subjects in unseen, highly
articulated poses. To address this challenge, we introduce LEAP
(LEarning Articulated occupancy of People), a novel neural occupancy
representation of the human body. Given a set of bone transformations (i.e.
joint locations and rotations) and a query point in space, LEAP first maps the
query point to a canonical space via learned linear blend skinning (LBS)
functions and then efficiently queries the occupancy value via an occupancy
network that models accurate identity- and pose-dependent deformations in the
canonical space. Experiments show that our canonicalized occupancy estimation
with the learned LBS functions greatly improves the generalization capability
of the learned occupancy representation across various human shapes and poses,
outperforming existing solutions in all settings.
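
To make the two-stage query concrete, the following is a minimal PyTorch-style sketch of the pipeline the abstract describes: predict per-bone skinning weights for each query point, blend the inverse bone transformations to canonicalize the point, then evaluate an occupancy network in canonical space. All module names, layer sizes, and the conditioning code here are illustrative assumptions rather than LEAP's actual architecture; in particular, blending the inverse transforms (instead of inverting the blended transform) is a simplification made for readability.

```python
import torch
import torch.nn as nn

class LEAPSketch(nn.Module):
    """Illustrative two-stage occupancy query: canonicalize a posed-space
    point via learned (inverse) LBS weights, then evaluate occupancy in
    canonical space. Sizes and conditioning are assumptions, not LEAP's."""

    def __init__(self, num_bones=24, feat_dim=64):
        super().__init__()
        # Hypothetical weight predictor: query point -> per-bone blend weights.
        self.skin_weights = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, num_bones), nn.Softmax(dim=-1),
        )
        # Hypothetical occupancy head conditioned on an identity/pose code.
        self.occupancy = nn.Sequential(
            nn.Linear(3 + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x, bone_transforms, code):
        # x: (B, N, 3) query points in posed space
        # bone_transforms: (B, K, 4, 4) per-bone rigid transforms
        # code: (B, feat_dim) identity/pose conditioning code
        w = self.skin_weights(x)                           # (B, N, K)
        inv = torch.inverse(bone_transforms)               # (B, K, 4, 4)
        # Blend inverse transforms to map posed -> canonical space.
        blended = torch.einsum('bnk,bkij->bnij', w, inv)   # (B, N, 4, 4)
        x_h = torch.cat([x, torch.ones_like(x[..., :1])], dim=-1)
        x_canon = torch.einsum('bnij,bnj->bni', blended, x_h)[..., :3]
        cond = code.unsqueeze(1).expand(-1, x_canon.shape[1], -1)
        # Occupancy is queried on the canonicalized point.
        return self.occupancy(torch.cat([x_canon, cond], dim=-1))  # (B, N, 1)

# Example call with random inputs (2 bodies, 1024 query points, 24 bones):
model = LEAPSketch()
occ = model(torch.rand(2, 1024, 3),
            torch.eye(4).repeat(2, 24, 1, 1),
            torch.zeros(2, 64))
```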
Related papers
- Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors [98.24047528960406]
We propose a new method for learning a generalized animatable neural representation from sparse multi-view imagery of multiple persons.
The learned representation can be used to synthesize novel-view images of an arbitrary person from a sparse set of cameras, and to animate the person with user-controlled poses.
arXiv Detail & Related papers (2022-08-25T07:36:46Z)
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
- imGHUM: Implicit Generative Models of 3D Human Shape and Articulated Pose [42.4185273307021]
We present imGHUM, the first holistic generative model of 3D human shape and articulated pose.
We model the full human body implicitly, as the zero-level-set of a function, without the use of an explicit template mesh.
arXiv Detail & Related papers (2021-08-24T17:08:28Z)
- Neural-GIF: Neural Generalized Implicit Functions for Animating People in Clothing [49.32522765356914]
We learn to animate people in clothing as a function of the body pose.
Every point in space is mapped to a canonical space, where a learned deformation field is applied to model non-rigid effects.
Neural-GIF can be trained on raw 3D scans and reconstructs detailed complex surface geometry and deformations.
arXiv Detail & Related papers (2021-08-19T17:25:16Z)
- Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration [67.69257782645789]
We propose piecewise transformation fields that learn 3D translation vectors to map any query point in posed space to its corresponding position in rest-pose space (a toy sketch of this posed-to-rest mapping follows this list).
We show that fitting parametric models with the poses predicted by our network results in much better registration quality, especially for extreme poses.
arXiv Detail & Related papers (2021-04-16T15:16:09Z)
- SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements [62.652588951757764]
Learning to model and reconstruct humans in clothing is challenging due to articulation, non-rigid deformation, and varying clothing types and topologies.
Recent work uses neural networks to parameterize local surface elements.
We present three key innovations: First, we deform surface elements based on a human body model.
Second, we address the limitations of existing neural surface elements by regressing local geometry from local features.
arXiv Detail & Related papers (2021-04-15T17:59:39Z)
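
The Locally Aware Piecewise Transformation Fields entry above references a sketch; here is a toy version of that posed-to-rest mapping, which several of the listed papers (Neural-GIF included) share in spirit: an MLP regresses a per-point 3D translation that carries a posed-space query point back to rest-pose space. The pose conditioning, layer sizes, and the omission of the per-body-part ("piecewise") structure are all simplifications for illustration.

```python
import torch
import torch.nn as nn

class TranslationFieldSketch(nn.Module):
    """Toy posed-to-rest-pose mapping: regress a 3D translation per query
    point, conditioned on body pose. The 'piecewise' per-part structure of
    the actual paper is omitted; this is a single global field."""

    def __init__(self, pose_dim=72):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 3),  # per-point translation vector
        )

    def forward(self, x_posed, pose):
        # x_posed: (N, 3) points in posed space; pose: (pose_dim,) body pose.
        pose_rep = pose.unsqueeze(0).expand(x_posed.shape[0], -1)
        delta = self.mlp(torch.cat([x_posed, pose_rep], dim=-1))
        return x_posed + delta  # estimated rest-pose positions
```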
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.