LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human
Bodies
- URL: http://arxiv.org/abs/2111.15113v1
- Date: Tue, 30 Nov 2021 04:10:57 GMT
- Title: LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human
Bodies
- Authors: Sandro Lombardi, Bangbang Yang, Tianxing Fan, Hujun Bao, Guofeng
Zhang, Marc Pollefeys, Zhaopeng Cui
- Abstract summary: We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
- Score: 78.17425779503047
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D representation and reconstruction of human bodies have been studied for a
long time in computer vision. Traditional methods rely mostly on parametric
statistical linear models, limiting the space of possible bodies to linear
combinations. It is only recently that some approaches try to leverage neural
implicit representations for human body modeling, and while demonstrating
impressive results, they are either limited by representation capability or not
physically meaningful and controllable. In this work, we propose a novel neural
implicit representation for the human body, which is fully differentiable and
optimizable with disentangled shape and pose latent spaces. Contrary to prior
work, our representation is designed based on the kinematic model, which makes
the representation controllable for tasks like pose animation, while
simultaneously allowing the optimization of shape and pose for tasks like 3D
fitting and pose tracking. Our model can be trained and fine-tuned directly on
non-watertight raw data with well-designed losses. Experiments demonstrate the
improved 3D reconstruction performance over SoTA approaches and show the
applicability of our method to shape interpolation, model fitting, pose
tracking, and motion retargeting.
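To make the core idea concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' code) of an implicit signed-distance network conditioned on separate shape and pose latent codes, so that either code can be optimized on its own when fitting a raw scan; all class and variable names are illustrative assumptions, and the real model's kinematic conditioning and losses are omitted.

import torch
import torch.nn as nn

class DisentangledSDF(nn.Module):
    """Signed-distance network conditioned on separate shape and pose codes."""
    def __init__(self, shape_dim=64, pose_dim=64, hidden=256):
        super().__init__()
        in_dim = 3 + shape_dim + pose_dim  # query point + shape code + pose code
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),  # signed distance to the body surface
        )

    def forward(self, points, shape_code, pose_code):
        # points: (N, 3); the two codes are broadcast to every query point.
        n = points.shape[0]
        z = torch.cat([points,
                       shape_code.expand(n, -1),
                       pose_code.expand(n, -1)], dim=-1)
        return self.net(z)

# Fitting a raw scan: freeze the network and optimize only the two latent codes,
# which is what disentangled, optimizable latent spaces allow in practice.
model = DisentangledSDF().requires_grad_(False)
shape_code = torch.zeros(64, requires_grad=True)
pose_code = torch.zeros(64, requires_grad=True)
scan_points = torch.rand(1024, 3)  # stand-in for points sampled from a scan
opt = torch.optim.Adam([shape_code, pose_code], lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    sdf = model(scan_points, shape_code, pose_code)
    loss = sdf.abs().mean()  # surface points should lie on the zero level set
    loss.backward()
    opt.step()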
Related papers
- Within the Dynamic Context: Inertia-aware 3D Human Modeling with Pose Sequence [47.16903508897047]
In this study, we show that variations in human appearance depend not only on the current frame's pose but also on past pose states.
We introduce Dyco, a novel method utilizing the delta pose sequence representation for non-rigid deformations.
In addition, our inertia-aware 3D human modeling method can, for the first time, simulate appearance changes caused by inertia at different velocities.
arXiv Detail & Related papers (2024-03-28T06:05:14Z)
- DANBO: Disentangled Articulated Neural Body Representations via Graph Neural Networks [12.132886846993108]
High-resolution models enable photo-realistic avatars but at the cost of requiring studio settings not available to end users.
Our goal is to create avatars directly from raw images without relying on expensive studio setups and surface tracking.
We introduce a three-stage method that induces two inductive biases to better disentangle pose-dependent deformation.
arXiv Detail & Related papers (2022-05-03T17:56:46Z)
- H4D: Human 4D Modeling by Learning Neural Compositional Representation [75.34798886466311]
This work presents a novel framework that can effectively learn a compact and compositional representation for dynamic humans.
A simple yet effective linear motion model is proposed to provide a rough and regularized motion estimation.
Experiments demonstrate that our method is not only effective in recovering dynamic humans with accurate motion and detailed geometry, but is also amenable to various 4D human-related tasks.
arXiv Detail & Related papers (2022-03-02T17:10:49Z)
- imGHUM: Implicit Generative Models of 3D Human Shape and Articulated Pose [42.4185273307021]
We present imGHUM, the first holistic generative model of 3D human shape and articulated pose.
We model the full human body implicitly as the zero-level set of a function, without the use of an explicit template mesh (see the mesh-extraction sketch after this list).
arXiv Detail & Related papers (2021-08-24T17:08:28Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that differ starkly from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people, given an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
- Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering [53.16864661460889]
Recent works have succeeded with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
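Several of the papers above, imGHUM and the present work included, represent the body as the zero-level set of an implicit function. As a rough illustration of how such a representation is typically turned into a mesh, the following sketch evaluates a stand-in signed distance field on a regular grid and extracts the level-0 surface with marching cubes; the sphere SDF and all names are placeholders under stated assumptions, not any paper's actual model.

import numpy as np
from skimage import measure

# Regular grid covering the region of interest.
res = 64
xs = np.linspace(-1.5, 1.5, res)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)  # (res, res, res, 3)

# Stand-in implicit function: signed distance to a unit sphere
# (a learned network would be queried here instead).
sdf = np.linalg.norm(grid, axis=-1) - 1.0

# The surface is the zero level set of the signed distance field.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0,
                                                  spacing=(xs[1] - xs[0],) * 3)
print(verts.shape, faces.shape)  # vertex and triangle arrays of the extracted mesh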