S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling
- URL: http://arxiv.org/abs/2101.06571v1
- Date: Sun, 17 Jan 2021 02:16:56 GMT
- Title: S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling
- Authors: Ze Yang, Shenlong Wang, Sivabalan Manivasagam, Zeng Huang, Wei-Chiu
Ma, Xinchen Yan, Ersin Yumer, Raquel Urtasun
- Abstract summary: We represent the pedestrian's shape, pose and skinning weights as neural implicit functions that are directly learned from data.
We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods.
- Score: 103.65625425020129
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Constructing and animating humans is an important component for building
virtual worlds in a wide variety of applications such as virtual reality or
robotics testing in simulation. As there are exponentially many variations of
humans with different shape, pose and clothing, it is critical to develop
methods that can automatically reconstruct and animate humans at scale from
real world data. Towards this goal, we represent the pedestrian's shape, pose
and skinning weights as neural implicit functions that are directly learned
from data. This representation enables us to handle a wide variety of
pedestrian shapes and poses without explicitly fitting a human parametric body
model, covering a broader range of human geometries and topologies.
We demonstrate the effectiveness of our approach on various datasets and show
that our reconstructions outperform existing state-of-the-art methods.
Furthermore, our re-animation experiments show that we can generate 3D human
animations at scale from a single RGB image (and/or an optional LiDAR sweep) as
input.
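The core idea above is that shape (e.g., an occupancy field) and per-point skinning weights are predicted by networks queried at continuous 3D locations. Below is a minimal, hypothetical PyTorch sketch of such an implicit field combined with linear blend skinning; the layer sizes, latent-code conditioning, and the `ImplicitBodyField` name are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class ImplicitBodyField(nn.Module):
    """Toy neural implicit field: maps a 3D query point (plus a per-subject
    shape code) to an occupancy value and per-bone skinning weights.
    Hypothetical layer sizes, not the architecture from the S3 paper."""
    def __init__(self, num_bones=24, code_dim=128, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.occupancy_head = nn.Linear(hidden, 1)         # inside/outside probability
        self.skinning_head = nn.Linear(hidden, num_bones)  # blend weight per bone

    def forward(self, points, shape_code):
        # points: (N, 3) query locations; shape_code: (code_dim,) subject latent
        feat = self.backbone(
            torch.cat([points, shape_code.expand(points.shape[0], -1)], dim=-1))
        occupancy = torch.sigmoid(self.occupancy_head(feat))
        weights = torch.softmax(self.skinning_head(feat), dim=-1)  # sums to 1 per point
        return occupancy, weights

# Example: query the field, then repose points with linear blend skinning
# using the predicted weights and a set of per-bone rigid transforms.
field = ImplicitBodyField()
pts = torch.rand(1024, 3)
code = torch.zeros(128)
occ, w = field(pts, code)                        # (1024, 1), (1024, 24)
bone_tf = torch.eye(4).repeat(24, 1, 1)          # identity transforms as placeholders
pts_h = torch.cat([pts, torch.ones(1024, 1)], dim=-1)  # homogeneous coordinates
posed = torch.einsum('nb,bij,nj->ni', w, bone_tf, pts_h)[:, :3]
```

Because the skinning weights come from a learned field rather than a fixed template mesh, the same querying scheme works for arbitrary human geometries and topologies, which is the property the abstract emphasizes.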
Related papers
- Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance [25.346255905155424]
We introduce a methodology for human image animation by leveraging a 3D human parametric model within a latent diffusion framework.
By using the 3D human parametric model as motion guidance, we can perform parametric shape alignment of the human body between the reference image and the source video motion.
Our approach also exhibits superior generalization capabilities on the proposed in-the-wild dataset.
arXiv Detail & Related papers (2024-03-21T18:52:58Z)
- DreamHuman: Animatable 3D Avatars from Text [41.30635787166307]
We present DreamHuman, a method to generate realistic animatable 3D human avatar models solely from textual descriptions.
Our 3D models have diverse appearance, clothing, skin tones and body shapes, and significantly outperform both generic text-to-3D approaches and previous text-based 3D avatar generators in visual fidelity.
arXiv Detail & Related papers (2023-06-15T17:58:21Z)
- AvatarGen: A 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is an unsupervised method for generating 3D-aware clothed humans with varied appearance and controllable geometry.
Our method can generate animatable 3D human avatars with high-quality appearance and geometry modeling.
It is competent for many applications, e.g., single-view reconstruction, re-animation, and text-guided synthesis/editing.
arXiv Detail & Related papers (2022-11-26T15:15:45Z)
- Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors [98.24047528960406]
We propose a new method for learning a generalized animatable neural representation from a sparse set of multi-view imagery of multiple persons.
The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and further animate them with the user's pose control.
arXiv Detail & Related papers (2022-08-25T07:36:46Z)
- AvatarGen: a 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is the first method that enables not only non-rigid human generation with diverse appearance but also full control over poses and viewpoints.
To model non-rigid dynamics, it introduces a deformation network to learn pose-dependent deformations in the canonical space.
Our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs.
arXiv Detail & Related papers (2022-08-01T01:27:02Z)
- HSPACE: Synthetic Parametric Humans Animated in Complex Environments [67.8628917474705]
We build a large-scale photo-realistic dataset, Human-SPACE, of animated humans placed in complex indoor and outdoor environments.
We combine a hundred diverse individuals of varying ages, gender, proportions, and ethnicity, with hundreds of motions and scenes, in order to generate an initial dataset of over 1 million frames.
Assets are generated automatically, at scale, and are compatible with existing real time rendering and game engines.
arXiv Detail & Related papers (2021-12-23T22:27:55Z)
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.