SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements
- URL: http://arxiv.org/abs/2104.07660v1
- Date: Thu, 15 Apr 2021 17:59:39 GMT
- Title: SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements
- Authors: Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, Michael J. Black
- Abstract summary: Learning to model and reconstruct humans in clothing is challenging due to articulation, non-rigid deformation, and varying clothing types and topologies.
Recent work uses neural networks to parameterize local surface elements.
We present three key innovations: First, we deform surface elements based on a human body model.
Second, we address the limitations of existing neural surface elements by regressing local geometry from local features.
Third, we learn a pose embedding on a 2D parameterization space that encodes posed body geometry.
- Score: 62.652588951757764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to model and reconstruct humans in clothing is challenging due to
articulation, non-rigid deformation, and varying clothing types and topologies.
To enable learning, the choice of representation is the key. Recent work uses
neural networks to parameterize local surface elements. This approach captures
locally coherent geometry and non-planar details, can deal with varying
topology, and does not require registered training data. However, naively using
such methods to model 3D clothed humans fails to capture fine-grained local
deformations and generalizes poorly. To address this, we present three key
innovations: First, we deform surface elements based on a human body model such
that large-scale deformations caused by articulation are explicitly separated
from topological changes and local clothing deformations. Second, we address
the limitations of existing neural surface elements by regressing local
geometry from local features, significantly improving the expressiveness.
Third, we learn a pose embedding on a 2D parameterization space that encodes
posed body geometry, improving generalization to unseen poses by reducing
non-local spurious correlations. We demonstrate the efficacy of our surface
representation by learning models of complex clothing from point clouds. The
clothing can change topology and deviate from the topology of the body. Once
learned, we can animate previously unseen motions, producing high-quality point
clouds, from which we generate realistic images with neural rendering. We
assess the importance of each technical contribution and show that our approach
outperforms the state-of-the-art methods in terms of reconstruction accuracy
and inference time. The code is available for research purposes at
https://qianlim.github.io/SCALE.
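The abstract's core idea can be illustrated with a minimal sketch: a clothed surface is represented as a set of local patches, each rigidly articulated by a transform taken from a posed body model, with fine non-rigid geometry regressed from a per-patch local feature. The sketch below is a hypothetical toy version (fixed linear "decoder", random stand-in transforms and features), not the SCALE implementation.

```python
import numpy as np

# Toy sketch of articulated local surface elements (assumptions: K patches,
# P points per patch, a shared 2D UV parameterization per patch).
rng = np.random.default_rng(0)

K, P, F = 16, 64, 8                      # patches, points per patch, feature dim
uv = rng.uniform(-1.0, 1.0, (P, 2))     # local UV coordinates, shared by all patches

# Per-patch rigid transforms: stand-ins for transforms driven by a posed body
# model (identity rotations and random translations here, for illustration).
R = np.stack([np.eye(3)] * K)            # (K, 3, 3)
t = rng.normal(size=(K, 3))              # (K, 3)
z = rng.normal(size=(K, F))              # (K, F) local geometric features

# Hypothetical "decoder": a fixed linear map from [uv, z_k] to a 3D offset.
# In the paper this role is played by a learned network.
W = rng.normal(size=(2 + F, 3)) * 0.01

def decode_patch(z_k, uv):
    """Regress point offsets in the patch's local frame from its feature."""
    feat = np.broadcast_to(z_k, (uv.shape[0], z_k.shape[0]))
    return np.concatenate([uv, feat], axis=1) @ W   # (P, 3)

# Assemble the point cloud: articulate each patch rigidly via the body-model
# transform, then add the locally regressed non-rigid offsets.
points = np.concatenate(
    [decode_patch(z[k], uv) @ R[k].T + t[k] for k in range(K)], axis=0
)
print(points.shape)  # (1024, 3): K * P points on the clothed surface
```

Separating the large articulated motion (the rigid per-patch transforms) from the small learned offsets is what, per the abstract, lets the local decoder focus on clothing detail rather than full-body pose.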
Related papers
- PGAHum: Prior-Guided Geometry and Appearance Learning for High-Fidelity Animatable Human Reconstruction [9.231326291897817]
We introduce PGAHum, a prior-guided geometry and appearance learning framework for high-fidelity animatable human reconstruction.
We thoroughly exploit 3D human priors in three key modules of PGAHum to achieve high-quality geometry reconstruction with intricate details and photorealistic view synthesis on unseen poses.
arXiv Detail & Related papers (2024-04-22T04:22:30Z)
- Dynamic Point Fields [30.029872787758705]
We present a dynamic point field model that combines the representational benefits of explicit point-based graphics with implicit deformation networks.
We show the advantages of our dynamic point field framework in terms of its representational power, learning efficiency, and robustness to out-of-distribution novel poses.
arXiv Detail & Related papers (2023-04-05T17:52:37Z)
- Neural-GIF: Neural Generalized Implicit Functions for Animating People in Clothing [49.32522765356914]
We learn to animate people in clothing as a function of the body pose.
We learn to map every point in the space to a canonical space, where a learned deformation field is applied to model non-rigid effects.
Neural-GIF can be trained on raw 3D scans and reconstructs detailed complex surface geometry and deformations.
arXiv Detail & Related papers (2021-08-19T17:25:16Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such representations are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
- NiLBS: Neural Inverse Linear Blend Skinning [59.22647012489496]
We introduce a method to invert the deformations produced by traditional skinning techniques, using a neural network parameterized by pose.
The ability to invert these deformations allows values (e.g., distance function, signed distance function, occupancy) to be pre-computed at rest pose, and then efficiently queried when the character is deformed.
arXiv Detail & Related papers (2020-04-06T20:46:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.