SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks
- URL: http://arxiv.org/abs/2104.03313v2
- Date: Thu, 8 Apr 2021 06:31:02 GMT
- Title: SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks
- Authors: Shunsuke Saito, Jinlong Yang, Qianli Ma, Michael J. Black
- Abstract summary: We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
- Score: 54.94737477860082
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present SCANimate, an end-to-end trainable framework that takes raw 3D
scans of a clothed human and turns them into an animatable avatar. These
avatars are driven by pose parameters and have realistic clothing that moves
and deforms naturally. SCANimate does not rely on a customized mesh template or
surface mesh registration. We observe that fitting a parametric 3D body model,
like SMPL, to a clothed human scan is tractable while surface registration of
the body topology to the scan is often not, because clothing can deviate
significantly from the body shape. We also observe that articulated
transformations are invertible, resulting in geometric cycle consistency in the
posed and unposed shapes. These observations lead us to a weakly supervised
learning method that aligns scans into a canonical pose by disentangling
articulated deformations without template-based surface registration.
Furthermore, to complete missing regions in the aligned scans while modeling
pose-dependent deformations, we introduce a locally pose-aware implicit
function that learns to complete and model geometry with learned pose
correctives. In contrast to commonly used global pose embeddings, our local
pose conditioning significantly reduces long-range spurious correlations and
improves generalization to unseen poses, especially when training data is
limited. Our method can be applied to pose-aware appearance modeling to
generate a fully textured avatar. We demonstrate our approach on various
clothing types with different amounts of training data, outperforming existing
solutions and other variants in terms of fidelity and generality in every
setting. The code is available at https://scanimate.is.tue.mpg.de.
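The two core ideas in the abstract, geometric cycle consistency from invertible articulated transformations and local pose conditioning of the implicit surface, can be illustrated with a short sketch. The following PyTorch code is a minimal illustration under stated assumptions, not the released SCANimate implementation: SkinNet, LocalPoseSDF, lbs, and cycle_loss are hypothetical names, per-joint 4x4 bone transforms are assumed to come from a fitted SMPL model, and the weight-masked pose code is just one simple way to realize local conditioning.

```python
# Minimal sketch (not the authors' released code). Assumes per-joint 4x4 bone
# transforms from a fitted SMPL body model; all class/function names here are
# hypothetical illustrations of the ideas described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_JOINTS = 24  # SMPL uses 24 joints


class SkinNet(nn.Module):
    """MLP that predicts per-point skinning weights from a 3D location."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_JOINTS),
        )

    def forward(self, pts):                       # pts: (N, 3)
        return F.softmax(self.mlp(pts), dim=-1)   # weights sum to 1 per point


def lbs(pts, weights, bone_transforms):
    """Linear blend skinning: blend per-joint rigid transforms by weights.
    pts: (N, 3), weights: (N, J), bone_transforms: (J, 4, 4) -> (N, 3)."""
    pts_h = F.pad(pts, (0, 1), value=1.0)                            # homogeneous (N, 4)
    blended = torch.einsum('nj,jab->nab', weights, bone_transforms)  # (N, 4, 4)
    return torch.einsum('nab,nb->na', blended, pts_h)[:, :3]


# One skinning field defined on the posed scan (to unpose it) and one in
# canonical space (to re-pose it).
unpose_net, repose_net = SkinNet(), SkinNet()


def cycle_loss(scan_pts, bone_transforms):
    """Unpose the scan, re-pose it, and penalize the round-trip error.
    Exploits the invertibility of articulated transformations."""
    inv_transforms = torch.inverse(bone_transforms)                  # (J, 4, 4)
    canon_pts = lbs(scan_pts, unpose_net(scan_pts), inv_transforms)
    reposed = lbs(canon_pts, repose_net(canon_pts), bone_transforms)
    return F.mse_loss(reposed, scan_pts)


class LocalPoseSDF(nn.Module):
    """Implicit surface in canonical space conditioned on a *local* pose code:
    each query point sees the pose parameters weighted by its own skinning
    weights, so distant joints have little influence (one simple way to avoid
    the long-range spurious correlations of a global pose embedding)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + NUM_JOINTS * 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, canon_pts, pose, weights):
        # canon_pts: (N, 3), pose: (J, 3) per-joint parameters, weights: (N, J)
        local_pose = (weights.unsqueeze(-1) * pose.unsqueeze(0)).reshape(len(canon_pts), -1)
        return self.mlp(torch.cat([canon_pts, local_pose], dim=-1))  # signed distance
```

In this reading, the cycle loss supervises canonicalization without surface registration, and the weight-masked pose code stands in for what the abstract calls local pose conditioning; the actual formulation should be taken from the paper and the released code at https://scanimate.is.tue.mpg.de.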
Related papers
- CloSET: Modeling Clothed Humans on Continuous Surface with Explicit
Template Decomposition [36.39531876183322]
We propose to decompose explicit garment-related templates and then add pose-dependent wrinkles to them.
To tackle the seam artifact issues in recent state-of-the-art point-based methods, we propose to learn point features on a body surface.
Our approach is validated on two existing datasets and our newly introduced dataset, showing better clothing deformation results in unseen poses.
arXiv Detail & Related papers (2023-04-06T15:50:05Z) - Neural Point-based Shape Modeling of Humans in Challenging Clothing [75.75870953766935]
Parametric 3D body models like SMPL only represent minimally-clothed people and are hard to extend to clothing.
We extend point-based methods with a coarse stage that replaces canonicalization with a learned pose-independent "coarse shape".
The approach works well for garments that both conform to, and deviate from, the body.
arXiv Detail & Related papers (2022-09-14T17:59:17Z) - Single-view 3D Body and Cloth Reconstruction under Complex Poses [37.86174829271747]
We extend existing implicit function-based models to deal with images of humans with arbitrary poses and self-occluded limbs.
We learn an implicit function that maps the input image to a 3D body shape with a low level of detail.
We then learn a displacement map, conditioned on the smoothed surface, which encodes the high-frequency details of the clothes and body.
arXiv Detail & Related papers (2022-05-09T07:34:06Z) - The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Currently, creating 3D human avatars with realistic clothing that moves naturally requires an artist.
We show that a 3D representation can capture varied topology at high resolution and can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z) - Neural-GIF: Neural Generalized Implicit Functions for Animating People
in Clothing [49.32522765356914]
We learn to animate people in clothing as a function of the body pose.
We learn to map every point in the space to a canonical space, where a learned deformation field is applied to model non-rigid effects.
Neural-GIF can be trained on raw 3D scans and reconstructs detailed complex surface geometry and deformations.
arXiv Detail & Related papers (2021-08-19T17:25:16Z) - Neural 3D Clothes Retargeting from a Single Image [91.5030622330039]
We present a method of clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.
The problem is fundamentally ill-posed, as obtaining ground-truth data is impossible, i.e., images of people wearing different 3D clothing template models in the exact same pose.
We propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching it with prescribed body-to-cloth contact points and fitting the clothing onto the unlabeled silhouette.
arXiv Detail & Related papers (2021-01-29T20:50:34Z) - Combining Implicit Function Learning and Parametric Models for 3D Human
Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces, but they lack the pose and shape control offered by parametric models.
Such control is essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)