SMPLicit: Topology-aware Generative Model for Clothed People
- URL: http://arxiv.org/abs/2103.06871v1
- Date: Thu, 11 Mar 2021 18:57:03 GMT
- Title: SMPLicit: Topology-aware Generative Model for Clothed People
- Authors: Enric Corona, Albert Pumarola, Guillem Alenyà, Gerard Pons-Moll,
Francesc Moreno-Noguer
- Abstract summary: We introduce SMPLicit, a novel generative model to jointly represent body pose, shape and clothing geometry.
In the experimental section, we demonstrate SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction in images of dressed people.
- Score: 65.84665248796615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we introduce SMPLicit, a novel generative model to jointly
represent body pose, shape and clothing geometry. In contrast to existing
learning-based approaches that require training specific models for each type
of garment, SMPLicit can represent in a unified manner different garment
topologies (e.g. from sleeveless tops to hoodies and to open jackets), while
controlling other properties like the garment size or tightness/looseness. We
show our model to be applicable to a large variety of garments including
T-shirts, hoodies, jackets, shorts, pants, skirts, shoes and even hair. The
representation flexibility of SMPLicit builds upon an implicit model
conditioned with the SMPL human body parameters and a learnable latent space
which is semantically interpretable and aligned with the clothing attributes.
The proposed model is fully differentiable, allowing its use in larger
end-to-end trainable systems. In the experimental section, we demonstrate
SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction in
images of dressed people. In both cases we go beyond the state of the art by
retrieving complex garment geometries, handling situations with multiple
clothing layers, and providing a tool for easy outfit editing. To
stimulate further research in this direction, we will make our code and model
publicly available at http://www.iri.upc.edu/people/ecorona/smplicit/.
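Because the model is fully differentiable, clothing parameters can be fitted to observations by gradient descent on an implicit-surface loss. The following is a minimal sketch of that idea only, using a hypothetical toy implicit function (a sphere SDF whose radius stands in for a clothing-size latent code); SMPLicit's actual network, conditioning on SMPL parameters, and losses are far richer.

```python
import numpy as np

def sdf(points, z):
    # Toy implicit function: signed distance to a sphere of radius z.
    # In SMPLicit, this role is played by a neural network conditioned on
    # SMPL body parameters and a learned, interpretable clothing latent.
    return np.linalg.norm(points, axis=1) - z

def fit_latent(points, z0=0.1, lr=0.5, steps=200):
    # Fit the latent z by minimizing the sum of squared SDF values at the
    # observed surface points (the differentiable-fitting idea, reduced to
    # a one-parameter gradient descent).
    z = z0
    for _ in range(steps):
        d = sdf(points, z)            # residual distance of each point to the surface
        grad = -2.0 * np.sum(d)       # d/dz of sum(d^2), since d(sdf)/dz = -1
        z -= lr * grad / len(points)
    return z

# Synthetic "scan": points sampled on a sphere of radius 0.8
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
scan = 0.8 * dirs

z_fit = fit_latent(scan)
print(round(z_fit, 3))  # recovers the generating radius, 0.8
```

The same pattern scales to the paper's setting: replace the analytic SDF with a neural implicit function and optimize the latent codes (and optionally SMPL pose/shape) against scan points or image cues.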
Related papers
- Neural-ABC: Neural Parametric Models for Articulated Body with Clothes [29.04941764336255]
We introduce Neural-ABC, a novel model that can represent clothed human bodies with disentangled latent spaces for identity, clothing, shape, and pose.
Our model excels at disentangling clothing and identity in different shape and poses while preserving the style of the clothing.
Compared to other state-of-the-art parametric models, Neural-ABC demonstrates powerful advantages in the reconstruction of clothed human bodies.
arXiv Detail & Related papers (2024-04-06T16:29:10Z)
- Neural Point-based Shape Modeling of Humans in Challenging Clothing [75.75870953766935]
Parametric 3D body models like SMPL only represent minimally-clothed people and are hard to extend to clothing.
We extend point-based methods with a coarse stage that replaces canonicalization with a learned pose-independent "coarse shape".
The approach works well for garments that both conform to, and deviate from, the body.
arXiv Detail & Related papers (2022-09-14T17:59:17Z)
- gDNA: Towards Generative Detailed Neural Avatars [94.9804106939663]
We show that our model is able to generate natural human avatars wearing diverse and detailed clothing.
Our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
arXiv Detail & Related papers (2022-01-11T18:46:38Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach can achieve better performance compared with the state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Creating 3D human avatars with realistic clothing that moves naturally currently requires an artist.
We show that a 3D representation can capture varied topology at high resolution and that it can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.