Neural Point-based Shape Modeling of Humans in Challenging Clothing
- URL: http://arxiv.org/abs/2209.06814v1
- Date: Wed, 14 Sep 2022 17:59:17 GMT
- Title: Neural Point-based Shape Modeling of Humans in Challenging Clothing
- Authors: Qianli Ma, Jinlong Yang, Michael J. Black, Siyu Tang
- Abstract summary: Parametric 3D body models like SMPL only represent minimally-clothed people and are hard to extend to clothing.
We extend point-based methods with a coarse stage that replaces canonicalization with a learned pose-independent "coarse shape".
The approach works well for garments that both conform to, and deviate from, the body.
- Score: 75.75870953766935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parametric 3D body models like SMPL only represent minimally-clothed people
and are hard to extend to clothing because they have a fixed mesh topology and
resolution. To address these limitations, recent work uses implicit surfaces or
point clouds to model clothed bodies. While not limited by topology, such
methods still struggle to model clothing that deviates significantly from the
body, such as skirts and dresses. This is because they rely on the body to
canonicalize the clothed surface by reposing it to a reference shape.
Unfortunately, this process is poorly defined when clothing is far from the
body. Additionally, they use linear blend skinning to pose the body and the
skinning weights are tied to the underlying body parts. In contrast, we model
the clothing deformation in a local coordinate space without canonicalization.
We also relax the skinning weights to let multiple body parts influence the
surface. Specifically, we extend point-based methods with a coarse stage that
replaces canonicalization with a learned pose-independent "coarse shape" that
can capture the rough surface geometry of clothing like skirts. We then refine
this using a network that infers the linear blend skinning weights and pose
dependent displacements from the coarse representation. The approach works well
for garments that both conform to, and deviate from, the body. We demonstrate
the usefulness of our approach by learning person-specific avatars from
examples and then show how they can be animated in new poses and motions. We
also show that the method can learn directly from raw scans with missing data,
greatly simplifying the process of creating realistic avatars. Code is
available for research purposes at
https://qianlim.github.io/SkiRT.
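The abstract's key mechanism is linear blend skinning (LBS) with relaxed weights, so that multiple body parts can influence each surface point. The sketch below is a minimal, illustrative implementation of plain LBS over a point set; the function name, array shapes, and numpy formulation are assumptions for illustration, not the paper's actual code.

```python
import numpy as np

def linear_blend_skinning(points, weights, rotations, translations):
    """Pose points by blending per-part rigid transforms (illustrative LBS).

    points:       (N, 3) coarse/canonical point positions
    weights:      (N, K) skinning weights; each row sums to 1 and may
                  spread mass over several parts (the "relaxed" weights)
    rotations:    (K, 3, 3) per-body-part rotation matrices
    translations: (K, 3) per-body-part translations
    returns:      (N, 3) posed point positions
    """
    # Apply every part's rigid transform to every point: (K, N, 3)
    transformed = np.einsum('kij,nj->kni', rotations, points) + translations[:, None, :]
    # Blend the K candidate positions with per-point weights: (N, 3)
    return np.einsum('nk,kni->ni', weights, transformed)
```

With identity rotations and zero translations the points are unchanged; a point with weight split 50/50 between two parts lands halfway between the two parts' transforms of it, which is exactly the behavior relaxed weights exploit for loose clothing.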
Related papers
- PocoLoco: A Point Cloud Diffusion Model of Human Shape in Loose Clothing [97.83361232792214]
PocoLoco is the first template-free, point-based, pose-conditioned generative model for 3D humans in loose clothing.
We formulate avatar clothing deformation as a conditional point-cloud generation task within the denoising diffusion framework.
We release a dataset of two subjects performing various poses in loose clothing with a total of 75K point clouds.
arXiv Detail & Related papers (2024-11-06T20:42:13Z) - CloSET: Modeling Clothed Humans on Continuous Surface with Explicit Template Decomposition [36.39531876183322]
We propose to decompose explicit garment-related templates and then add pose-dependent wrinkles to them.
To tackle the seam artifact issues in recent state-of-the-art point-based methods, we propose to learn point features on a body surface.
Our approach is validated on two existing datasets and our newly introduced dataset, showing better clothing deformation results in unseen poses.
arXiv Detail & Related papers (2023-04-06T15:50:05Z) - Capturing and Animation of Body and Clothing from Monocular Video [105.87228128022804]
We present SCARF, a hybrid model combining a mesh-based body with a neural radiance field.
Integrating the mesh into the rendering enables us to optimize SCARF directly from monocular videos.
We demonstrate that SCARF's clothing has higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.
arXiv Detail & Related papers (2022-10-04T19:34:05Z) - Significance of Skeleton-based Features in Virtual Try-On [3.7552180803118325]
The idea of Virtual Try-On (VTON) benefits e-retailing by giving a user the convenience of trying on clothing from the comfort of their home.
Most existing VTON methods produce inconsistent results when a person poses with their arms folded.
We propose two learning-based modules: a synthesizer network and a mask prediction network.
arXiv Detail & Related papers (2022-08-17T05:24:03Z) - The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Currently it requires an artist to create 3D human avatars with realistic clothing that can move naturally.
We show that a 3D representation can capture varied topology at high resolution and can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z) - SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z) - SMPLicit: Topology-aware Generative Model for Clothed People [65.84665248796615]
We introduce SMPLicit, a novel generative model to jointly represent body pose, shape and clothing geometry.
In the experimental section, we demonstrate SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction in images of dressed people.
arXiv Detail & Related papers (2021-03-11T18:57:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.