CloSET: Modeling Clothed Humans on Continuous Surface with Explicit
Template Decomposition
- URL: http://arxiv.org/abs/2304.03167v1
- Date: Thu, 6 Apr 2023 15:50:05 GMT
- Authors: Hongwen Zhang, Siyou Lin, Ruizhi Shao, Yuxiang Zhang, Zerong Zheng,
Han Huang, Yandong Guo, Yebin Liu
- Abstract summary: We propose to decompose explicit garment-related templates and then add pose-dependent wrinkles to them.
To tackle the seam artifact issues in recent state-of-the-art point-based methods, we propose to learn point features on a body surface.
Our approach is validated on two existing datasets and our newly introduced dataset, showing better clothing deformation results in unseen poses.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating animatable avatars from static scans requires the modeling of
clothing deformations in different poses. Existing learning-based methods
typically add pose-dependent deformations on top of a minimally-clothed mesh
template or a learned implicit template, which either limits the capture of
fine details or hinders end-to-end learning. In this paper, we revisit point-based
solutions and propose to decompose explicit garment-related templates and then
add pose-dependent wrinkles to them. In this way, the clothing deformations are
disentangled such that the pose-dependent wrinkles can be better learned and
applied to unseen poses. Additionally, to tackle the seam artifact issues in
recent state-of-the-art point-based methods, we propose to learn point features
on a body surface, which establishes a continuous and compact feature space to
capture the fine-grained and pose-dependent clothing geometry. To facilitate
the research in this field, we also introduce a high-quality scan dataset of
humans in real-world clothing. Our approach is validated on two existing
datasets and our newly introduced dataset, showing better clothing deformation
results in unseen poses. The project page with code and dataset can be found at
https://www.liuyebin.com/closet.
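
A minimal sketch may make the decomposition described above concrete: a
pose-independent, garment-related template displacement and a pose-dependent
wrinkle displacement are decoded from features attached to points on the body
surface, then summed. The module names, shapes, and the reduction of the
learned surface feature to an MLP over surface coordinates are illustrative
assumptions, not CloSET's actual architecture.

```python
# Hedged sketch of the template/wrinkle decomposition described in the
# abstract. All module names and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class DecomposedClothingModel(nn.Module):
    def __init__(self, feat_dim=64, pose_dim=72):
        super().__init__()
        # Continuous per-point feature learned on the body surface
        # (here simplified to an MLP over surface coordinates).
        self.surface_feat = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        # Pose-independent, garment-related template displacement.
        self.template_head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 3))
        # Pose-dependent wrinkle displacement, conditioned on body pose.
        self.wrinkle_head = nn.Sequential(
            nn.Linear(feat_dim + pose_dim, 128), nn.ReLU(), nn.Linear(128, 3))

    def forward(self, surf_pts, pose):
        # surf_pts: (N, 3) points on the minimally-clothed body surface
        # pose: (pose_dim,) body pose parameters (e.g., SMPL axis-angle)
        feat = self.surface_feat(surf_pts)              # (N, feat_dim)
        template = self.template_head(feat)             # coarse garment offset
        pose_rep = pose.expand(surf_pts.shape[0], -1)   # (N, pose_dim)
        wrinkle = self.wrinkle_head(torch.cat([feat, pose_rep], dim=-1))
        # Disentangled deformation: template plus pose-dependent wrinkles.
        return surf_pts + template + wrinkle

model = DecomposedClothingModel()
pts = torch.rand(1024, 3)        # sampled body-surface points
pose = torch.zeros(72)           # rest pose, SMPL-style parameters
clothed_pts = model(pts, pose)   # (1024, 3) displaced point set
```

Because only the wrinkle head sees the pose, the template stays
pose-independent, which is what allows the pose-dependent part to be learned
separately and applied to unseen poses.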
Related papers
- PocoLoco: A Point Cloud Diffusion Model of Human Shape in Loose Clothing (2024-11-06)
PocoLoco is the first template-free, point-based, pose-conditioned generative model for 3D humans in loose clothing.
We formulate avatar clothing deformation as a conditional point-cloud generation task within the denoising diffusion framework.
We release a dataset of two subjects performing various poses in loose clothing with a total of 75K point clouds.
- Neural Point-based Shape Modeling of Humans in Challenging Clothing (2022-09-14)
Parametric 3D body models like SMPL only represent minimally-clothed people and are hard to extend to clothing.
We extend point-based methods with a coarse stage that replaces canonicalization with a learned, pose-independent "coarse shape".
The approach works well for garments that both conform to, and deviate from, the body.
- Learning Implicit Templates for Point-Based Clothed Human Modeling (2022-07-14)
We present FITE, a framework for modeling human avatars in clothing.
Our framework first learns implicit surface templates representing the coarse clothing topology.
We employ the templates to guide the generation of point sets which further capture pose-dependent clothing deformations such as wrinkles.
- Neural-GIF: Neural Generalized Implicit Functions for Animating People in Clothing (2021-08-19)
We learn to animate people in clothing as a function of the body pose.
We learn to map every point in space to a canonical space, where a learned deformation field models non-rigid effects; a sketch of this pattern follows the list below.
Neural-GIF can be trained on raw 3D scans and reconstructs detailed complex surface geometry and deformations.
- SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements (2021-04-15)
Learning to model and reconstruct humans in clothing is challenging due to articulation, non-rigid deformation, and varying clothing types and topologies.
Recent work uses neural networks to parameterize local surface elements.
Surface elements are deformed based on a human body model, and local geometry is regressed from local features, addressing the limitations of existing neural surface elements.
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks (2021-04-07)
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
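
As referenced in the Neural-GIF entry above, the canonical-mapping pattern it
describes can be sketched in a few lines. This is a minimal illustration,
assuming PyTorch-style MLPs for the deformation field and the canonical signed
distance field; the rigid un-posing step is omitted for brevity, and all names
and shapes are assumptions rather than the paper's implementation.

```python
# Hedged sketch of the canonical-mapping pattern from the Neural-GIF summary.
# All networks, names, and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class CanonicalMapper(nn.Module):
    def __init__(self, pose_dim=72):
        super().__init__()
        # Learned deformation field: pose-dependent, non-rigid offset.
        self.deform = nn.Sequential(
            nn.Linear(3 + pose_dim, 128), nn.ReLU(), nn.Linear(128, 3))
        # Canonical shape represented as a signed distance field.
        self.canonical_sdf = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, x_posed, pose):
        # x_posed: (N, 3) query points in posed space; pose: (pose_dim,)
        pose_rep = pose.expand(x_posed.shape[0], -1)
        # Map to canonical space (rigid un-posing omitted here), then apply
        # the learned non-rigid deformation field.
        x_canon = x_posed + self.deform(torch.cat([x_posed, pose_rep], dim=-1))
        return self.canonical_sdf(x_canon)   # (N, 1) signed distances

mapper = CanonicalMapper()
sdf = mapper(torch.rand(2048, 3), torch.zeros(72))   # (2048, 1)
```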