Learning Implicit Templates for Point-Based Clothed Human Modeling
- URL: http://arxiv.org/abs/2207.06955v1
- Date: Thu, 14 Jul 2022 14:25:36 GMT
- Title: Learning Implicit Templates for Point-Based Clothed Human Modeling
- Authors: Siyou Lin, Hongwen Zhang, Zerong Zheng, Ruizhi Shao and Yebin Liu
- Abstract summary: We present FITE, a framework for modeling human avatars in clothing.
Our framework first learns implicit surface templates representing the coarse clothing topology.
We employ the templates to guide the generation of point sets which further capture pose-dependent clothing deformations such as wrinkles.
- Score: 33.6247548142638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present FITE, a First-Implicit-Then-Explicit framework for modeling human
avatars in clothing. Our framework first learns implicit surface templates
representing the coarse clothing topology, and then employs the templates to
guide the generation of point sets which further capture pose-dependent
clothing deformations such as wrinkles. Our pipeline incorporates the merits of
both implicit and explicit representations, namely, the ability to handle
varying topology and the ability to efficiently capture fine details. We also
propose diffused skinning to facilitate template training especially for loose
clothing, and projection-based pose-encoding to extract pose information from
mesh templates without a predefined UV map or connectivity. Our code is publicly
available at https://github.com/jsnln/fite.
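For concreteness, here is a minimal PyTorch sketch of the first-implicit-then-explicit idea: an implicit MLP represents the canonical clothing template, and a second network displaces points sampled from that template to add pose-dependent detail. This is an illustrative reading of the abstract, not the released code; all module names, layer sizes, and the SMPL-style 72-dim pose vector are assumptions.

```python
# Minimal sketch (not the authors' code). Stage 1: an MLP predicts a canonical
# signed distance field as the clothing template. Stage 2: a second network
# displaces points sampled from that template to add pose-dependent detail.
import torch
import torch.nn as nn

class TemplateSDF(nn.Module):
    """Implicit template: maps a canonical-space point to a signed distance."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):          # x: (N, 3) canonical-space points
        return self.net(x)         # (N, 1) signed distances

class PointDisplacer(nn.Module):
    """Explicit stage: predicts per-point offsets from a pose code."""
    def __init__(self, pose_dim=72, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts, pose):  # pts: (N, 3), pose: (pose_dim,)
        cond = pose.expand(pts.shape[0], -1)
        return pts + self.net(torch.cat([pts, cond], dim=-1))

template = TemplateSDF()
displacer = PointDisplacer()
pts = torch.randn(1024, 3)         # stand-in for points sampled from the template
                                   # surface (e.g., extracted via marching cubes)
pose = torch.zeros(72)             # stand-in for an SMPL-style pose vector
sdf_vals = template(pts)           # query the implicit template
detailed = displacer(pts, pose)    # (1024, 3) pose-dependent point set
```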
Related papers
- PocoLoco: A Point Cloud Diffusion Model of Human Shape in Loose Clothing [97.83361232792214]
PocoLoco is the first template-free, point-based, pose-conditioned generative model for 3D humans in loose clothing.
We formulate avatar clothing deformation as a conditional point-cloud generation task within the denoising diffusion framework.
We release a dataset of two subjects performing various poses in loose clothing with a total of 75K point clouds.
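The conditional point-cloud diffusion formulation can be made concrete with a standard DDPM training step: noise a clean scan at a random timestep and train a denoiser, conditioned on the pose, to predict that noise. The toy per-point MLP denoiser, the 72-dim pose code, and the schedule below are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of one DDPM-style training step for pose-conditioned
# point-cloud generation (illustrative only).
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # standard DDPM schedule

class Denoiser(nn.Module):
    def __init__(self, pose_dim=72, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + pose_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x_t, pose, t):   # x_t: (N, 3), pose: (pose_dim,), t: (1,)
        n = x_t.shape[0]
        cond = torch.cat([pose.expand(n, -1), t.float().expand(n, 1) / T], dim=-1)
        return self.net(torch.cat([x_t, cond], dim=-1))   # predicted noise, (N, 3)

def training_step(model, x0, pose):
    """x0: (N, 3) clean scan points; pose: (72,) conditioning pose."""
    t = torch.randint(0, T, (1,))
    eps = torch.randn_like(x0)
    a = alphas_bar[t].sqrt()
    s = (1.0 - alphas_bar[t]).sqrt()
    x_t = a * x0 + s * eps                        # forward diffusion q(x_t | x0)
    return nn.functional.mse_loss(model(x_t, pose, t), eps)

loss = training_step(Denoiser(), torch.randn(2048, 3), torch.zeros(72))
loss.backward()
```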
arXiv Detail & Related papers (2024-11-06T20:42:13Z) - FreeSeg-Diff: Training-Free Open-Vocabulary Segmentation with Diffusion Models [56.71672127740099]
We focus on the task of image segmentation, which is traditionally solved by training models on closed-vocabulary datasets.
We leverage several relatively small, open-source foundation models for zero-shot open-vocabulary segmentation.
Our approach (dubbed FreeSeg-Diff), which does not rely on any training, outperforms many training-based approaches on both Pascal VOC and COCO datasets.
arXiv Detail & Related papers (2024-03-29T10:38:25Z) - A Two-stage Personalized Virtual Try-on Framework with Shape Control and
Texture Guidance [7.302929117437442]
This paper proposes a personalized virtual try-on model (PE-VITON) that uses two stages, shape control and texture guidance, to decouple clothing attributes.
The proposed model addresses common failure modes of traditional try-on methods: poorly reproduced clothing folds, degraded generation under complex human poses, blurred clothing edges, and unclear texture styles.
arXiv Detail & Related papers (2023-12-24T13:32:55Z) - StableVITON: Learning Semantic Correspondence with Latent Diffusion
Model for Virtual Try-On [35.227896906556026]
Given a clothing image and a person image, image-based virtual try-on aims to generate a customized image that appears natural and accurately reflects the characteristics of the clothing image.
In this work, we aim to expand the applicability of the pre-trained diffusion model so that it can be utilized independently for the virtual try-on task.
Our proposed zero cross-attention blocks not only preserve the clothing details by learning the semantic correspondence but also generate high-fidelity images by utilizing the inherent knowledge of the pre-trained model in the warping process.
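A minimal sketch of what a zero-initialized cross-attention block can look like: the output projection starts at zero, so the block is a no-op at initialization and the pretrained diffusion model is preserved until training begins. This follows the general zero-initialization idea; the paper's exact design may differ, and the feature dimension here is an assumption.

```python
# Minimal sketch of a zero-initialized cross-attention block (illustrative only).
import torch
import torch.nn as nn

class ZeroCrossAttention(nn.Module):
    def __init__(self, dim=320, heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj_out = nn.Linear(dim, dim)
        nn.init.zeros_(self.proj_out.weight)   # zero init: the block contributes
        nn.init.zeros_(self.proj_out.bias)     # nothing at the start of training

    def forward(self, x, cloth_feat):
        # x: (B, L, dim) U-Net features; cloth_feat: (B, M, dim) clothing features
        h, _ = self.attn(self.norm(x), cloth_feat, cloth_feat)
        return x + self.proj_out(h)            # residual: exactly x at initialization

block = ZeroCrossAttention()
x = torch.randn(1, 64, 320)
cloth = torch.randn(1, 77, 320)
assert torch.allclose(block(x, cloth), x)      # no-op before training
```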
arXiv Detail & Related papers (2023-12-04T08:27:59Z) - CloSET: Modeling Clothed Humans on Continuous Surface with Explicit
Template Decomposition [36.39531876183322]
We propose to decompose explicit garment-related templates and then add pose-dependent wrinkles to them.
To tackle the seam artifact issues in recent state-of-the-art point-based methods, we propose to learn point features on a body surface.
Our approach is validated on two existing datasets and our newly introduced dataset, showing better clothing deformation results in unseen poses.
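One seam-free way to realize "point features on a body surface" is to attach a learnable feature to each body-template vertex and query arbitrary surface points by barycentric interpolation, which is continuous across the surface by construction. The sketch below is a hypothetical illustration under that assumption, not necessarily the paper's mechanism.

```python
# Minimal sketch: learnable per-vertex features queried by barycentric
# interpolation on a body template (illustrative only).
import torch
import torch.nn as nn

class SurfaceFeatures(nn.Module):
    def __init__(self, num_verts=6890, feat_dim=64):  # 6890 = SMPL vertex count
        super().__init__()
        self.feats = nn.Parameter(torch.randn(num_verts, feat_dim) * 0.01)

    def forward(self, face_vids, bary):
        # face_vids: (N, 3) vertex ids of the triangle each point lies on
        # bary:      (N, 3) barycentric coordinates within that triangle
        corner = self.feats[face_vids]                # (N, 3, feat_dim)
        return (bary.unsqueeze(-1) * corner).sum(1)   # (N, feat_dim), seam-free

surf = SurfaceFeatures()
face_vids = torch.randint(0, 6890, (1024, 3))
bary = torch.softmax(torch.rand(1024, 3), dim=-1)     # valid barycentric weights
f = surf(face_vids, bary)                             # per-point surface features
```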
arXiv Detail & Related papers (2023-04-06T15:50:05Z) - Neural Point-based Shape Modeling of Humans in Challenging Clothing [75.75870953766935]
Parametric 3D body models like SMPL only represent minimally-clothed people and are hard to extend to clothing.
We extend point-based methods with a coarse stage that replaces canonicalization with a learned pose-independent "coarse shape".
The approach works well for garments that both conform to, and deviate from, the body.
arXiv Detail & Related papers (2022-09-14T17:59:17Z) - Neural-GIF: Neural Generalized Implicit Functions for Animating People
in Clothing [49.32522765356914]
We learn to animate people in clothing as a function of the body pose.
We learn to map every point in space to a canonical space, where a learned deformation field is applied to model non-rigid effects.
Neural-GIF can be trained on raw 3D scans and reconstructs detailed complex surface geometry and deformations.
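A minimal sketch of the canonical-mapping idea: a pose-conditioned network maps each query point to canonical space, a deformation field adds non-rigid offsets there, and a canonical shape network is evaluated at the result. The module names, the SDF output, and all sizes are illustrative assumptions.

```python
# Minimal sketch of canonical mapping plus a deformation field (illustrative only).
import torch
import torch.nn as nn

def mlp(din, dout, hidden=256):
    return nn.Sequential(nn.Linear(din, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, dout))

class CanonicalShape(nn.Module):
    def __init__(self, pose_dim=72):
        super().__init__()
        self.backmap = mlp(3 + pose_dim, 3)   # posed point -> canonical point
        self.deform = mlp(3 + pose_dim, 3)    # pose-dependent non-rigid offset
        self.sdf = mlp(3, 1)                  # shape in canonical space

    def forward(self, x, pose):               # x: (N, 3) queries in posed space
        cond = pose.expand(x.shape[0], -1)
        x_can = self.backmap(torch.cat([x, cond], dim=-1))
        x_can = x_can + self.deform(torch.cat([x_can, cond], dim=-1))
        return self.sdf(x_can)                # (N, 1) signed distance

model = CanonicalShape()
d = model(torch.randn(4096, 3), torch.zeros(72))
```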
arXiv Detail & Related papers (2021-08-19T17:25:16Z) - SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z) - Learning Generative Models of Textured 3D Meshes from Real-World Images [26.353307246909417]
We propose a GAN framework for generating textured triangle meshes without relying on annotations such as ground-truth keypoints.
We show that the performance of our approach is on par with prior work that relies on ground-truth keypoints.
arXiv Detail & Related papers (2021-03-29T14:07:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.