SIZER: A Dataset and Model for Parsing 3D Clothing and Learning Size Sensitive 3D Clothing
- URL: http://arxiv.org/abs/2007.11610v1
- Date: Wed, 22 Jul 2020 18:13:24 GMT
- Title: SIZER: A Dataset and Model for Parsing 3D Clothing and Learning Size Sensitive 3D Clothing
- Authors: Garvita Tiwari, Bharat Lal Bhatnagar, Tony Tung, Gerard Pons-Moll
- Abstract summary: We introduce SizerNet to predict 3D clothing conditioned on human body shape and garment size parameters.
We also introduce ParserNet to infer garment meshes and shape under clothing with personal details in a single pass from an input mesh.
- Score: 50.63492603374867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While models of 3D clothing learned from real data exist, no method can
predict clothing deformation as a function of garment size. In this paper, we
introduce SizerNet to predict 3D clothing conditioned on human body shape and
garment size parameters, and ParserNet to infer garment meshes and shape under
clothing with personal details in a single pass from an input mesh. SizerNet
allows us to estimate and visualize the dressing effect of a garment in various
sizes, and ParserNet allows editing the clothing of an input mesh directly,
removing the need for scan segmentation, which is a challenging problem in
itself. To learn these models, we introduce the SIZER dataset of clothing size
variation, which includes 100 different subjects wearing casual clothing items
in various sizes, totaling approximately 2000 scans. This dataset includes
the scans, registrations to the SMPL model, scans segmented into clothing parts,
and garment category and size labels. Our experiments show better parsing accuracy
and size prediction than baseline methods trained on SIZER. The code, model and
dataset will be released for research purposes.
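The abstract does not detail the architecture, but as a rough, hypothetical sketch of what a size-conditioned garment model in the spirit of SizerNet could look like (the class name, layer sizes, and vertex count below are illustrative assumptions, not the authors' design):

```python
import torch
import torch.nn as nn

class SizeConditionedGarmentNet(nn.Module):
    """Hypothetical sketch: encode garment geometry, condition on body
    shape (SMPL betas) and a garment-size scalar, decode new vertices."""

    def __init__(self, n_verts=7000, shape_dim=10, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_verts * 3, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # latent garment code + SMPL shape betas + one size scalar
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + shape_dim + 1, 512), nn.ReLU(),
            nn.Linear(512, n_verts * 3),
        )

    def forward(self, garment_verts, betas, size):
        z = self.encoder(garment_verts.flatten(1))
        cond = torch.cat([z, betas, size.unsqueeze(-1)], dim=-1)
        return self.decoder(cond).view(-1, garment_verts.shape[1], 3)

# Dummy usage: sweep the size scalar to visualize the dressing effect.
verts = torch.randn(2, 7000, 3)   # garment vertex positions
betas = torch.randn(2, 10)        # SMPL body-shape coefficients
size = torch.tensor([0.5, 1.0])   # normalized garment-size parameter
resized = SizeConditionedGarmentNet()(verts, betas, size)  # (2, 7000, 3)
```

The design point mirrored here is that garment size enters as an explicit conditioning variable, so sweeping the size scalar while keeping body shape fixed is what enables visualizing a garment in various sizes.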
Related papers
- GarmentCodeData: A Dataset of 3D Made-to-Measure Garments With Sewing Patterns [18.513707884523072]
We present the first large-scale synthetic dataset of 3D made-to-measure garments with sewing patterns.
GarmentCodeData contains 115,000 data points that cover a variety of designs in many common garment categories.
We propose an automatic, open-source 3D garment draping pipeline based on a fast XPBD simulator (a generic XPBD step is sketched after this list).
arXiv Detail & Related papers (2024-05-27T19:14:46Z) - The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Currently, creating 3D human avatars with realistic clothing that moves naturally requires an artist.
We show that a 3D representation can capture varied topology at high resolution and that it can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z) - Shape Controllable Virtual Try-on for Underwear Models [0.0]
We propose a Shape Controllable Virtual Try-On Network (SC-VTON) to dress clothing for underwear models.
SC-VTON integrates model and clothing information to generate a warped clothing image.
Our method can generate high-resolution results with detailed textures.
arXiv Detail & Related papers (2021-07-28T04:01:01Z) - SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z) - SMPLicit: Topology-aware Generative Model for Clothed People [65.84665248796615]
We introduce SMPLicit, a novel generative model to jointly represent body pose, shape and clothing geometry.
In the experimental section, we demonstrate SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction in images of dressed people.
arXiv Detail & Related papers (2021-03-11T18:57:03Z) - Neural 3D Clothes Retargeting from a Single Image [91.5030622330039]
We present a method for clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.
The problem is fundamentally ill-posed, as attaining ground-truth data is impossible, i.e., images of people wearing different 3D clothing template models at the exact same pose.
We propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching with prescribed body-to-cloth contact points and fitting the clothing silhouette onto the unlabeled silhouette.
arXiv Detail & Related papers (2021-01-29T20:50:34Z) - Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction
from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.