DrapeNet: Garment Generation and Self-Supervised Draping
- URL: http://arxiv.org/abs/2211.11277v3
- Date: Wed, 22 Mar 2023 14:20:14 GMT
- Title: DrapeNet: Garment Generation and Self-Supervised Draping
- Authors: Luca De Luigi, Ren Li, Benoît Guillard, Mathieu Salzmann, and Pascal Fua
- Abstract summary: We rely on self-supervision to train a single network to drape multiple garments.
This is achieved by predicting a 3D deformation field conditioned on the latent codes of a generative network.
Our pipeline can generate and drape previously unseen garments of any topology.
- Score: 95.0315186890655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent approaches to drape garments quickly over arbitrary human bodies
leverage self-supervision to eliminate the need for large training sets.
However, they are designed to train one network per clothing item, which
severely limits their generalization abilities. In our work, we rely on
self-supervision to train a single network to drape multiple garments. This is
achieved by predicting a 3D deformation field conditioned on the latent codes
of a generative network, which models garments as unsigned distance fields. Our
pipeline can generate and drape previously unseen garments of any topology,
whose shape can be edited by manipulating their latent codes. Being fully
differentiable, our formulation makes it possible to recover accurate 3D models
of garments from partial observations -- images or 3D scans -- via gradient
descent. Our code is publicly available at
https://github.com/liren2515/DrapeNet .
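As a rough illustration of the pipeline described in the abstract, the sketch below shows a latent-conditioned unsigned distance field (UDF) for garment generation, a deformation (draping) field conditioned on the same latent code and on body parameters, and a gradient-descent loop that recovers a latent code from a partial scan. This is a minimal PyTorch-style sketch only: the class and function names (GarmentUDF, DrapingField, fit_latent_to_scan), the network sizes, and the fitting loss are hypothetical placeholders and do not correspond to the released DrapeNet code.

```python
# Hypothetical sketch of the components described in the abstract; not the DrapeNet API.
import torch
import torch.nn as nn


class GarmentUDF(nn.Module):
    """Maps a garment latent code z and 3D query points to unsigned distances to the surface."""

    def __init__(self, latent_dim=32, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # unsigned distances are non-negative
        )

    def forward(self, z, points):
        # z: (latent_dim,), points: (N, 3) -> (N,) unsigned distances
        z_exp = z.unsqueeze(0).expand(points.shape[0], -1)
        return self.mlp(torch.cat([z_exp, points], dim=-1)).squeeze(-1)


class DrapingField(nn.Module):
    """Predicts a per-point 3D displacement conditioned on the garment latent code
    and body parameters (e.g. SMPL pose/shape), returning the draped points."""

    def __init__(self, latent_dim=32, body_dim=82, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + body_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, z, body_params, points):
        z_exp = z.unsqueeze(0).expand(points.shape[0], -1)
        b_exp = body_params.unsqueeze(0).expand(points.shape[0], -1)
        return points + self.mlp(torch.cat([z_exp, b_exp, points], dim=-1))


def fit_latent_to_scan(udf_net, scan_points, latent_dim=32, steps=200, lr=1e-2):
    """Recover a garment latent code from a partial 3D scan by gradient descent:
    points observed on the garment should have (near-)zero unsigned distance."""
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = udf_net(z, scan_points).mean()  # drive the UDF at observed points to zero
        loss.backward()
        opt.step()
    return z.detach()


if __name__ == "__main__":
    udf_net, draper = GarmentUDF(), DrapingField()
    scan = torch.randn(1024, 3) * 0.1      # stand-in for a partial garment scan
    body = torch.zeros(82)                 # stand-in for SMPL pose/shape parameters
    z = fit_latent_to_scan(udf_net, scan)  # recover the garment latent code
    draped = draper(z, body, scan)         # drape the recovered garment points
    print(draped.shape)                    # torch.Size([1024, 3])
```

Because every step is differentiable, the data term in the fitting loop could just as well be an image-based loss instead of scan points, which is the property the abstract exploits for recovering garments from partial observations.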
Related papers
- Garment3DGen: 3D Garment Stylization and Texture Generation [11.836357439129301]
Garment3DGen is a new method to synthesize 3D garment assets from a base mesh given a single input image as guidance.
We leverage the recent progress of image-to-3D diffusion methods to generate 3D garment geometries.
We generate high-fidelity texture maps that are globally and locally consistent and faithfully capture the input guidance.
arXiv Detail & Related papers (2024-03-27T17:59:33Z)
- Layered 3D Human Generation via Semantic-Aware Diffusion Model [63.459666003261276]
We propose a text-driven layered 3D human generation framework based on a novel semantic-aware diffusion model.
To keep the generated clothing consistent with the target text, we propose a semantic-confidence strategy for clothing.
To match the clothing with different body shapes, we propose a SMPL-driven implicit field deformation network.
arXiv Detail & Related papers (2023-12-10T07:34:43Z)
- A Generative Multi-Resolution Pyramid and Normal-Conditioning 3D Cloth Draping [37.77353302404437]
We build a conditional variational autoencoder for 3D garment generation and draping.
We propose a pyramid network to add garment details progressively in a canonical space.
Our results on two public datasets, CLOTH3D and CAPE, show that our model is robust and controllable in terms of detail generation.
arXiv Detail & Related papers (2023-11-05T16:12:48Z)
- Garment4D: Garment Reconstruction from Point Cloud Sequences [12.86951061306046]
Learning to reconstruct 3D garments is important for dressing 3D human bodies of different shapes in different poses.
Previous works typically rely on 2D images as input, which, however, suffer from scale and pose ambiguities.
We propose a principled framework, Garment4D, that uses 3D point cloud sequences of dressed humans for garment reconstruction.
arXiv Detail & Related papers (2021-12-08T08:15:20Z)
- SMPLicit: Topology-aware Generative Model for Clothed People [65.84665248796615]
We introduce SMPLicit, a novel generative model to jointly represent body pose, shape and clothing geometry.
In the experimental section, we demonstrate SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction in images of dressed people.
arXiv Detail & Related papers (2021-03-11T18:57:03Z)
- Neural 3D Clothes Retargeting from a Single Image [91.5030622330039]
We present a method for clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.
The problem is fundamentally ill-posed, as attaining ground-truth data is impossible, i.e., images of people wearing different 3D clothing template models in the exact same pose.
We propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching it with prescribed body-to-cloth contact points and fitting the clothing onto the unlabeled silhouette.
arXiv Detail & Related papers (2021-01-29T20:50:34Z)
- BCNet: Learning Body and Cloth Shape from A Single Image [56.486796244320125]
We propose a layered garment representation on top of SMPL and, as a novel design, make the garment skinning weights independent of the body mesh.
Compared with existing methods, our method can support more garment categories and recover more accurate geometry.
arXiv Detail & Related papers (2020-04-01T03:41:36Z)
- Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)