Self-Supervised Collision Handling via Generative 3D Garment Models for
Virtual Try-On
- URL: http://arxiv.org/abs/2105.06462v1
- Date: Thu, 13 May 2021 17:58:20 GMT
- Title: Self-Supervised Collision Handling via Generative 3D Garment Models for
Virtual Try-On
- Authors: Igor Santesteban, Nils Thuerey, Miguel A. Otaduy, Dan Casas
- Abstract summary: We propose a new generative model for 3D garment deformations that enables us to learn, for the first time, a data-driven method for virtual try-on.
We show that our method is the first to successfully address garment-body contact in unseen body shapes and motions, without compromising realism and detail.
- Score: 29.458328272854107
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose a new generative model for 3D garment deformations that enables us
to learn, for the first time, a data-driven method for virtual try-on that
effectively addresses garment-body collisions. In contrast to existing methods
that require an undesirable postprocessing step to fix garment-body
interpenetrations at test time, our approach directly outputs 3D garment
configurations that do not collide with the underlying body. Key to our success
is a new canonical space for garments that removes pose-and-shape deformations
already captured by a new diffused human body model, which extrapolates body
surface properties such as skinning weights and blendshapes to any 3D point. We
leverage this representation to train a generative model with a novel
self-supervised collision term that learns to reliably solve garment-body
interpenetrations. We extensively evaluate and compare our results with
recently proposed data-driven methods, and show that our method is the first to
successfully address garment-body contact in unseen body shapes and motions,
without compromising realism and detail.
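
To make the collision handling concrete, the sketch below shows one common way a self-supervised collision penalty of the kind described above can be written: predicted garment vertices are queried against a differentiable signed-distance field of the body and penalized whenever they fall inside it (or within a small margin). This is a minimal sketch under assumed names (body_sdf, collision_loss, a toy sphere standing in for the body), not the authors' implementation.

```python
# Minimal sketch of a self-supervised collision penalty (illustrative, not the
# authors' code). Assumes a differentiable signed-distance function of the body
# that is positive outside and negative inside; a unit sphere stands in for it here.
import torch

def body_sdf(points: torch.Tensor) -> torch.Tensor:
    """Toy SDF: signed distance to a unit sphere centred at the origin."""
    return points.norm(dim=-1) - 1.0

def collision_loss(garment_verts: torch.Tensor, eps: float = 2e-3) -> torch.Tensor:
    """Penalize garment vertices that lie inside the body or within eps of it."""
    d = body_sdf(garment_verts)        # (V,) signed distance per garment vertex
    return torch.relu(eps - d).mean()  # zero once every vertex clears the body by eps

# Usage: add this term to the generative model's training objective so the network
# itself learns to output collision-free garments, instead of fixing
# interpenetrations with a postprocessing step at test time.
verts = torch.randn(500, 3, requires_grad=True)   # stand-in for predicted garment vertices
loss = collision_loss(verts)
loss.backward()                                   # gradients push offending vertices outward
```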
Related papers
- Dress-Me-Up: A Dataset & Method for Self-Supervised 3D Garment
Retargeting [28.892029042436626]
We propose a novel framework for retargeting non-parameterized 3D garments onto 3D human avatars of arbitrary shapes and poses.
Existing self-supervised 3D retargeting methods only support parametric and canonical garments.
We show superior retargeting quality on non-parameterized garments and human avatars over existing state-of-the-art methods.
arXiv Detail & Related papers (2024-01-06T02:28:25Z)
- PERGAMO: Personalized 3D Garments from Monocular Video [6.8338761008826445]
PERGAMO is a data-driven approach to learn a deformable model for 3D garments from monocular images.
We first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos.
We show that our method is capable of producing garment animations that match real-world behaviour, and generalizes to unseen body motions extracted from motion capture datasets.
arXiv Detail & Related papers (2022-10-26T21:15:54Z)
- gDNA: Towards Generative Detailed Neural Avatars [94.9804106939663]
We show that our model is able to generate natural human avatars wearing diverse and detailed clothing.
Our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
arXiv Detail & Related papers (2022-01-11T18:46:38Z)
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human
Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses (a generic sketch of this kind of disentangled representation appears after this list).
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- Neural 3D Clothes Retargeting from a Single Image [91.5030622330039]
We present a method of clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.
The problem is fundamentally ill-posed, as attaining ground-truth data is impossible, i.e., images of people wearing different 3D clothing template models at exactly the same pose.
We propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching the prescribed body-to-cloth contact points and fitting the clothing onto the unlabeled silhouette.
arXiv Detail & Related papers (2021-01-29T20:50:34Z)
- Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On [9.293488420613148]
We present a learning-based approach for virtual try-on applications based on a fully convolutional graph neural network.
In contrast to existing data-driven models, which are trained for a specific garment or mesh topology, our fully convolutional model can cope with a large family of garments.
Under the hood, our novel geometric deep learning approach learns to drape 3D garments by decoupling the three different sources of deformations that condition the fit of clothing.
arXiv Detail & Related papers (2020-09-09T22:38:03Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human
Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present a methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
- Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction
from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)
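
The LatentHuman entry above describes a fully differentiable implicit body representation with disentangled shape and pose latent spaces. The sketch below illustrates the generic pattern such representations usually follow (a coordinate MLP conditioned on separate shape and pose codes); every name, dimension, and activation here is an illustrative assumption rather than the paper's actual architecture.

```python
# Generic sketch of a shape/pose-disentangled implicit body model (illustrative only).
import torch
import torch.nn as nn

class DisentangledImplicitBody(nn.Module):
    """Maps a 3D query point plus shape and pose latent codes to a signed distance."""
    def __init__(self, shape_dim: int = 16, pose_dim: int = 32, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + shape_dim + pose_dim, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, z_shape, z_pose):
        # points: (N, 3) query locations; z_shape, z_pose: (1, D) codes shared by all points
        z = torch.cat([z_shape, z_pose], dim=-1).expand(points.shape[0], -1)
        return self.mlp(torch.cat([points, z], dim=-1)).squeeze(-1)

# Because the whole mapping is differentiable, the two latent codes (and the network)
# can be optimized directly against raw, even non-watertight, scan points.
model = DisentangledImplicitBody()
pts = torch.rand(1024, 3)
z_shape = torch.zeros(1, 16, requires_grad=True)
z_pose = torch.zeros(1, 32, requires_grad=True)
sdf = model(pts, z_shape, z_pose)   # (1024,) predicted signed distances
```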