Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On
- URL: http://arxiv.org/abs/2009.04592v1
- Date: Wed, 9 Sep 2020 22:38:03 GMT
- Title: Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On
- Authors: Raquel Vidaurre, Igor Santesteban, Elena Garces, Dan Casas
- Abstract summary: We present a learning-based approach for virtual try-on applications based on a fully convolutional graph neural network.
In contrast to existing data-driven models, which are trained for a specific garment or mesh topology, our fully convolutional model can cope with a large family of garments.
Under the hood, our novel geometric deep learning approach learns to drape 3D garments by decoupling the three different sources of deformations that condition the fit of clothing.
- Score: 9.293488420613148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a learning-based approach for virtual try-on applications based on
a fully convolutional graph neural network. In contrast to existing data-driven
models, which are trained for a specific garment or mesh topology, our fully
convolutional model can cope with a large family of garments, represented as
parametric predefined 2D panels with arbitrary mesh topology, including long
dresses, shirts, and tight tops. Under the hood, our novel geometric deep
learning approach learns to drape 3D garments by decoupling the three different
sources of deformations that condition the fit of clothing: garment type,
target body shape, and material. Specifically, we first learn a regressor that
predicts the 3D drape of the input parametric garment when worn by a mean body
shape. Then, after a mesh topology optimization step where we generate a
sufficient level of detail for the input garment type, we further deform the
mesh to reproduce deformations caused by the target body shape. Finally, we
predict fine-scale details such as wrinkles that depend mostly on the garment
material. We qualitatively and quantitatively demonstrate that our fully
convolutional approach outperforms existing methods in terms of generalization
capabilities and memory requirements, and therefore it opens the door to more
general learning-based models for virtual try-on applications.
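The abstract describes two concrete ideas that lend themselves to a code illustration: a graph convolution that operates on garment meshes of arbitrary topology, and a three-stage decomposition of drape (mean-shape fit, body-shape deformation, material-dependent wrinkles). The PyTorch sketch below is purely illustrative; all names (GraphConv, OffsetRegressor, drape) are hypothetical, and the layer is a generic mean-aggregation graph convolution, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Mean-aggregation graph convolution: each vertex mixes its own
    features with the average of its 1-ring neighbours.  Because weights
    are shared across vertices, the layer applies to meshes of any size
    and topology ("fully convolutional")."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, edges):
        # x: (V, in_dim) vertex features; edges: (2, E) directed edges
        # (include both directions for an undirected mesh).
        src, dst = edges
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])
        deg = torch.zeros(x.shape[0], 1).index_add_(
            0, dst, torch.ones(src.shape[0], 1))
        return torch.relu(self.w_self(x) + self.w_neigh(agg / deg.clamp(min=1)))

class OffsetRegressor(nn.Module):
    """Two graph convolutions followed by a per-vertex 3D offset head."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.gc1 = GraphConv(in_dim, hidden)
        self.gc2 = GraphConv(hidden, hidden)
        self.head = nn.Linear(hidden, 3)

    def forward(self, x, edges):
        return self.head(self.gc2(self.gc1(x, edges), edges))

def drape(verts, edges, body_shape, material, mean_net, shape_net, wrinkle_net):
    """Three decoupled stages, mirroring the abstract: (1) drape on the
    mean body shape, (2) deform for the target body shape, (3) add
    material-dependent fine wrinkles.  body_shape: (1, S), material: (1, M);
    mean_net = OffsetRegressor(3), shape_net = OffsetRegressor(3 + S),
    wrinkle_net = OffsetRegressor(3 + M)."""
    V = verts.shape[0]
    v1 = verts + mean_net(verts, edges)
    v2 = v1 + shape_net(torch.cat([v1, body_shape.expand(V, -1)], 1), edges)
    return v2 + wrinkle_net(torch.cat([v2, material.expand(V, -1)], 1), edges)
```

Because each layer only mixes a vertex with its 1-ring neighbours, the same weights apply to any garment mesh, which is what would let a single model cover long dresses, shirts, and tight tops.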
Related papers
- Neural Capture of Animatable 3D Human from Monocular Video [38.974181971541846]
We present a novel paradigm for building an animatable 3D human representation from a monocular video input, such that it can be rendered in unseen poses and views.
Our method is based on a dynamic Neural Radiance Field (NeRF) rigged by a mesh-based parametric 3D human model serving as a geometry proxy.
arXiv Detail & Related papers (2022-08-18T09:20:48Z)
- Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve the shape quality by leveraging cross-view information with a graph convolutional network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable renderer for test-time optimization (a simplified sketch of the cross-view refinement step follows this entry).
arXiv Detail & Related papers (2022-04-21T03:42:31Z)
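As a rough illustration of the cross-view refinement idea summarized above, the sketch below pools image features at projected vertex locations and regresses vertex offsets. It is a simplification under stated assumptions: RefineStep and the project callable are hypothetical, and a per-vertex MLP stands in for the paper's graph convolutional refinement.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefineStep(nn.Module):
    """One coarse-to-fine step: sample image features at the projected
    vertex locations in every view, pool across views, regress offsets."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, verts, feat_maps, project):
        # verts: (V, 3); feat_maps: (N_views, C, H, W);
        # project: hypothetical callable -> (N_views, V, 2) coords in [-1, 1].
        grid = project(verts).unsqueeze(2)                      # (N, V, 1, 2)
        sampled = F.grid_sample(feat_maps, grid, align_corners=True)
        pooled = sampled.squeeze(-1).permute(0, 2, 1).mean(0)   # (V, C)
        return verts + self.mlp(torch.cat([pooled, verts], 1))
```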
- imGHUM: Implicit Generative Models of 3D Human Shape and Articulated Pose [42.4185273307021]
We present imGHUM, the first holistic generative model of 3D human shape and articulated pose.
We model the full human body implicitly, as the zero-level-set of a function, without the use of an explicit template mesh (a toy zero-level-set query is sketched after this entry).
arXiv Detail & Related papers (2021-08-24T17:08:28Z)
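A toy version of the zero-level-set idea from the imGHUM entry above: an MLP maps a 3D query point plus shape and pose codes to a signed distance, and the body surface lies wherever that distance crosses zero. ImplicitBody and its dimensions are hypothetical stand-ins, not the published model.

```python
import torch
import torch.nn as nn

class ImplicitBody(nn.Module):
    """Toy implicit body: an MLP maps a 3D query point plus shape and pose
    codes to a signed distance; the surface is the zero-level-set."""
    def __init__(self, shape_dim=16, pose_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + shape_dim + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, points, shape, pose):
        # points: (P, 3); shape: (shape_dim,); pose: (pose_dim,)
        codes = torch.cat([shape, pose]).expand(points.shape[0], -1)
        return self.net(torch.cat([points, codes], 1))    # signed distance

# Query pattern only (the network is untrained here): the surface would be
# extracted wherever the predicted distance crosses zero, e.g. by running
# marching cubes over a dense grid of query points.
model = ImplicitBody()
pts = torch.rand(1000, 3) * 2 - 1
sdf = model(pts, torch.zeros(16), torch.zeros(32))
near_surface = pts[sdf.squeeze(1).abs() < 0.01]
```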
- Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On [29.458328272854107]
We propose a new generative model for 3D garment deformations that enables us to learn, for the first time, a data-driven method for virtual try-on.
We show that our method is the first to successfully address garment-body contact in unseen body shapes and motions, without compromising realism and detail.
arXiv Detail & Related papers (2021-05-13T17:58:20Z)
- Learning Skeletal Articulations with Neural Blend Shapes [57.879030623284216]
We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure.
Our framework learns to rig and skin characters with the same articulation structure.
We propose neural blend shapes, which improve the deformation quality in the joint regions (a minimal skinning sketch follows this entry).
arXiv Detail & Related papers (2021-05-06T05:58:13Z)
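For readers unfamiliar with the skinning machinery the neural-blend-shapes entry builds on, here is standard linear blend skinning, with a comment indicating where a learned corrective blend shape would enter. The blend_shape_net name is a hypothetical placeholder, not the paper's network.

```python
import torch

def linear_blend_skinning(verts, weights, joint_transforms):
    """Standard LBS: each vertex is moved by a weighted blend of per-joint
    rigid transforms.  verts: (V, 3); weights: (V, J) skinning weights
    summing to 1 per vertex; joint_transforms: (J, 4, 4) homogeneous."""
    homo = torch.cat([verts, torch.ones(verts.shape[0], 1)], 1)      # (V, 4)
    per_joint = torch.einsum('jab,vb->vja', joint_transforms, homo)  # (V, J, 4)
    return (weights.unsqueeze(-1) * per_joint).sum(1)[:, :3]

# Neural blend shapes add a learned, pose-dependent corrective offset
# *before* skinning so that joint regions deform cleanly (hypothetical
# `blend_shape_net` standing in for the paper's learned network):
#   posed = linear_blend_skinning(verts + blend_shape_net(pose), weights, T)
```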
- SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements [62.652588951757764]
Learning to model and reconstruct humans in clothing is challenging due to articulation, non-rigid deformation, and varying clothing types and topologies.
Recent work uses neural networks to parameterize local surface elements.
We present three key innovations. First, we deform surface elements based on a human body model. Second, we address the limitations of existing neural surface elements by regressing local geometry from local features (a toy local-element decoder is sketched after this entry).
arXiv Detail & Related papers (2021-04-15T17:59:39Z)
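A toy decoder in the spirit of the local surface elements summarized in the SCALE entry: a shared MLP regresses a small grid of points per element from a local feature, placed relative to an anchor on the posed body. LocalElementDecoder and all dimensions are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LocalElementDecoder(nn.Module):
    """Toy articulated local surface elements: a shared MLP regresses a
    small grid of points per element from a local feature, and the grid
    is placed relative to an anchor on the posed body."""
    def __init__(self, feat_dim=32, grid=4):
        super().__init__()
        self.grid = grid
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, anchors, feats):
        # anchors: (K, 3) element centres; feats: (K, feat_dim) local features.
        K, g = anchors.shape[0], self.grid
        uv = torch.stack(torch.meshgrid(
            torch.linspace(0, 1, g), torch.linspace(0, 1, g),
            indexing='ij'), -1).reshape(-1, 2)                # (g*g, 2)
        uv = uv.unsqueeze(0).expand(K, -1, -1)                # per-element grid
        f = feats.unsqueeze(1).expand(-1, g * g, -1)
        local = self.mlp(torch.cat([f, uv], -1))              # local geometry
        return (anchors.unsqueeze(1) + local).reshape(-1, 3)  # point cloud
```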
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both the edges and faces of a 3D mesh as input and dynamically aggregates them (a toy edge-face aggregation layer is sketched after this entry).
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
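A toy layer illustrating the edge/face aggregation summarized in the primal-dual entry: faces gather features from their incident edges, then edges gather from their adjacent faces. EdgeFaceAggregation is a simplified stand-in; the published layer's dynamic aggregation and primal-dual graph construction are more involved.

```python
import torch
import torch.nn as nn

class EdgeFaceAggregation(nn.Module):
    """Toy primal-dual style layer on a triangle mesh: faces gather
    features from their three incident edges, then edges gather from
    their (up to two) adjacent faces."""
    def __init__(self, dim):
        super().__init__()
        self.face_update = nn.Linear(2 * dim, dim)
        self.edge_update = nn.Linear(2 * dim, dim)

    def forward(self, edge_feat, face_feat, face_edges, edge_faces):
        # edge_feat: (E, D); face_feat: (F, D);
        # face_edges: (F, 3) edge ids per face;
        # edge_faces: (E, 2) face ids per edge (boundary edges repeat one).
        from_edges = edge_feat[face_edges].mean(1)                      # (F, D)
        face_out = torch.relu(
            self.face_update(torch.cat([face_feat, from_edges], 1)))
        from_faces = face_out[edge_faces].mean(1)                       # (E, D)
        edge_out = torch.relu(
            self.edge_update(torch.cat([edge_feat, from_faces], 1)))
        return edge_out, face_out
```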
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces, while parametric representations offer the control that is essential in building flexible models for both computer graphics and computer vision.
We present a methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.