TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style
- URL: http://arxiv.org/abs/2003.04583v2
- Date: Sun, 15 Mar 2020 16:35:56 GMT
- Title: TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style
- Authors: Chaitanya Patel, Zhouyingcheng Liao, Gerard Pons-Moll
- Abstract summary: We present TailorNet, a neural model which predicts clothing deformation in 3D as a function of three factors: pose, shape and style.
Our hypothesis is that (even non-linear) combinations of examples smooth out high-frequency components such as fine wrinkles.
Several experiments demonstrate TailorNet produces more realistic results than prior work, and even generates temporally coherent deformations.
- Score: 43.99803542307155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present TailorNet, a neural model which predicts clothing
deformation in 3D as a function of three factors: pose, shape and style
(garment geometry), while retaining wrinkle detail. This goes beyond prior
models, which are either specific to one style and shape, or generalize to
different shapes producing smooth results, despite being style specific. Our
hypothesis is that (even non-linear) combinations of examples smooth out high-frequency components such as fine wrinkles, which makes learning the three
factors jointly hard. At the heart of our technique is a decomposition of
deformation into a high-frequency and a low-frequency component. While the
low-frequency component is predicted from pose, shape and style parameters with
an MLP, the high-frequency component is predicted with a mixture of shape-style
specific pose models. The weights of the mixture are computed with a narrow
bandwidth kernel to guarantee that only predictions with similar high-frequency
patterns are combined. The style variation is obtained by computing, in a
canonical pose, a subspace of deformation, which satisfies physical constraints
such as inter-penetration, and draping on the body. TailorNet delivers 3D
garments which retain the wrinkles from the physics-based simulations (PBS) it is learned from, while running more than 1000 times faster. In contrast to PBS,
TailorNet is easy to use and fully differentiable, which is crucial for
computer vision algorithms. Several experiments demonstrate TailorNet produces
more realistic results than prior work, and even generates temporally coherent
deformations on sequences of the AMASS dataset, despite being trained on static
poses from a different dataset. To stimulate further research in this
direction, we will make a dataset consisting of 55800 frames, as well as our
model publicly available at https://virtualhumans.mpi-inf.mpg.de/tailornet.
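
The abstract describes two key ingredients: a low-frequency garment displacement predicted by an MLP over pose, shape, and style, plus a high-frequency displacement from a mixture of shape-style specific pose models whose mixture weights come from a narrow-bandwidth kernel over (shape, style). The sketch below is a minimal, hedged illustration of that structure in plain NumPy; the function names (mlp_low_freq, mixture_high_freq, predict_garment), vertex counts, and dimensions are hypothetical stand-ins, not the released TailorNet code.

```python
# Minimal sketch (assumed interfaces, not the released TailorNet implementation)
# of the prediction structure described in the abstract: low-frequency garment
# displacement from an MLP over (pose, shape, style), plus a high-frequency
# term from a kernel-weighted mixture of shape-style specific pose models.
import numpy as np

NUM_VERTS = 4424                              # hypothetical garment vertex count
POSE_DIM, SHAPE_DIM, STYLE_DIM = 72, 10, 4    # SMPL-like pose/shape, small style code


def mlp_low_freq(pose, shape, style, params):
    """Stand-in for the low-frequency MLP: one hidden layer, ReLU."""
    x = np.concatenate([pose, shape, style])
    h = np.maximum(params["W1"] @ x + params["b1"], 0.0)
    return (params["W2"] @ h + params["b2"]).reshape(NUM_VERTS, 3)


def mixture_high_freq(pose, shape, style, experts, bandwidth=0.1):
    """Kernel-weighted mixture of shape-style specific pose models.

    A narrow bandwidth keeps the weights peaked, so only experts whose
    (shape, style) prototype is close to the query contribute; combining
    only similar high-frequency predictions avoids averaging away wrinkles.
    """
    query = np.concatenate([shape, style])
    logits, preds = [], []
    for proto, pose_model in experts:                    # proto: (shape, style) anchor
        dist2 = np.sum((query - proto) ** 2)
        logits.append(-dist2 / (2.0 * bandwidth ** 2))   # RBF kernel, in log-space
        preds.append(pose_model(pose))                   # expert's wrinkle prediction
    logits = np.array(logits)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                             # normalized mixture weights
    return np.tensordot(weights, np.stack(preds), axes=1)


def predict_garment(pose, shape, style, params, experts, template):
    """Final vertices: template + low-frequency + high-frequency displacement."""
    return template + mlp_low_freq(pose, shape, style, params) \
                    + mixture_high_freq(pose, shape, style, experts)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hidden, in_dim = 128, POSE_DIM + SHAPE_DIM + STYLE_DIM
    params = {
        "W1": rng.normal(size=(hidden, in_dim)) * 0.01,
        "b1": np.zeros(hidden),
        "W2": rng.normal(size=(NUM_VERTS * 3, hidden)) * 0.01,
        "b2": np.zeros(NUM_VERTS * 3),
    }
    # Each expert: a (shape, style) prototype and a toy linear pose-to-displacement model.
    experts = [
        (rng.normal(size=SHAPE_DIM + STYLE_DIM),
         lambda th, A=rng.normal(size=(NUM_VERTS, 3, POSE_DIM)) * 0.001: A @ th)
        for _ in range(5)
    ]
    template = rng.normal(size=(NUM_VERTS, 3))
    verts = predict_garment(rng.normal(size=POSE_DIM), rng.normal(size=SHAPE_DIM),
                            rng.normal(size=STYLE_DIM), params, experts, template)
    print(verts.shape)  # (4424, 3)
```

The only design point the sketch tries to capture is the decomposition itself: the smooth component may safely be regressed jointly from all three factors, while the wrinkle component is kept sharp by letting the kernel select a few nearby shape-style experts rather than blending all of them.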
Related papers
- Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z)
- 3D Human Pose Regression using Graph Convolutional Network [68.8204255655161]
We propose a graph convolutional network named PoseGraphNet for 3D human pose regression from 2D poses.
Our model's performance is close to the state-of-the-art, but with far fewer parameters.
arXiv Detail & Related papers (2021-05-21T14:41:31Z)
- Learning Skeletal Articulations with Neural Blend Shapes [57.879030623284216]
We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure.
Our framework learns to rig and skin characters with the same articulation structure.
We propose neural blend shapes which improve the deformation quality in the joint regions.
arXiv Detail & Related papers (2021-05-06T05:58:13Z)
- Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On [9.293488420613148]
We present a learning-based approach for virtual try-on applications based on a fully convolutional graph neural network.
In contrast to existing data-driven models, which are trained for a specific garment or mesh topology, our fully convolutional model can cope with a large family of garments.
Under the hood, our novel geometric deep learning approach learns to drape 3D garments by decoupling the three different sources of deformations that condition the fit of clothing.
arXiv Detail & Related papers (2020-09-09T22:38:03Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present a methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
- GarNet++: Improving Fast and Accurate Static 3D Cloth Draping by Curvature Loss [89.96698250086064]
We introduce a two-stream deep network model that produces a visually plausible draping of a template cloth on virtual 3D bodies.
Our network learns to mimic a Physics-Based Simulation (PBS) method while requiring two orders of magnitude less computation time.
We validate our framework on four garment types for various body shapes and poses.
arXiv Detail & Related papers (2020-07-20T13:40:15Z)