PhysXNet: A Customizable Approach for Learning Cloth Dynamics on Dressed People
- URL: http://arxiv.org/abs/2111.07195v1
- Date: Sat, 13 Nov 2021 21:05:41 GMT
- Title: PhysXNet: A Customizable Approach for Learning Cloth Dynamics on Dressed People
- Authors: Jordi Sanchez-Riera, Albert Pumarola and Francesc Moreno-Noguer
- Abstract summary: We introduce PhysXNet, a learning-based approach to predict the dynamics of deformable clothes given 3D skeleton motion sequences of humans wearing these clothes.
PhysXNet is able to estimate the geometry of dense cloth meshes in a matter of milliseconds.
A thorough evaluation demonstrates that PhysXNet delivers cloth deformations very close to those computed with a physics engine.
- Score: 38.23532960427364
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce PhysXNet, a learning-based approach to predict the dynamics of deformable clothes given 3D skeleton motion sequences of humans wearing these clothes. The proposed model adapts to a large variety of garments and changing topologies without needing to be retrained. Such simulations are typically carried out by physics engines that require manual human expertise and are computationally intensive. PhysXNet, by contrast, is a fully differentiable deep network that at inference estimates the geometry of dense cloth meshes in a matter of milliseconds, and can thus be readily deployed as a layer of a larger deep learning architecture. This efficiency stems from the specific parameterization of the clothes we consider, based on 3D UV maps encoding spatial garment displacements. The problem is then formulated as a mapping from the human kinematics space (also represented by 3D UV maps of the undressed body mesh) to the garment displacement UV maps, which we learn using a conditional GAN with a discriminator that enforces feasible deformations. We train our model simultaneously for three garment templates (tops, bottoms, and dresses), for which we simulate deformations under 50 different human actions. Nevertheless, the UV map representation we consider can encapsulate many different cloth topologies, and at test time we can simulate garments we did not specifically train for. A thorough evaluation demonstrates that PhysXNet delivers cloth deformations very close to those computed with a physics engine, opening the door to effective integration within deep learning pipelines.
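Because the abstract casts cloth dynamics as a translation between body UV maps and garment displacement UV maps learned with a conditional GAN, a minimal PyTorch sketch of that formulation may help make it concrete. The channel counts, network depths, and loss weights below are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of the UV-to-UV conditional GAN formulation.
# Shapes and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

class UVGenerator(nn.Module):
    """Maps body-kinematics UV maps to garment displacement UV maps."""
    def __init__(self, in_ch=9, out_ch=3):  # e.g. 3 frames x (x,y,z) -> (dx,dy,dz)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, body_uv):
        return self.net(body_uv)  # per-texel 3D displacement of the garment

class PatchDiscriminator(nn.Module):
    """Scores (body UV, displacement UV) pairs as feasible or not."""
    def __init__(self, in_ch=12):  # 9 body channels + 3 displacement channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # patch-wise feasibility logits
        )

    def forward(self, body_uv, disp_uv):
        return self.net(torch.cat([body_uv, disp_uv], dim=1))

G, D = UVGenerator(), PatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(body_uv, disp_gt):
    # Discriminator: real simulated pairs vs. generated pairs.
    fake = G(body_uv).detach()
    d_real, d_fake = D(body_uv, disp_gt), D(body_uv, fake)
    loss_d = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D while staying close to the simulated ground truth
    # (the 100.0 L1 weight is a pix2pix-style assumption).
    fake = G(body_uv)
    d_fake = D(body_uv, fake)
    loss_g = (bce(d_fake, torch.ones_like(d_fake))
              + 100.0 * (fake - disp_gt).abs().mean())
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

At inference only a single forward pass of the generator is needed, which is what makes millisecond-scale prediction and deployment as a layer of a larger network plausible; the predicted displacement map is then sampled back onto the garment mesh through its UV coordinates.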
Related papers
- PICA: Physics-Integrated Clothed Avatar [30.277983921620663]
We introduce PICA, a novel representation for high-fidelity animatable clothed human avatars with physics-accurate dynamics, even for loose clothing.
Our method achieves high-fidelity rendering of human bodies in complex and novel driving poses, significantly outperforming previous methods under the same settings.
arXiv Detail & Related papers (2024-07-07T10:23:21Z)
- PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations [62.14943588289551]
We introduce PhysAvatar, a novel framework that combines inverse rendering with inverse physics to automatically estimate the shape and appearance of a human.
PhysAvatar reconstructs avatars dressed in loose-fitting clothes under motions and lighting conditions not seen in the training data.
arXiv Detail & Related papers (2024-04-05T21:44:57Z)
- A Physics-embedded Deep Learning Framework for Cloth Simulation [6.8806198396336935]
This paper proposes a physics-embedded learning framework that directly encodes physical features of cloth simulation.
The framework can also integrate external forces and collision handling through either traditional simulators or neural sub-networks (a generic sketch of the physics-embedded idea appears after this list).
arXiv Detail & Related papers (2024-03-19T15:21:00Z)
- Towards Multi-Layered 3D Garments Animation [135.77656965678196]
Existing approaches mostly focus on single-layered garments driven by only human bodies and struggle to handle general scenarios.
We propose a novel data-driven method, called LayersNet, to model garment-level animations as particle-wise interactions in a micro physics system.
Our experiments show that LayersNet achieves superior performance both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-05-17T17:53:04Z)
- 3D-IntPhys: Towards More Generalized 3D-grounded Visual Intuitive Physics under Challenging Scenes [68.66237114509264]
We present a framework capable of learning 3D-grounded visual intuitive physics models from videos of complex scenes with fluids.
We show our model can make long-horizon future predictions by learning from raw images and significantly outperforms models that do not employ an explicit 3D representation space.
arXiv Detail & Related papers (2023-04-22T19:28:49Z)
- Dynamic Visual Reasoning by Learning Differentiable Physics Models from Video and Language [92.7638697243969]
We propose a unified framework that can jointly learn visual concepts and infer physics models of objects from videos and language.
This is achieved by seamlessly integrating three components: a visual perception module, a concept learner, and a differentiable physics engine.
arXiv Detail & Related papers (2021-10-28T17:59:13Z)
- Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On [9.293488420613148]
We present a learning-based approach for virtual try-on applications based on a fully convolutional graph neural network.
In contrast to existing data-driven models, which are trained for a specific garment or mesh topology, our fully convolutional model can cope with a large family of garments.
Under the hood, our novel geometric deep learning approach learns to drape 3D garments by decoupling the three different sources of deformations that condition the fit of clothing.
arXiv Detail & Related papers (2020-09-09T22:38:03Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
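As referenced in the physics-embedded cloth simulation entry above, a common way to encode physical features of cloth in a learning pipeline is to use a differentiable elastic energy as a training loss, so network predictions are pulled toward physically consistent states. The sketch below is a generic mass-spring illustration of that idea under assumed inputs, not the cited paper's actual formulation.

```python
# Generic mass-spring elastic energy as a differentiable cloth loss
# (an illustration of physics-embedded training, not the cited paper's code).
import torch

def spring_energy(verts, edges, rest_len, stiffness=1.0):
    """verts: (V, 3) predicted vertex positions; edges: (E, 2) vertex index
    pairs; rest_len: (E,) rest lengths measured on the undeformed garment."""
    d = verts[edges[:, 0]] - verts[edges[:, 1]]  # (E, 3) edge vectors
    length = d.norm(dim=1)                       # current edge lengths
    return (0.5 * stiffness * (length - rest_len) ** 2).sum()

# Because the energy is differentiable w.r.t. verts, it can be added to the
# data term of any cloth network and minimized by backpropagation, e.g.:
# loss = l1(pred_verts, sim_verts) + lam * spring_energy(pred_verts, edges, rest_len)
```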