SNUG: Self-Supervised Neural Dynamic Garments
- URL: http://arxiv.org/abs/2204.02219v1
- Date: Tue, 5 Apr 2022 13:50:21 GMT
- Title: SNUG: Self-Supervised Neural Dynamic Garments
- Authors: Igor Santesteban and Miguel A. Otaduy and Dan Casas
- Abstract summary: We present a self-supervised method to learn dynamic 3D deformations of garments worn by parametric human bodies.
This allows us to learn models for interactive garments, including dynamic deformations and fine wrinkles, with a two-orders-of-magnitude speed-up in training time.
- Score: 14.83072352654608
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a self-supervised method to learn dynamic 3D deformations of
garments worn by parametric human bodies. State-of-the-art data-driven
approaches to model 3D garment deformations are trained using supervised
strategies that require large datasets, usually obtained by expensive
physics-based simulation methods or professional multi-camera capture setups.
In contrast, we propose a new training scheme that removes the need for
ground-truth samples, enabling self-supervised training of dynamic 3D garment
deformations. Our key contribution is to realize that physics-based deformation
models, traditionally solved on a frame-by-frame basis by implicit integrators,
can be recast as an optimization problem. We leverage this optimization-based
scheme to formulate a set of physics-based loss terms that can be used to train
neural networks without precomputing ground-truth data. This allows us to learn
models for interactive garments, including dynamic deformations and fine
wrinkles, with a two-orders-of-magnitude speed-up in training time compared to
state-of-the-art supervised methods.
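As a rough, self-contained illustration of this recasting (not the authors' implementation), the sketch below writes one backward-Euler step as the minimum of an incremental potential and evaluates that potential on network-predicted garment vertices, so the loss requires no ground-truth simulation data. The energy is deliberately simplified to edge stretching plus gravity (a full garment model would add membrane, bending, and collision terms), and all constants, shapes, and names are illustrative assumptions.

```python
import torch

def physics_loss(x_pred, x_prev, x_prev2, masses, edge_index, rest_lengths,
                 dt=1.0 / 30.0, k_stretch=1e3, gravity=(0.0, -9.81, 0.0)):
    """Self-supervised loss: incremental potential of one backward-Euler step.

    x_pred:       (V, 3) garment vertices predicted by the network at time t
    x_prev:       (V, 3) vertices at time t-1;  x_prev2: (V, 3) vertices at t-2
    masses:       (V,)   lumped vertex masses
    edge_index:   (E, 2) vertex indices of cloth edges
    rest_lengths: (E,)   rest length of each edge
    All constants here are placeholders, not the paper's values.
    """
    g = torch.tensor(gravity, device=x_pred.device)

    # Inertia term: deviation from the constant-velocity extrapolation
    # x_hat = 2 * x_prev - x_prev2, weighted by mass / dt^2.
    x_hat = 2.0 * x_prev - x_prev2
    inertia = 0.5 * (masses[:, None] * (x_pred - x_hat) ** 2).sum() / dt**2

    # Simple elastic (stretching) energy on cloth edges.
    edge_vec = x_pred[edge_index[:, 0]] - x_pred[edge_index[:, 1]]
    strain = edge_vec.norm(dim=-1) - rest_lengths
    stretching = 0.5 * k_stretch * (strain ** 2).sum()

    # Gravitational potential energy: -sum_i m_i * g . x_i.
    grav = -(masses[:, None] * g * x_pred).sum()

    # Minimizing this sum with respect to the network weights imitates one
    # implicit integration step, so no precomputed simulation data is needed.
    return inertia + stretching + grav
```

Minimizing this loss over a training set of body motions plays the role of the implicit integrator at training time, which is what removes the need for precomputed ground-truth simulations.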
Related papers
- Learning Low-Dimensional Strain Models of Soft Robots by Looking at the Evolution of Their Shape with Application to Model-Based Control [2.058941610795796]
This paper introduces a streamlined method for learning low-dimensional, physics-based models.
We validate our approach through simulations with various planar soft manipulators.
Because the method generates physically compatible models, the learned models can be combined straightforwardly with model-based control policies.
arXiv Detail & Related papers (2024-10-31T18:37:22Z)
- TFS-NeRF: Template-Free NeRF for Semantic 3D Reconstruction of Dynamic Scene [25.164085646259856]
This paper introduces a 3D semantic NeRF for dynamic scenes captured from sparse or single-view RGB videos.
Our framework uses an Invertible Neural Network (INN) for LBS prediction, which simplifies the training process.
Our approach produces high-quality reconstructions of both deformable and non-deformable objects in complex interactions.
arXiv Detail & Related papers (2024-09-26T01:34:42Z) - Robust Category-Level 3D Pose Estimation from Synthetic Data [17.247607850702558]
We introduce SyntheticP3D, a new synthetic dataset for object pose estimation generated from CAD models.
We propose a novel approach (CC3D) for training neural mesh models that perform pose estimation via inverse rendering.
arXiv Detail & Related papers (2023-05-25T14:56:03Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Gradient-Based Trajectory Optimization With Learned Dynamics [80.41791191022139]
We use machine learning techniques to learn a differentiable dynamics model of the system from data.
We show that a neural network can model highly nonlinear behaviors accurately for large time horizons.
In our hardware experiments, we demonstrate that our learned model can represent complex dynamics for both the Spot quadruped and a radio-controlled (RC) car (a minimal sketch of this trajectory-optimization idea appears after this list).
arXiv Detail & Related papers (2022-04-09T22:07:34Z)
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
- Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z)
- Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z)
- Physics-Informed Neural State Space Models via Learning and Evolution [1.1086440815804224]
We study methods for discovering neural state space dynamics models for system identification.
We employ an asynchronous genetic search algorithm that alternates between model selection and optimization.
arXiv Detail & Related papers (2020-11-26T23:35:08Z)
- SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans [15.83525220631304]
We present SoftSMPL, a learning-based method to model realistic soft-tissue dynamics as a function of body shape and motion.
At the core of our method there are three key contributions that enable us to model highly realistic dynamics.
arXiv Detail & Related papers (2020-04-01T10:35:06Z)
- Learning Predictive Representations for Deformable Objects Using Contrastive Estimation [83.16948429592621]
We propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model.
We show substantial improvements over standard model-based learning techniques across our rope and cloth manipulation suite.
arXiv Detail & Related papers (2020-03-11T17:55:15Z)
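As a companion to the "Gradient-Based Trajectory Optimization With Learned Dynamics" entry above, here is a minimal, hypothetical sketch of the general technique that title names: a small network stands in for the learned differentiable dynamics model, and a control sequence is optimized by backpropagating a tracking cost through rolled-out predictions. The network, state and control dimensions, and hyperparameters are illustrative assumptions, not the paper's.

```python
import torch
import torch.nn as nn

# Stand-in for a learned dynamics model f_theta(state, control) -> next state.
# In practice it would be trained on logged (state, control, next_state) data.
dynamics = nn.Sequential(
    nn.Linear(4 + 2, 64), nn.Tanh(),
    nn.Linear(64, 4),
)

def optimize_trajectory(x0, x_goal, horizon=30, iters=200, lr=0.05):
    """Gradient-based trajectory optimization through the learned model."""
    controls = torch.zeros(horizon, 2, requires_grad=True)  # decision variables
    opt = torch.optim.Adam([controls], lr=lr)

    for _ in range(iters):
        opt.zero_grad()
        x, cost = x0, 0.0
        for u in controls:                   # roll out the learned dynamics
            x = dynamics(torch.cat([x, u]))  # x_{t+1} = f_theta(x_t, u_t)
            cost = cost + ((x - x_goal) ** 2).sum() + 1e-3 * (u ** 2).sum()
        cost.backward()                      # gradients flow through the rollout
        opt.step()

    return controls.detach()

# Example: steer a 4-D state from the origin toward a goal state.
u_plan = optimize_trajectory(torch.zeros(4), torch.tensor([1.0, 0.0, 0.0, 0.0]))
```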
This list is automatically generated from the titles and abstracts of the papers on this site.