Learning Visible Connectivity Dynamics for Cloth Smoothing
- URL: http://arxiv.org/abs/2105.10389v1
- Date: Fri, 21 May 2021 15:03:29 GMT
- Title: Learning Visible Connectivity Dynamics for Cloth Smoothing
- Authors: Xingyu Lin, Yufei Wang, David Held
- Abstract summary: We propose to learn a particle-based dynamics model from a partial point cloud observation.
To overcome the challenges of partial observability, we infer which visible points are connected on the underlying cloth mesh.
We show that our method greatly outperforms previous state-of-the-art model-based and model-free reinforcement learning methods in simulation.
- Score: 17.24004979796887
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robotic manipulation of cloth remains challenging due to the cloth's
complex dynamics, the lack of a low-dimensional state representation,
and self-occlusions. In contrast to previous model-based approaches that learn
a pixel-based dynamics model or a compressed latent vector dynamics, we propose
to learn a particle-based dynamics model from a partial point cloud
observation. To overcome the challenges of partial observability, we infer
which visible points are connected on the underlying cloth mesh. We then learn
a dynamics model over this visible connectivity graph. Compared to previous
learning-based approaches, our model imposes a strong inductive bias through its
particle-based representation for learning the underlying cloth physics; it is
invariant to visual features; and its predictions can be more easily
visualized. We show that our method greatly outperforms previous
state-of-the-art model-based and model-free reinforcement learning methods in
simulation. Furthermore, we demonstrate zero-shot sim-to-real transfer where we
deploy the model trained in simulation on a Franka arm and show that the model
can successfully smooth different types of cloth from crumpled configurations.
Videos can be found on our project website.
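As a concrete illustration of the pipeline the abstract describes (a minimal sketch, not the authors' released implementation), the code below first proposes candidate edges between nearby visible points, filters them with a learned classifier standing in for the visible connectivity inference, and then applies one graph message-passing step to predict per-particle motion. The radius threshold, feature choices, and network sizes are all illustrative assumptions, and action conditioning is omitted.

```python
# Minimal sketch of a visible-connectivity dynamics model (not the authors' code).
# Assumptions: edge candidates come from a fixed radius; an MLP filters them down
# to "mesh-connected" edges; a single message-passing step predicts per-particle
# motion. Action conditioning is omitted for brevity.
import torch
import torch.nn as nn


def candidate_edges(points: torch.Tensor, radius: float = 0.05) -> torch.Tensor:
    """Return (2, E) index pairs of visible points within `radius` of each other."""
    dists = torch.cdist(points, points)            # (N, N) pairwise distances
    src, dst = torch.nonzero(dists < radius, as_tuple=True)
    mask = src != dst                              # drop self-loops
    return torch.stack([src[mask], dst[mask]])     # (2, E)


class EdgeClassifier(nn.Module):
    """Scores whether a candidate edge lies on the underlying cloth mesh."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, points, edges):
        feats = torch.cat([points[edges[0]], points[edges[1]]], dim=-1)
        return self.mlp(feats).squeeze(-1)         # (E,) logits


class GraphDynamics(nn.Module):
    """One message-passing step over the inferred graph, predicting displacements."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.node_mlp = nn.Sequential(
            nn.Linear(hidden + 3, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, points, edges):
        msgs = self.edge_mlp(
            torch.cat([points[edges[0]], points[edges[1]]], dim=-1))
        agg = torch.zeros(points.shape[0], msgs.shape[-1])
        agg.index_add_(0, edges[1], msgs)          # sum messages per receiver node
        return points + self.node_mlp(torch.cat([agg, points], dim=-1))


# Usage: infer connectivity on a partial point cloud, then step the dynamics once.
points = torch.rand(200, 3) * 0.3                  # stand-in for a depth point cloud
edges = candidate_edges(points)
keep = torch.sigmoid(EdgeClassifier()(points, edges)) > 0.5
next_points = GraphDynamics()(points, edges[:, keep])
```

Restricting message passing to the inferred mesh edges is what gives the particle representation its inductive bias: only points believed to be physically connected on the cloth exchange information.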
Related papers
- Learning Low-Dimensional Strain Models of Soft Robots by Looking at the Evolution of Their Shape with Application to Model-Based Control [2.058941610795796]
This paper introduces a streamlined method for learning low-dimensional, physics-based models.
We validate our approach through simulations with various planar soft manipulators.
Because the method generates physically compatible models, the learned models can be straightforwardly combined with model-based control policies.
arXiv Detail & Related papers (2024-10-31T18:37:22Z) - Latent Intuitive Physics: Learning to Transfer Hidden Physics from A 3D Video [58.043569985784806]
We introduce latent intuitive physics, a transfer learning framework for physics simulation.
It can infer hidden properties of fluids from a single 3D video and simulate the observed fluid in novel scenes.
We validate our model in three ways: (i) novel scene simulation with the learned visual-world physics, (ii) future prediction of the observed fluid dynamics, and (iii) supervised particle simulation.
arXiv Detail & Related papers (2024-06-18T16:37:44Z) - Exploring Model Transferability through the Lens of Potential Energy [78.60851825944212]
Transfer learning has become crucial in computer vision tasks due to the vast availability of pre-trained deep learning models.
Existing methods for measuring the transferability of pre-trained models rely on statistical correlations between encoded static features and task labels.
We present an insightful physics-inspired approach named PED to address these challenges.
arXiv Detail & Related papers (2023-08-29T07:15:57Z) - Dynamic Point Fields [30.029872787758705]
We present a dynamic point field model that combines the representational benefits of explicit point-based graphics with implicit deformation networks.
We show the advantages of our dynamic point field framework in terms of its representational power, learning efficiency, and robustness to out-of-distribution novel poses.
arXiv Detail & Related papers (2023-04-05T17:52:37Z) - Which priors matter? Benchmarking models for learning latent dynamics [70.88999063639146]
Several methods have been proposed to integrate priors from classical mechanics into machine learning models.
We take a sober look at the current capabilities of these models.
We find that the use of continuous and time-reversible dynamics benefits models of all classes.
arXiv Detail & Related papers (2021-11-09T23:48:21Z) - Real-time Deep Dynamic Characters [95.5592405831368]
We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state-of-the-art approaches.
arXiv Detail & Related papers (2021-05-04T23:28:55Z) - Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z) - Model-Based Inverse Reinforcement Learning from Visual Demonstrations [20.23223474119314]
We present a gradient-based inverse reinforcement learning framework that learns cost functions when given only visual human demonstrations.
The learned cost functions are then used to reproduce the demonstrated behavior via visual model predictive control.
We evaluate our framework on hardware on two basic object manipulation tasks.
arXiv Detail & Related papers (2020-10-18T17:07:53Z) - Learning Predictive Representations for Deformable Objects Using Contrastive Estimation [83.16948429592621]
We propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model (a sketch of this contrastive objective follows this list).
We show substantial improvements over standard model-based learning techniques across our rope and cloth manipulation suite.
arXiv Detail & Related papers (2020-03-11T17:55:15Z)
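The contrastive-estimation entry above describes a joint objective but not its form. The following minimal sketch shows one way an encoder and a latent dynamics model could be optimized together; the InfoNCE-style loss, the dimensions, and the toy flattened observations are all our assumptions, not details from that paper.

```python
# Minimal sketch of contrastive joint representation/dynamics learning.
# Assumptions: an InfoNCE-style loss where the predicted next latent must match
# its own encoded next observation more closely than the rest of the batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, action_dim = 32, 4
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, latent_dim))
dynamics = nn.Sequential(
    nn.Linear(latent_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))


def contrastive_loss(obs, action, next_obs):
    """Score each predicted next latent against all encoded next observations;
    the positives lie on the diagonal of the similarity matrix."""
    z = encoder(obs)                                   # (B, latent_dim)
    z_next = encoder(next_obs)                         # (B, latent_dim)
    z_pred = dynamics(torch.cat([z, action], dim=-1))  # (B, latent_dim)
    logits = z_pred @ z_next.t()                       # (B, B) similarities
    labels = torch.arange(obs.shape[0])                # positive index per row
    return F.cross_entropy(logits, labels)


# Usage with toy flattened observations (a real model would use an image encoder).
obs, nxt = torch.rand(16, 128), torch.rand(16, 128)
act = torch.rand(16, action_dim)
loss = contrastive_loss(obs, act, nxt)
loss.backward()    # gradients flow into both the encoder and the dynamics model
```

A single backward pass through this loss updates the encoder and the dynamics model together, which is the "joint optimization" the entry refers to.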
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.