gradSim: Differentiable simulation for system identification and
visuomotor control
- URL: http://arxiv.org/abs/2104.02646v1
- Date: Tue, 6 Apr 2021 16:32:01 GMT
- Title: gradSim: Differentiable simulation for system identification and
visuomotor control
- Authors: Krishna Murthy Jatavallabhula and Miles Macklin and Florian Golemo and
Vikram Voleti and Linda Petrini and Martin Weiss and Breandan Considine and
Jerome Parent-Levesque and Kevin Xie and Kenny Erleben and Liam Paull and
Florian Shkurti and Derek Nowrouzezahrai and Sanja Fidler
- Abstract summary: We present gradSim, a framework that overcomes the dependence on 3D supervision by leveraging differentiable multiphysics simulation and differentiable rendering.
Our unified graph enables learning in challenging visuomotor control tasks, without relying on state-based (3D) supervision.
- Score: 66.37288629125996
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We consider the problem of estimating an object's physical properties such as
mass, friction, and elasticity directly from video sequences. Such a system
identification problem is fundamentally ill-posed due to the loss of
information during image formation. Current solutions require precise 3D labels
which are labor-intensive to gather, and infeasible to create for many systems
such as deformable solids or cloth. We present gradSim, a framework that
overcomes the dependence on 3D supervision by leveraging differentiable
multiphysics simulation and differentiable rendering to jointly model the
evolution of scene dynamics and image formation. This novel combination enables
backpropagation from pixels in a video sequence through to the underlying
physical attributes that generated them. Moreover, our unified computation
graph -- spanning from the dynamics and through the rendering process --
enables learning in challenging visuomotor control tasks, without relying on
state-based (3D) supervision, while obtaining performance competitive to or
better than techniques that rely on precise 3D labels.
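The core idea, differentiating a pixel-space loss through both a renderer and a physics simulator to recover physical parameters, can be sketched in miniature. The example below is an illustration under simplified assumptions, not gradSim's implementation: a block slides with friction, its 1D trajectory is "rendered" to pixel coordinates, and the friction coefficient is recovered by gradient descent on pixel error. Central finite differences stand in for the automatic differentiation a real differentiable simulator and renderer would provide.

```python
# Hypothetical sketch of pixels-to-physics system identification.
G, DT, STEPS = 9.8, 0.01, 50   # gravity (m/s^2), time step (s), horizon (0.5 s)

def simulate(mu, v0=2.0):
    """Explicit-Euler sliding block: v' = -mu*g. Valid while the block moves."""
    x, v, xs = 0.0, v0, []
    for _ in range(STEPS):
        v -= mu * G * DT
        x += v * DT
        xs.append(x)
    return xs

def render(xs, pixels_per_meter=100.0):
    """Toy 'camera': map world positions to 1D pixel coordinates."""
    return [pixels_per_meter * x for x in xs]

def pixel_loss(mu, target_pixels):
    """Mean squared error between rendered and observed pixel trajectories."""
    pred = render(simulate(mu))
    return sum((p - t) ** 2 for p, t in zip(pred, target_pixels)) / len(pred)

def estimate_mu(target_pixels, mu=0.1, lr=1e-4, iters=100, eps=1e-6):
    """Gradient descent on the pixel loss; finite differences replace autodiff."""
    for _ in range(iters):
        grad = (pixel_loss(mu + eps, target_pixels)
                - pixel_loss(mu - eps, target_pixels)) / (2 * eps)
        mu -= lr * grad
    return mu

video = render(simulate(0.3))   # "observed" pixels generated with true mu = 0.3
mu_hat = estimate_mu(video)     # recovers a value close to 0.3
```

In gradSim the same loop runs over full multiphysics simulation and differentiable rasterization, so the gradient flows from rendered pixels through image formation and dynamics to parameters such as mass, friction, and elasticity.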
Related papers
- DEL: Discrete Element Learner for Learning 3D Particle Dynamics with Neural Rendering [10.456618054473177]
We show how to learn 3D dynamics from 2D images by inverse rendering.
We incorporate the learnable graph kernels into the classic Discrete Element Analysis framework.
Our methods can effectively learn the dynamics of various materials from partial 2D observations.
arXiv Detail & Related papers (2024-10-11T16:57:02Z)
- MonST3R: A Simple Approach for Estimating Geometry in the Presence of Motion [118.74385965694694]
We present Motion DUSt3R (MonST3R), a novel geometry-first approach that directly estimates per-timestep geometry from dynamic scenes.
By simply estimating a pointmap for each timestep, we can effectively adapt DUSt3R's representation, previously used only for static scenes, to dynamic scenes.
We show that by posing the problem as a fine-tuning task, identifying several suitable datasets, and strategically training the model on this limited data, we can surprisingly enable the model to handle dynamics.
arXiv Detail & Related papers (2024-10-04T18:00:07Z)
- Atlas3D: Physically Constrained Self-Supporting Text-to-3D for Simulation and Fabrication [50.541882834405946]
We introduce Atlas3D, an automatic and easy-to-implement text-to-3D method.
Our approach combines a novel differentiable simulation-based loss function with physically inspired regularization.
We verify Atlas3D's efficacy through extensive generation tasks and validate the resulting 3D models in both simulated and real-world environments.
arXiv Detail & Related papers (2024-05-28T18:33:18Z)
- RISP: Rendering-Invariant State Predictor with Differentiable Simulation and Rendering for Cross-Domain Parameter Estimation [110.4255414234771]
Existing solutions require massive training data or lack generalizability to unknown rendering configurations.
We propose a novel approach that marries domain randomization and differentiable rendering gradients to address this problem.
Our approach achieves significantly lower reconstruction errors and has better generalizability among unknown rendering configurations.
arXiv Detail & Related papers (2022-05-11T17:59:51Z)
- SNUG: Self-Supervised Neural Dynamic Garments [14.83072352654608]
We present a self-supervised method to learn dynamic 3D deformations of garments worn by parametric human bodies.
This allows us to learn models for interactive garments, including dynamic deformations and fine wrinkles, with two orders of magnitude speed up in training time.
arXiv Detail & Related papers (2022-04-05T13:50:21Z)
- 3D Shape Variational Autoencoder Latent Disentanglement via Mini-Batch Feature Swapping for Bodies and Faces [12.114711258010367]
We propose a self-supervised approach to train a 3D shape variational autoencoder which encourages a disentangled latent representation of identity features.
Experimental results conducted on 3D meshes show that state-of-the-art methods for latent disentanglement are not able to disentangle identity features of faces and bodies.
arXiv Detail & Related papers (2021-11-24T11:53:33Z)
- Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z)
- Learning to Simulate Complex Physics with Graph Networks [68.43901833812448]
We present a machine learning framework and model implementation that can learn to simulate a wide variety of challenging physical domains.
Our framework, which we term "Graph Network-based Simulators" (GNS), represents the state of a physical system with particles, expressed as nodes in a graph, and computes dynamics via learned message-passing.
Our results show that our model can generalize from single-timestep predictions with thousands of particles during training, to different initial conditions, thousands of timesteps, and at least an order of magnitude more particles at test time.
arXiv Detail & Related papers (2020-02-21T16:44:28Z)
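The GNS summary above describes particles as graph nodes updated by message-passing. A schematic of one such step is sketched below; this is an assumed illustration, not the paper's implementation, and a hand-coded repulsion message stands in for the learned MLPs a real GNS would use.

```python
import math

def neighbors(positions, radius):
    """Build directed edges between particles closer than `radius`
    (O(n^2) pairwise search, for brevity)."""
    edges = []
    for i, pi in enumerate(positions):
        for j, pj in enumerate(positions):
            if i != j and math.dist(pi, pj) < radius:
                edges.append((i, j))
    return edges

def gns_step(positions, velocities, radius=0.5, dt=0.1):
    """One message-passing update: aggregate per-edge messages at each node,
    then integrate velocities and positions."""
    edges = neighbors(positions, radius)
    accel = [[0.0, 0.0] for _ in positions]
    for i, j in edges:
        # In GNS this message is a learned function of edge features;
        # here a soft repulsion between nearby particles stands in.
        dx = [positions[i][k] - positions[j][k] for k in (0, 1)]
        d = math.dist(positions[i], positions[j]) + 1e-9
        push = (radius - d) / radius
        for k in (0, 1):
            accel[i][k] += push * dx[k] / d
    new_v = [[v[k] + dt * a[k] for k in (0, 1)]
             for v, a in zip(velocities, accel)]
    new_p = [[p[k] + dt * v[k] for k in (0, 1)]
             for p, v in zip(positions, new_v)]
    return new_p, new_v

# Two nearby particles at rest drift apart after one step.
p, v = gns_step([[0.0, 0.0], [0.2, 0.0]], [[0.0, 0.0], [0.0, 0.0]])
```

Rolling such a step forward over many timesteps, with the message and update functions learned from data, is what lets GNS generalize to longer horizons and more particles than seen in training.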
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.