A Differentiable Contact Model to Extend Lagrangian and Hamiltonian Neural Networks for Modeling Hybrid Dynamics
- URL: http://arxiv.org/abs/2102.06794v1
- Date: Fri, 12 Feb 2021 22:02:41 GMT
- Title: A Differentiable Contact Model to Extend Lagrangian and Hamiltonian Neural Networks for Modeling Hybrid Dynamics
- Authors: Yaofeng Desmond Zhong, Biswadip Dey, Amit Chakraborty
- Abstract summary: We introduce a differentiable contact model, which can capture contact mechanics, both frictionless and frictional, as well as both elastic and inelastic.
We demonstrate this framework on a series of challenging 2D and 3D physical systems with different coefficients of restitution and friction.
- Score: 10.019335078365705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The incorporation of appropriate inductive bias plays a critical role in
learning dynamics from data. A growing body of work has been exploring ways to
enforce energy conservation in the learned dynamics by incorporating Lagrangian
or Hamiltonian dynamics into the design of the neural network architecture.
However, these existing approaches are based on differential equations, which
do not allow discontinuities in the states and thereby limit the class of
systems one can learn. Real systems, such as legged robots and robotic
manipulators, involve contacts and collisions, which introduce discontinuities
in the states. In this paper, we introduce a differentiable contact model,
which can capture contact mechanics, both frictionless and frictional, as well
as both elastic and inelastic. This model can also accommodate inequality
constraints, such as limits on the joint angles. The proposed contact model
extends the scope of Lagrangian and Hamiltonian neural networks by allowing
simultaneous learning of contact properties and system properties. We
demonstrate this framework on a series of challenging 2D and 3D physical
systems with different coefficients of restitution and friction.
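A minimal sketch may help make this concrete: a Lagrangian neural network obtains accelerations from a learned scalar L(q, qdot) through the Euler-Lagrange equations, and contact enters as a discrete jump in the velocity. The PyTorch sketch below is my own illustration, not the authors' code: it assumes a 1D point mass, a known contact surface at q[0] = 0, and a hand-set coefficient of restitution, whereas the paper learns such contact properties jointly with the system.

```python
import torch
import torch.nn as nn

class LagrangianNN(nn.Module):
    """Learn a scalar Lagrangian L(q, qdot); accelerations then follow from
    the Euler-Lagrange equations via automatic differentiation."""
    def __init__(self, dim=1, hidden=64):
        super().__init__()
        self.dim = dim
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 1),
        )

    def accel(self, q, qdot):
        # Euler-Lagrange: (d2L/dqdot2) qddot = dL/dq - (d2L/dqdot dq) qdot
        x = torch.cat([q, qdot]).requires_grad_(True)
        L = self.net(x).sum()
        g = torch.autograd.grad(L, x, create_graph=True)[0]
        dL_dq, dL_dqdot = g[: self.dim], g[self.dim :]
        rows = [torch.autograd.grad(dL_dqdot[i], x, retain_graph=True)[0]
                for i in range(self.dim)]
        J = torch.stack(rows)            # Jacobian of dL/dqdot w.r.t. [q, qdot]
        M = J[:, self.dim :]             # generalized mass matrix, d2L/dqdot2
        c = J[:, : self.dim] @ qdot      # velocity-coupling term
        return torch.linalg.solve(M, (dL_dq - c).unsqueeze(-1)).squeeze(-1)

def rollout_step(model, q, qdot, dt=0.01, restitution=0.8):
    """Semi-implicit Euler step plus an impulse-style contact update for a
    point mass above the plane q[0] = 0 (a known contact surface here; the
    paper instead learns contact parameters such as the restitution)."""
    qdd = model.accel(q, qdot).detach()
    qdot = qdot + dt * qdd
    q = q + dt * qdot
    if q[0] < 0.0:                        # penetration detected: contact event
        q, qdot = q.clone(), qdot.clone()
        q[0] = 0.0                        # project back onto the surface
        qdot[0] = -restitution * qdot[0]  # discontinuous velocity jump
    return q, qdot

# Illustrative rollout with an untrained model:
model = LagrangianNN(dim=1)
q, qdot = torch.tensor([1.0]), torch.tensor([0.0])
for _ in range(200):
    q, qdot = rollout_step(model, q, qdot)
```

Softplus activations keep the learned Lagrangian twice differentiable, which the Euler-Lagrange solve requires; practical implementations also typically parameterize a positive-definite mass matrix so that the linear solve is well-posed.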
Related papers
- ControlSynth Neural ODEs: Modeling Dynamical Systems with Guaranteed Convergence [1.1720409777196028]
Neural ODEs (NODEs) are continuous-time neural networks (NNs) that can process data without being restricted to fixed time intervals.
We show that despite their highly nonlinear nature, convergence can be guaranteed via tractable linear inequalities.
In the composition of CSODEs, we introduce an extra control term aimed at capturing dynamics that evolve simultaneously at different scales.
arXiv Detail & Related papers (2024-11-04T17:20:42Z)
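As a rough illustration of the CSODE composition described in the entry above (module names and sizes are my assumptions, not the paper's architecture), the vector field can be written as a base dynamics network plus an extra learned control term:

```python
import torch
import torch.nn as nn

class CSODEField(nn.Module):
    """Sketch of a ControlSynth-style vector field: dx/dt = f(x) + g(x),
    where g is the extra control term meant to pick up dynamics evolving
    at a different scale than those captured by f."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.g = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, x):
        return self.f(x) + self.g(x)

def integrate(field, x0, dt=0.01, steps=100):
    """Explicit-Euler rollout of the learned field (for illustration only)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * field(xs[-1]))
    return torch.stack(xs)
```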
- Poisson-Dirac Neural Networks for Modeling Coupled Dynamical Systems across Domains [13.499500088995463]
We propose a novel framework based on the Dirac structure that unifies the port-Hamiltonian and Poisson formulations from geometric mechanics.
PoDiNNs offer improved accuracy and interpretability in modeling unknown coupled dynamical systems from data.
arXiv Detail & Related papers (2024-10-15T10:31:22Z)
- Symmetry-regularized neural ordinary differential equations [0.0]
This paper introduces new conservation relations in Neural ODEs using Lie symmetries in both the hidden state dynamics and the back propagation dynamics.
These conservation laws are then incorporated into the loss function as additional regularization terms, potentially enhancing the physical interpretability and generalizability of the model.
New loss functions are constructed from these conservation relations, demonstrating the applicability of symmetry-regularized Neural ODEs in typical modeling tasks.
arXiv Detail & Related papers (2023-11-28T09:27:44Z)
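The regularization idea in the symmetry-regularized Neural ODE entry above can be made concrete with a small sketch; the invariant and weighting below are placeholders I chose, whereas the paper derives its conservation relations from Lie symmetries of the hidden-state and backpropagation dynamics:

```python
import torch

def conservation_penalty(trajectory, invariant):
    """Penalize drift of a conserved quantity along a predicted trajectory.
    trajectory: (T, dim) predicted states; invariant: callable mapping a
    state to the scalar that the symmetry says should stay constant."""
    values = torch.stack([invariant(x) for x in trajectory])
    return ((values - values[0]) ** 2).mean()

# Training objective: data fit plus the conservation regularizer, e.g.
#   loss = mse(pred, target) + lam * conservation_penalty(pred, invariant)
```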
- Inferring Relational Potentials in Interacting Systems [56.498417950856904]
We propose Neural Interaction Inference with Potentials (NIIP) as an alternative approach to discover such interactions.
NIIP assigns low energy to the subset of trajectories that respect the observed relational constraints.
It allows trajectory manipulation, such as interchanging interaction types across separately trained models, as well as trajectory forecasting.
arXiv Detail & Related papers (2023-10-23T00:44:17Z)
- SEGNO: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases [66.61789780666727]
We show how second-order continuity can be incorporated into GNNs while preserving equivariance.
We also offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states.
Our model yields a significant improvement over the state-of-the-art baselines.
arXiv Detail & Related papers (2023-08-25T07:15:58Z)
- Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws obey symmetries, which provide a vital inductive bias for model generalization.
Our model achieves on average over 3% enhancement in contact prediction accuracy across 8 scenarios on Physion and 2X lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z)
- Neural Laplace: Learning diverse classes of differential equations in the Laplace domain [86.52703093858631]
We propose a unified framework for learning diverse classes of differential equations (DEs) including all the aforementioned ones.
Instead of modelling the dynamics in the time domain, we model them in the Laplace domain, where history-dependencies and discontinuities in time can be represented as summations of complex exponentials.
In the experiments, Neural Laplace shows superior performance in modelling and extrapolating the trajectories of diverse classes of DEs.
arXiv Detail & Related papers (2022-06-10T02:14:59Z)
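The representational trick in the Neural Laplace entry above is that, in the Laplace domain, trajectories with jumps and history-dependence become sums of complex exponentials. A minimal reconstruction sketch follows; the poles and residues here are fixed by hand, whereas a Neural-Laplace-style network would predict the Laplace representation:

```python
import torch

def reconstruct(t, poles, residues):
    """Evaluate x(t) = Re[ sum_k c_k * exp(s_k * t) ]: a trajectory expressed
    as a sum of complex exponentials with poles s_k and residues c_k."""
    modes = residues.unsqueeze(0) * torch.exp(poles.unsqueeze(0) * t.unsqueeze(1))
    return modes.sum(dim=1).real              # (T,) real-valued trajectory

t = torch.linspace(0.0, 5.0, 200)
poles = torch.tensor([-0.2 + 3.0j, -0.5 + 0.0j])   # damped oscillation + decay
residues = torch.tensor([1.0 + 0.0j, 0.5 + 0.0j])
x = reconstruct(t, poles, residues)
```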
- Physics Informed RNN-DCT Networks for Time-Dependent Partial Differential Equations [62.81701992551728]
We present a physics-informed framework for solving time-dependent partial differential equations.
Our model utilizes discrete cosine transforms to encode spatial frequencies and recurrent neural networks to process the time evolution.
We show experimental results on the Taylor-Green vortex solution to the Navier-Stokes equations.
arXiv Detail & Related papers (2022-02-24T20:46:52Z)
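The spatial half of the RNN-DCT idea above can be sketched with a truncated discrete cosine transform; the truncation size and helper names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_encode(field, k=16):
    """Keep only the k x k lowest-frequency DCT coefficients of a 2D field:
    the spatial encoding step of an RNN-DCT-style model."""
    return dctn(field, norm="ortho")[:k, :k]

def dct_decode(coeffs, shape):
    """Zero-pad the truncated coefficients and invert the transform."""
    full = np.zeros(shape)
    full[: coeffs.shape[0], : coeffs.shape[1]] = coeffs
    return idctn(full, norm="ortho")

# A recurrent network (e.g. a GRU) would advance the flattened coefficient
# vector in time; decoding returns to the spatial grid only for readout.
```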
- Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
- Lagrangian Neural Network with Differential Symmetries and Relational Inductive Bias [5.017136256232997]
We present a momentum conserving Lagrangian neural network (MCLNN) that learns the Lagrangian of a system.
We also show that the developed model can generalize to systems of arbitrary size.
arXiv Detail & Related papers (2021-10-07T08:49:57Z)
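A common route to the momentum conservation claimed in the MCLNN entry above is to build the learned energy from relative coordinates only, so that translation invariance, and hence, by Noether's theorem, conservation of total linear momentum, holds by construction. A sketch under that assumption (1D particles; module names are mine):

```python
import torch
import torch.nn as nn

class PairwisePotential(nn.Module):
    """Potential built from pairwise distances only. The resulting Lagrangian
    is translation-invariant, so total linear momentum is conserved exactly
    by construction rather than approximately via a loss term."""
    def __init__(self, hidden=32):
        super().__init__()
        self.pair = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))

    def forward(self, q):                     # q: (N,) particle positions in 1D
        d = (q.unsqueeze(0) - q.unsqueeze(1)).abs()
        iu = torch.triu_indices(q.numel(), q.numel(), offset=1)
        return self.pair(d[iu[0], iu[1]].unsqueeze(-1)).sum()
```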
- ContactNets: Learning Discontinuous Contact Dynamics with Smooth, Implicit Representations [4.8986598953553555]
Our method learns parameterizations of inter-body signed distance and contact-frame Jacobians.
Our method can predict realistic impact, non-penetration, and stiction when trained on 60 seconds of real-world data.
arXiv Detail & Related papers (2020-09-23T14:51:08Z)
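For the ContactNets entry, the learned quantity is an inter-body signed distance (plus contact-frame Jacobians, omitted here). A toy version of the training signal might look like the following; the two loss terms are simplified stand-ins I wrote for illustration, not the paper's full implicit formulation:

```python
import torch
import torch.nn as nn

signed_distance = nn.Sequential(          # learned signed distance phi(q)
    nn.Linear(3, 64), nn.Softplus(),
    nn.Linear(64, 1),
)

def contact_consistency_loss(q, in_contact):
    """q: (B, 3) configurations; in_contact: (B,) bool labels. States seen in
    contact should lie on the phi = 0 surface; all states should satisfy
    non-penetration, phi >= 0."""
    phi = signed_distance(q).squeeze(-1)
    on_surface = (phi[in_contact] ** 2).sum()   # phi ~ 0 at observed contacts
    no_penetration = torch.relu(-phi).sum()     # penalize phi < 0 everywhere
    return on_surface + no_penetration
```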
This list is automatically generated from the titles and abstracts of the papers on this site.