Learning Rigid Body Dynamics with Lagrangian Graph Neural Network
- URL: http://arxiv.org/abs/2209.11588v1
- Date: Fri, 23 Sep 2022 13:41:54 GMT
- Title: Learning Rigid Body Dynamics with Lagrangian Graph Neural Network
- Authors: Ravinder Bhattoo, Sayan Ranu, N. M. Anoop Krishnan
- Abstract summary: We present a Lagrangian graph neural network (LGNN) that can learn the dynamics of rigid bodies by exploiting their topology.
We show that the LGNN can be used to model the dynamics and stability of complex real-world structures such as tensegrity structures.
- Score: 5.560715621814096
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lagrangian and Hamiltonian neural networks (LNN and HNN respectively) encode
strong inductive biases that allow them to outperform other models of physical
systems significantly. However, these models have, thus far, mostly been
limited to simple systems such as pendulums and springs or a single rigid body
such as a gyroscope or a rigid rotor. Here, we present a Lagrangian graph
neural network (LGNN) that can learn the dynamics of rigid bodies by exploiting
their topology. We demonstrate the performance of LGNN by learning the dynamics
of ropes, chains, and trusses, with the bars modeled as rigid bodies. LGNN also
generalizes well: trained on chains with a few segments, it can simulate chains
with a large number of links and arbitrary link lengths. We also show that LGNN
can simulate unseen hybrid systems combining bars and chains, on which it has
not been trained.
Specifically, we show that the LGNN can be used to model the dynamics and
stability of complex real-world structures such as tensegrity structures.
Finally, we discuss the non-diagonal nature of the mass matrix and its ability
to generalize in complex systems.
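For context, the standard Lagrangian-neural-network recipe that LGNN builds on recovers accelerations from a learned Lagrangian L(q, q̇) by solving the Euler-Lagrange equations; the Hessian of L with respect to the velocities plays the role of the (generally non-diagonal) mass matrix:

\ddot{q} = M^{-1}\left( \frac{\partial L}{\partial q} - \left( \nabla_{q} \nabla_{\dot{q}} L \right) \dot{q} \right), \qquad M = \nabla_{\dot{q}}^{2} L

Below is a minimal sketch of this recipe in JAX. It is an illustrative assumption, not the authors' released code: a toy MLP stands in for the paper's graph neural network, and all parameter names and sizes are made up for the example.

    import jax
    import jax.numpy as jnp

    def lagrangian(params, q, qdot):
        # Learned scalar Lagrangian L(q, qdot); a toy two-layer MLP
        # stands in for the LGNN's graph network (illustrative only).
        x = jnp.concatenate([q, qdot])
        h = jnp.tanh(params["W1"] @ x + params["b1"])
        return (params["W2"] @ h + params["b2"])[0]

    def acceleration(params, q, qdot):
        # Solve the Euler-Lagrange equations for qddot:
        #   M qddot = dL/dq - (d(dL/dqdot)/dq) qdot, with M = d2L/dqdot2.
        # M is the learned, generally non-diagonal mass matrix.
        dL_dq = jax.grad(lagrangian, argnums=1)(params, q, qdot)
        M = jax.hessian(lagrangian, argnums=2)(params, q, qdot)
        C = jax.jacfwd(jax.grad(lagrangian, argnums=2), argnums=1)(params, q, qdot)
        n = q.shape[0]
        # Small jitter keeps the toy, randomly initialized M invertible.
        return jnp.linalg.solve(M + 1e-4 * jnp.eye(n), dL_dq - C @ qdot)

    # Illustrative usage with random parameters (n dofs, hidden width h).
    key1, key2 = jax.random.split(jax.random.PRNGKey(0))
    n, h = 4, 32
    params = {
        "W1": 0.1 * jax.random.normal(key1, (h, 2 * n)),
        "b1": jnp.zeros(h),
        "W2": 0.1 * jax.random.normal(key2, (1, h)),
        "b2": jnp.zeros(1),
    }
    qddot = acceleration(params, jnp.zeros(n), jnp.ones(n))  # shape (n,)

In training, the predicted accelerations would be integrated and matched against observed trajectories, so the Lagrangian is learned without ever being supervised directly.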
Related papers
- ControlSynth Neural ODEs: Modeling Dynamical Systems with Guaranteed Convergence [1.1720409777196028]
Neural ODEs (NODEs) are continuous-time neural networks (NNs) that can process data without being restricted to fixed time intervals.
We show that despite their highly nonlinear nature, convergence can be guaranteed via tractable linear inequalities.
In the composition of CSODEs, we introduce an extra control term to capture dynamics at different scales simultaneously (a generic NODE sketch follows this entry).
arXiv Detail & Related papers (2024-11-04T17:20:42Z)
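As context for the entry above, here is a generic neural-ODE sketch (not the CSODE architecture from the paper; the vector field, fixed-step Euler integrator, and all names are illustrative assumptions): the state evolves as dx/dt = f_theta(x, t), and predictions come from numerically integrating the learned f_theta.

    import jax
    import jax.numpy as jnp

    def vector_field(params, x, t):
        # Learned dynamics dx/dt = f_theta(x, t); a toy one-layer MLP.
        inp = jnp.concatenate([x, jnp.array([t])])
        return params["W2"] @ jnp.tanh(params["W1"] @ inp + params["b1"])

    def integrate(params, x0, t0, t1, steps=100):
        # Fixed-step Euler integration of the learned ODE from t0 to t1;
        # real NODE implementations use adaptive solvers and the adjoint
        # method for memory-efficient gradients.
        dt = (t1 - t0) / steps
        def step(carry, _):
            x, t = carry
            return (x + dt * vector_field(params, x, t), t + dt), None
        (x, _), _ = jax.lax.scan(step, (x0, t0), None, length=steps)
        return x
        # e.g. x1 = integrate(params, x0, 0.0, 1.0) with suitably shaped params

Because the model is a continuous-time flow, it can be evaluated at arbitrary, even irregularly spaced, time points, which is what frees NODEs from fixed time intervals.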
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs)
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work is a step towards the practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- SEGNO: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases [66.61789780666727]
We show how second-order continuity can be incorporated into GNNs while maintaining the equivariance property.
We also offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states.
Our model yields a significant improvement over the state-of-the-art baselines.
arXiv Detail & Related papers (2023-08-25T07:15:58Z)
- Discovering Symbolic Laws Directly from Trajectories with Hamiltonian Graph Neural Networks [5.824034325431987]
We present a Hamiltonian graph neural network (HGNN) that learns the dynamics of systems directly from their trajectory.
We demonstrate the performance of HGNN on n-springs, n-pendulums, gravitational systems, and binary Lennard-Jones systems.
arXiv Detail & Related papers (2023-07-11T14:43:25Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Unravelling the Performance of Physics-informed Graph Neural Networks for Dynamical Systems [5.787429262238507]
We evaluate the performance of graph neural networks (GNNs) and their variants with explicit constraints and different architectures.
Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance.
All the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulate large-scale realistic systems.
arXiv Detail & Related papers (2022-11-10T12:29:30Z)
- Learning the Dynamics of Particle-based Systems with Lagrangian Graph Neural Networks [5.560715621814096]
We present a framework, namely, Lagrangian graph neural network (LGnn), that provides a strong inductive bias to learn the Lagrangian of a particle-based system directly from the trajectory.
We show zero-shot generalizability by simulating systems two orders of magnitude larger than the training system, as well as hybrid systems unseen by the model.
arXiv Detail & Related papers (2022-09-03T18:38:17Z)
- Equivariant Graph Mechanics Networks with Constraints [83.38709956935095]
We propose the Graph Mechanics Network (GMN), which is efficient, equivariant, and constraint-aware.
GMN represents, by generalized coordinates, the forward kinematics information (positions and velocities) of a structural object.
Extensive experiments support the advantages of GMN compared to the state-of-the-art GNNs in terms of prediction accuracy, constraint satisfaction and data efficiency.
arXiv Detail & Related papers (2022-03-12T14:22:14Z)
- Lagrangian Neural Network with Differential Symmetries and Relational Inductive Bias [5.017136256232997]
We present a momentum-conserving Lagrangian neural network (MCLNN) that learns the Lagrangian of a system.
We also show that the model can generalize to systems of arbitrary size.
arXiv Detail & Related papers (2021-10-07T08:49:57Z)
- Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints [49.66841118264278]
We introduce a series of challenging chaotic and extended-body systems to push the limits of current approaches.
Our experiments show that Cartesian coordinates with explicit constraints lead to a 100x improvement in accuracy and data efficiency.
arXiv Detail & Related papers (2020-10-26T13:35:16Z)
- Stability of Algebraic Neural Networks to Small Perturbations [179.55535781816343]
Algebraic neural networks (AlgNNs) are composed of a cascade of layers, each associated with an algebraic signal model.
We show how any architecture that uses a formal notion of convolution can be stable beyond particular choices of the shift operator.
arXiv Detail & Related papers (2020-10-22T09:10:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.