Unravelling the Performance of Physics-informed Graph Neural Networks for Dynamical Systems
- URL: http://arxiv.org/abs/2211.05520v1
- Date: Thu, 10 Nov 2022 12:29:30 GMT
- Title: Unravelling the Performance of Physics-informed Graph Neural Networks for Dynamical Systems
- Authors: Abishek Thangamuthu, Gunjan Kumar, Suresh Bishnoi, Ravinder Bhattoo, N M Anoop Krishnan, Sayan Ranu
- Abstract summary: We evaluate the performance of graph neural networks (GNNs) and their variants with explicit constraints and different architectures.
Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance.
All the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulate large-scale realistic systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, graph neural networks have been attracting considerable
attention for simulating dynamical systems, owing to their inductive nature,
which leads to zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning
frameworks have been shown to give superior performance in learning the
dynamics of physical systems. There is a growing volume of literature that
attempts to combine these two approaches. Here, we evaluate the performance of
thirteen different graph neural networks, namely, Hamiltonian and Lagrangian
graph neural networks, graph neural ODE, and their variants with explicit
constraints and different architectures. We briefly explain the theoretical
formulation highlighting the similarities and differences in the inductive
biases and graph architecture of these systems. We evaluate these models on
spring, pendulum, gravitational, and 3D deformable solid systems to compare the
performance in terms of rollout error, conserved quantities such as energy and
momentum, and generalizability to unseen system sizes. Our study demonstrates
that GNNs with additional inductive biases, such as explicit constraints and
decoupling of kinetic and potential energies, exhibit significantly enhanced
performance. Further, all the physics-informed GNNs exhibit zero-shot
generalizability to system sizes an order of magnitude larger than the training
system, thus providing a promising route to simulate large-scale realistic
systems.
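The "decoupling of kinetic and potential energies" bias highlighted above can be made concrete with a small sketch. The code below is illustrative only, not the paper's models: it replaces the learned networks for T(p) and V(q) with fixed analytic stand-ins and uses finite differences in place of autodiff, then rolls the system out with Hamilton's equations and a symplectic update so that energy stays nearly conserved.

```python
import numpy as np

# Illustrative sketch (not the paper's code): a Hamiltonian-style rollout
# where kinetic energy T(p) and potential energy V(q) are modeled by
# separate functions, mirroring the decoupled-energies inductive bias.
# T and V here are analytic stand-ins for learned networks.

def T(p, m=1.0):
    return 0.5 * p**2 / m          # kinetic energy, depends on momentum only

def V(q, k=1.0):
    return 0.5 * k * q**2          # potential energy, depends on position only

def grad(f, x, eps=1e-6):
    # central finite difference, standing in for autodiff on a learned network
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def rollout(q0, p0, dt=0.01, steps=1000):
    # symplectic (semi-implicit) Euler: pdot = -dV/dq, qdot = dT/dp
    q, p = q0, p0
    traj = []
    for _ in range(steps):
        p = p - dt * grad(V, q)
        q = q + dt * grad(T, p)
        traj.append((q, p, T(p) + V(q)))
    return traj

traj = rollout(1.0, 0.0)
energies = [e for _, _, e in traj]
drift = max(energies) - min(energies)
print(f"energy drift over rollout: {drift:.2e}")  # stays small for the symplectic update
```

Because the update is symplectic, the total energy oscillates within a narrow band rather than drifting, which is the behavior the energy-violation metrics in the paper probe.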
Related papers
- Learning System Dynamics without Forgetting [60.08612207170659]
Predicting trajectories of systems with unknown dynamics is crucial in various research fields, including physics and biology.
We present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics.
We construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Signed Graph Neural Ordinary Differential Equation for Modeling Continuous-time Dynamics [13.912268915939656]
The prevailing approach of integrating graph neural networks with ordinary differential equations has demonstrated promising performance.
We introduce a novel approach: a signed graph neural ordinary differential equation, adeptly addressing the limitations of miscapturing signed information.
Our proposed solution boasts both flexibility and efficiency.
arXiv Detail & Related papers (2023-12-18T13:45:33Z)
- SEGNO: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases [66.61789780666727]
We show how the second-order continuity can be incorporated into GNNs while maintaining the equivariant property.
We also offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states.
Our model yields a significant improvement over the state-of-the-art baselines.
arXiv Detail & Related papers (2023-08-25T07:15:58Z)
- Graph Neural Stochastic Differential Equations for Learning Brownian Dynamics [6.362339104761225]
We propose Brownian graph neural networks (BROGNET), a framework to learn Brownian dynamics directly from trajectories.
We show that BROGNET conserves the linear momentum of the system, which, in turn, provides superior performance in learning the dynamics.
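One simple way a graph network can conserve linear momentum by construction, sketched below under stated assumptions (this is not BROGNET's actual architecture): if every pairwise force is antisymmetric, f_ij = -f_ji, as Newton's third law requires, then the forces sum to zero and an update leaves total momentum unchanged. The `pair_force` function here is a toy stand-in for a learned edge network.

```python
import numpy as np

# Hedged sketch: momentum conservation via antisymmetric pairwise forces.
# pair_force is a toy spring-like interaction standing in for a learned
# edge network; any function of (xj - xi) that flips sign when i and j
# are swapped gives the same conservation property.

rng = np.random.default_rng(0)

def pair_force(xi, xj):
    return xj - xi                       # f_ij = -f_ji by construction

def total_forces(x):
    n = len(x)
    f = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if i != j:
                f[i] += pair_force(x[i], x[j])   # accumulate f_ij on node i
    return f

x = rng.normal(size=(5, 3))     # 5 particles in 3D
v = rng.normal(size=(5, 3))
p0 = v.sum(axis=0)              # total momentum before the update
v = v + 0.01 * total_forces(x)  # one explicit-Euler velocity update (unit masses)
p1 = v.sum(axis=0)
print(np.abs(p1 - p0).max())    # ~0: momentum conserved up to float error
```

The conservation holds for any graph topology, since each edge contributes equal and opposite forces to its two endpoints.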
arXiv Detail & Related papers (2023-06-20T10:30:46Z)
- E(3) Equivariant Graph Neural Networks for Particle-Based Fluid Mechanics [2.1401663582288144]
We demonstrate that equivariant graph neural networks have the potential to learn more accurate dynamic-interaction models.
We benchmark two well-studied fluid flow systems, namely the 3D decaying Taylor-Green vortex and the 3D reverse Poiseuille flow.
arXiv Detail & Related papers (2023-03-31T21:56:35Z)
- Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws abide by symmetry, which is a vital inductive bias accounting for model generalization.
Our model achieves on average over 3% enhancement in contact prediction accuracy across 8 scenarios on Physion and 2X lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z)
- Enhancing the Inductive Biases of Graph Neural ODE for Modeling Dynamical Systems [19.634451472032733]
We present a graph-based neural ODE, GNODE, to learn the time evolution of dynamical systems.
We show that, as with LNN and HNN, encoding the constraints explicitly can significantly improve the training efficiency and performance of GNODE.
We demonstrate that inducing these biases can enhance the performance of the model by orders of magnitude in terms of both energy violation and rollout error.
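The benefit of encoding constraints explicitly can be illustrated with a hypothetical sketch (not the GNODE code): a point mass on a rigid rod of length L is integrated with an unconstrained predicted force, here plain gravity standing in for a network output, and each state is then projected back onto the constraint manifold |q| = L, so the constraint holds exactly regardless of prediction error.

```python
import numpy as np

# Hypothetical sketch: explicit constraint projection for a learned ODE.
# A pendulum bob must satisfy |q| = L; after each unconstrained update we
# project the position onto the rod and remove the velocity component
# along it, so the constraint is enforced by construction.

L = 1.0
g = np.array([0.0, -9.81])

def step(q, v, dt=0.01):
    v = v + dt * g                      # unconstrained update from "predicted" force
    q = q + dt * v
    q = L * q / np.linalg.norm(q)       # project position onto |q| = L
    r = q / L
    v = v - np.dot(v, r) * r            # remove radial velocity along the rod
    return q, v

q = np.array([np.sin(0.5), -np.cos(0.5)]) * L   # start 0.5 rad from vertical
v = np.zeros(2)
for _ in range(2000):
    q, v = step(q, v)
print(f"|q| after rollout: {np.linalg.norm(q):.6f}")  # remains at L up to float error
```

Without the projection step, integration error would let the bob drift off the rod; with it, constraint violation cannot accumulate, which is the intuition behind the orders-of-magnitude gains reported for constrained variants.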
arXiv Detail & Related papers (2022-09-22T02:20:29Z)
- Learning the Dynamics of Particle-based Systems with Lagrangian Graph Neural Networks [5.560715621814096]
We present a framework, namely, Lagrangian graph neural network (LGnn), that provides a strong inductive bias to learn the Lagrangian of a particle-based system directly from the trajectory.
We show the zero-shot generalizability of the system by simulating systems two orders of magnitude larger than the trained one and also hybrid systems that are unseen by the model.
arXiv Detail & Related papers (2022-09-03T18:38:17Z)
- Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
- Continuous-in-Depth Neural Networks [107.47887213490134]
We first show that ResNets fail to be meaningful dynamical systems in this richer sense.
We then demonstrate that neural network models can learn to represent continuous dynamical systems.
We introduce ContinuousNet as a continuous-in-depth generalization of ResNet architectures.
arXiv Detail & Related papers (2020-08-05T22:54:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.