SEGNO: Generalizing Equivariant Graph Neural Networks with Physical
Inductive Biases
- URL: http://arxiv.org/abs/2308.13212v2
- Date: Tue, 12 Mar 2024 07:49:44 GMT
- Title: SEGNO: Generalizing Equivariant Graph Neural Networks with Physical
Inductive Biases
- Authors: Yang Liu, Jiashun Cheng, Haihong Zhao, Tingyang Xu, Peilin Zhao, Fugee
Tsung, Jia Li, Yu Rong
- Abstract summary: We show how the second-order continuity can be incorporated into GNNs while maintaining the equivariant property.
We also offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states.
Our model yields a significant improvement over the state-of-the-art baselines.
- Score: 66.61789780666727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) with equivariant properties have emerged as
powerful tools for modeling complex dynamics of multi-object physical systems.
However, their generalization ability is limited by the inadequate
consideration of physical inductive biases: (1) Existing studies overlook the
continuity of transitions among system states, opting to employ several
discrete transformation layers to learn the direct mapping between two adjacent
states; (2) Most models only account for first-order velocity information,
despite the fact that many physical systems are governed by second-order motion
laws. To incorporate these inductive biases, we propose the Second-order
Equivariant Graph Neural Ordinary Differential Equation (SEGNO). Specifically,
we show how the second-order continuity can be incorporated into GNNs while
maintaining the equivariant property. Furthermore, we offer theoretical
insights into SEGNO, highlighting that it can learn a unique trajectory between
adjacent states, which is crucial for model generalization. Additionally, we
prove that the discrepancy between this learned trajectory of SEGNO and the
true trajectory is bounded. Extensive experiments on complex dynamical systems
including molecular dynamics and motion capture demonstrate that our model
yields a significant improvement over the state-of-the-art baselines.
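The abstract's two inductive biases (continuous transitions and second-order motion) can be illustrated with a minimal numpy sketch: an E(3)-equivariant acceleration built from invariant pairwise distances, integrated over several substeps between adjacent states. The acceleration function and its weights are hypothetical stand-ins for SEGNO's GNN block, not the paper's actual architecture.

```python
import numpy as np

def equivariant_acceleration(x, weights):
    """a_i = sum_j phi(||x_j - x_i||^2) (x_j - x_i): phi acts only on the
    rotation-invariant squared distance, so the output rotates with the
    input (hypothetical stand-in for SEGNO's equivariant GNN block)."""
    diff = x[None, :, :] - x[:, None, :]      # (N, N, 3), x_j - x_i
    d2 = (diff ** 2).sum(-1, keepdims=True)   # invariant edge features
    w1, w2 = weights
    phi = np.tanh(d2 * w1) * w2               # scalar edge weights
    return (phi * diff).sum(axis=1)           # equivariant vector sum

def segno_step(x, v, weights, dt=0.01, n_substeps=10):
    """Second-order neural ODE between adjacent states: dx/dt = v,
    dv/dt = a_theta(x), integrated with explicit Euler substeps to model
    the continuous transition rather than one discrete mapping."""
    h = dt / n_substeps
    for _ in range(n_substeps):
        a = equivariant_acceleration(x, weights)
        x = x + h * v
        v = v + h * a
    return x, v

# Equivariance check: rotating the inputs rotates the outputs identically.
rng = np.random.default_rng(0)
x, v = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
weights = (0.3, 0.5)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
x1, v1 = segno_step(x @ Q.T, v @ Q.T, weights)
x2, v2 = segno_step(x, v, weights)
assert np.allclose(x1, x2 @ Q.T, atol=1e-8)
assert np.allclose(v1, v2 @ Q.T, atol=1e-8)
```

The check at the end verifies the property the abstract emphasizes: because the learned scalars depend only on invariant distances, integration commutes with rotation of the whole system.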
Related papers
- Relaxing Continuous Constraints of Equivariant Graph Neural Networks for Physical Dynamics Learning [39.25135680793105]
We propose a general Discrete Equivariant Graph Neural Network (DEGNN) that guarantees equivariance to a given discrete point group.
Specifically, we show that such discrete equivariant message passing could be constructed by transforming geometric features into permutation-invariant embeddings.
We show that DEGNN is data efficient, learning with less data, and can generalize across scenarios such as unobserved orientation.
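NCLaw's split between a known PDE and a learned material model can be sketched in numpy for a 1D elastic rod: the momentum balance rho v_t = sigma_x is hard-coded, and only the stress-strain relation is a network. The tiny MLP and boundary treatment here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def constitutive_nn(strain, w1, w2):
    """Hypothetical learned constitutive law sigma = f_theta(strain);
    a one-layer MLP standing in for NCLaw's network."""
    return np.tanh(strain * w1) * w2

def rod_step(u, v, params, dx=0.1, dt=0.001, rho=1.0):
    """One explicit step of the *known* 1D momentum balance
    rho v_t = d(sigma)/dx; only the material response is learned."""
    strain = np.diff(u) / dx                    # epsilon = du/dx on edges
    sigma = constitutive_nn(strain, *params)    # learned stress
    # zero stress outside the rod (free ends, an assumption of this sketch)
    dsigma = np.diff(sigma, prepend=0.0, append=0.0) / dx
    v = v + dt * dsigma / rho
    u = u + dt * v
    return u, v
```

Because the PDE structure is fixed, the learned part is small and physically interpretable, which is the prior the abstract argues should be enforced rather than learned end to end.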
arXiv Detail & Related papers (2024-06-24T03:37:51Z)
- Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Unravelling the Performance of Physics-informed Graph Neural Networks for Dynamical Systems [5.787429262238507]
We evaluate the performance of graph neural networks (GNNs) and their variants with explicit constraints and different architectures.
Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance.
All the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulate large-scale realistic systems.
arXiv Detail & Related papers (2022-11-10T12:29:30Z)
- Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws abide by symmetry, which is a vital inductive bias accounting for model generalization.
Our model achieves on average over 3% enhancement in contact prediction accuracy across 8 scenarios on Physion and 2X lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z)
- Equivariant Graph Mechanics Networks with Constraints [83.38709956935095]
We propose Graph Mechanics Network (GMN) which is efficient, equivariant and constraint-aware.
GMN represents, by generalized coordinates, the forward kinematics information (positions and velocities) of a structural object.
Extensive experiments support the advantages of GMN compared to the state-of-the-art GNNs in terms of prediction accuracy, constraint satisfaction and data efficiency.
arXiv Detail & Related papers (2022-03-12T14:22:14Z)
- Linearization and Identification of Multiple-Attractors Dynamical System through Laplacian Eigenmaps [8.161497377142584]
We propose a Graph-based spectral clustering method that takes advantage of a velocity-augmented kernel to connect data-points belonging to the same dynamics.
We prove that there always exists a set of 2-dimensional embedding spaces in which the sub-dynamics are linear, and an n-dimensional embedding in which they are quasi-linear.
We learn a diffeomorphism from the Laplacian embedding space to the original space and show that the Laplacian embedding leads to good reconstruction accuracy and a faster training time.
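The velocity-augmented kernel and Laplacian embedding can be sketched in a few lines of numpy. The kernel's exact form below (a product of Gaussians on position and velocity differences) is an assumption for illustration; the paper's construction may differ.

```python
import numpy as np

def velocity_augmented_kernel(X, V, sigma_x=1.0, sigma_v=1.0):
    """Connect points that are close in space AND move similarly, so that
    points on the same sub-dynamics cluster together (hedged sketch of a
    velocity-augmented kernel; the exact form is an assumption)."""
    dx2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dv2 = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-dx2 / (2 * sigma_x**2)) * np.exp(-dv2 / (2 * sigma_v**2))

def laplacian_eigenmaps(K, dim=2):
    """Embed with the smallest nontrivial eigenvectors of the normalized
    graph Laplacian L = I - D^{-1/2} K D^{-1/2}."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(K.sum(axis=1)))
    L = np.eye(len(K)) - d_inv_sqrt @ K @ d_inv_sqrt
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]   # skip the trivial constant eigenvector
```

Spectral clustering on these embedding coordinates would then separate the attractors before the linear sub-dynamics are identified in each embedding space.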
arXiv Detail & Related papers (2022-02-18T12:43:25Z)
- On Second Order Behaviour in Augmented Neural ODEs [69.8070643951126]
We consider Second Order Neural ODEs (SONODEs).
We show how the adjoint sensitivity method can be extended to SONODEs.
We extend the theoretical understanding of the broader class of Augmented NODEs (ANODEs).
arXiv Detail & Related papers (2020-06-12T14:25:31Z)
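SONODEs rest on the standard reduction of a second-order ODE to a first-order system, which a short numpy sketch makes concrete (the integrator and test dynamics below are illustrative; in the paper, f would be a learned network and the solver adaptive).

```python
import numpy as np

def second_order_to_first_order(f, x0, v0, t_grid):
    """Rewrite x'' = f(x, v) as the first-order system z' = (v, f(x, v))
    over z = (x, v), and integrate it with explicit Euler."""
    x, v = np.asarray(x0, float), np.asarray(v0, float)
    xs = [x.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = t1 - t0
        x, v = x + h * v, v + h * f(x, v)  # one Euler step on (x, v)
        xs.append(x.copy())
    return np.stack(xs)

# Example: x'' = -x (harmonic oscillator), whose exact solution is cos(t).
t = np.linspace(0.0, 1.0, 1001)
traj = second_order_to_first_order(lambda x, v: -x, [1.0], [0.0], t)
```

The adjoint sensitivity method the abstract mentions is then applied to this augmented first-order system to backpropagate through the solve.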
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.