Equivariant Graph Hierarchy-Based Neural Networks
- URL: http://arxiv.org/abs/2202.10643v1
- Date: Tue, 22 Feb 2022 03:11:47 GMT
- Title: Equivariant Graph Hierarchy-Based Neural Networks
- Authors: Jiaqi Han, Yu Rong, Tingyang Xu, Fuchun Sun, Wenbing Huang
- Abstract summary: We propose Equivariant Hierarchy-based Graph Networks (EGHNs), which consist of three key components: generalized Equivariant Matrix Message Passing (EMMP), E-Pool, and E-UpPool.
Extensive experimental evaluations verify the effectiveness of EGHN on several applications, including multi-object dynamics simulation, motion capture, and protein dynamics modeling.
- Score: 53.60804845045526
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Equivariant Graph Neural Networks (EGNs) are powerful in characterizing the dynamics of multi-body physical systems. Existing EGNs conduct flat message passing, which cannot capture the spatial/dynamical hierarchy of complex systems, limiting substructure discovery and global information fusion. In this paper, we propose Equivariant Hierarchy-based Graph Networks (EGHNs), which consist of three key components: generalized Equivariant Matrix Message Passing (EMMP), E-Pool, and E-UpPool. In particular, EMMP improves the expressivity of conventional equivariant message passing, E-Pool aggregates the quantities of low-level nodes into high-level clusters, and E-UpPool leverages the high-level information to update the dynamics of the low-level nodes. As their names imply, both E-Pool and E-UpPool are guaranteed to be equivariant, respecting physical symmetry. Extensive experimental evaluations verify the effectiveness of EGHN on several applications, including multi-object dynamics simulation, motion capture, and protein dynamics modeling.
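
For intuition, the following is a minimal NumPy sketch of the equivariant pooling/unpooling idea described above, under the assumption that soft cluster assignments are computed from invariant node features only; the function names, shapes, and the plain weight matrix `W` are illustrative stand-ins and do not reproduce the paper's exact EMMP/E-Pool/E-UpPool formulations.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def e_pool(x, h, W):
    """Pool N low-level nodes into K high-level clusters.

    x : (N, 3) node coordinates (equivariant quantities)
    h : (N, F) invariant node features
    W : (F, K) assignment weights (stand-in for a learned module)
    """
    S = softmax(h @ W, axis=-1)        # (N, K) soft assignments, invariant by construction
    mass = S.sum(axis=0)[:, None]      # (K, 1) total assignment weight per cluster
    X = (S.T @ x) / mass               # cluster coordinates = weighted means of node coordinates
    H = S.T @ h                        # pooled invariant features
    return X, H, S

def e_uppool(x, X_new, X_old, S):
    """Push the update of cluster coordinates back down to their member nodes."""
    return x + S @ (X_new - X_old)     # each node inherits a weighted coordinate shift

# Quick check: rotating/translating the inputs transforms the pooled coordinates identically.
rng = np.random.default_rng(0)
x, h, W = rng.normal(size=(10, 3)), rng.normal(size=(10, 8)), rng.normal(size=(8, 4))
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
t = rng.normal(size=3)
X1, _, _ = e_pool(x @ R.T + t, h, W)
X2, _, _ = e_pool(x, h, W)
assert np.allclose(X1, X2 @ R.T + t)
```

Because the assignments depend only on invariant features, each cluster coordinate is a convex combination of node coordinates, so any rotation or translation of the inputs is passed through unchanged; the same argument covers the coordinate differences used in `e_uppool`.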
Related papers
- Discovering Message Passing Hierarchies for Mesh-Based Physics Simulation [61.89682310797067] (2024-10-03)
  We introduce DHMP, which learns Dynamic Hierarchies for Message Passing networks through a differentiable node selection method.
  Our experiments demonstrate the effectiveness of DHMP, achieving a 22.7% average improvement over recent fixed-hierarchy message passing networks.
- Incorporating Arbitrary Matrix Group Equivariance into KANs [69.30866522377694] (2024-10-01)
  Kolmogorov-Arnold Networks (KANs) have seen great success in scientific domains, but spline functions may not respect symmetry in tasks, which is crucial prior knowledge in machine learning.
  We propose Equivariant Kolmogorov-Arnold Networks (EKAN) to broaden their applicability to more fields.
- Spatiotemporal Learning on Cell-embedded Graphs [6.8090864965073274] (2024-09-26)
  We introduce a learnable cell attribution to the node-edge message passing process, which better captures the spatial dependency of regional features.
  Experiments on various PDE systems and one real-world dataset demonstrate that CeGNN achieves superior performance compared with other baseline models.
- Equivariant Spatio-Temporal Attentive Graph Networks to Simulate Physical Dynamics [32.115887916401036] (2024-05-21)
  We develop an equivariant version of Fourier-temporal GNNs to represent and simulate the dynamics of physical systems.
  We evaluate our model on three real datasets at the molecular, protein, and macro levels.
- Spline-based neural network interatomic potentials: blending classical and machine learning models [0.0] (2023-10-04)
  We introduce a new MLIP framework which blends the simplicity of spline-based MEAM potentials with the flexibility of a neural network architecture.
  We demonstrate how this framework can be used to probe the boundary between classical and ML interatomic potentials.
- Spatial Attention Kinetic Networks with E(n)-Equivariance [0.951828574518325] (2023-01-21)
  Neural networks that are equivariant to rotations, translations, reflections, and permutations on n-dimensional geometric space have shown promise in physical modeling.
  We propose a simple alternative functional form that uses neurally parametrized linear combinations of edge vectors to achieve equivariance (a generic sketch of this edge-vector construction appears after this list).
  We design spatial attention kinetic networks with E(n)-equivariance, or SAKE, which are competitive in many-body system modeling tasks while being significantly faster.
- Equivariant vector field network for many-body system modeling [65.22203086172019] (2021-10-26)
  The Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
  We evaluate our method on predicting trajectories of simulated Newtonian mechanics systems with both fully and partially observed data.
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824] (2021-10-07)
  We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types (a toy sketch appears after this list).
  We show that FA-based models have maximal expressive power in a broad setting.
  We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379] (2020-06-26)
  Graph neural networks (GNNs) aim to model local graph structures and capture hierarchical patterns by aggregating information from neighbors.
  Developing an effective aggregation strategy for each node is challenging given complex graphs and sparse features.
  We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs as a combined learning process.
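
The SAKE entry above refers to equivariant updates built from invariantly weighted edge vectors. Below is a generic sketch of that construction (closer to an EGNN-style coordinate update than to SAKE's exact attention-based form); `weight_net` is a hypothetical placeholder for any learned map from invariant pair features to a scalar.

```python
import numpy as np

def edge_vector_update(x, h, weight_net):
    """Equivariant coordinate update: x_i <- x_i + sum_j w_ij * (x_i - x_j).

    x : (N, 3) coordinates, h : (N, F) invariant features.
    weight_net : callable mapping invariant pair features (..., 2F+1) -> (..., 1).
    """
    N, F = h.shape
    dx = x[:, None, :] - x[None, :, :]                 # (N, N, 3) edge vectors, translation-invariant
    dist = np.linalg.norm(dx, axis=-1, keepdims=True)  # (N, N, 1) pairwise distances, fully invariant
    h_i = np.repeat(h[:, None, :], N, axis=1)          # features of the updated node i
    h_j = np.repeat(h[None, :, :], N, axis=0)          # features of neighbor j
    w = weight_net(np.concatenate([h_i, h_j, dist], axis=-1))  # (N, N, 1) invariant weights
    return x + (w * dx).sum(axis=1)                    # rotation-equivariant coordinate shift

# Example with a random linear map standing in for the learned weight network.
rng = np.random.default_rng(1)
x, h = rng.normal(size=(6, 3)), rng.normal(size=(6, 4))
theta = 0.1 * rng.normal(size=(2 * 4 + 1, 1))
x_new = edge_vector_update(x, h, lambda z: z @ theta)
```

Rotating the inputs rotates `dx` while leaving `dist`, `h`, and hence `w` unchanged, so the update commutes with rotations; translations cancel in `dx` and are carried by the `x +` term.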
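
For the Frame Averaging entry, here is a toy NumPy sketch of the idea for the invariant case only, using PCA-based frames for a point cloud and averaging over all eight axis-sign choices (a simplified variant of the paper's construction; it assumes non-degenerate principal components, and `backbone` is a hypothetical placeholder for any symmetry-agnostic network).

```python
import numpy as np
from itertools import product

def pca_frames(x):
    """Candidate frames for a point cloud x (N, 3): centroid plus signed PCA axes."""
    t = x.mean(axis=0)
    _, vecs = np.linalg.eigh((x - t).T @ (x - t))   # columns = principal axes
    # Enumerate the 2^3 sign choices to resolve the eigenvector sign ambiguity.
    return [(vecs * np.array(s), t) for s in product([1.0, -1.0], repeat=3)]

def frame_average(backbone, x):
    """Invariant prediction: average the backbone over the canonicalized copies of x."""
    return np.mean([backbone((x - t) @ R) for R, t in pca_frames(x)], axis=0)

# Example: an arbitrary, non-invariant backbone becomes invariant to rigid motions.
rng = np.random.default_rng(2)
x = rng.normal(size=(8, 3))
backbone = lambda pts: np.tanh(pts).sum()
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
shift = rng.normal(size=3)
assert np.isclose(frame_average(backbone, x), frame_average(backbone, x @ Q.T + shift))
```

Averaging over the whole set of candidate frames makes the result independent of which frame a rigid motion happens to select, which is the core of the FA argument; the equivariant (vector-output) case additionally maps each frame's output back with that frame.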
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.