Universally Invariant Learning in Equivariant GNNs
- URL: http://arxiv.org/abs/2510.13169v1
- Date: Wed, 15 Oct 2025 05:50:16 GMT
- Title: Universally Invariant Learning in Equivariant GNNs
- Authors: Jiacheng Cen, Anyi Li, Ning Lin, Tingyang Xu, Yu Rong, Deli Zhao, Zihe Wang, Wenbing Huang
- Abstract summary: Equivariant Graph Neural Networks (GNNs) have demonstrated significant success across various applications. We present a theoretically grounded framework for constructing complete equivariant GNNs. Our results demonstrate superior completeness and excellent performance with only a few layers.
- Score: 47.74131538835446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Equivariant Graph Neural Networks (GNNs) have demonstrated significant success across various applications. To achieve completeness -- that is, the universal approximation property over the space of equivariant functions -- the network must effectively capture the intricate multi-body interactions among different nodes. Prior methods attain this via deeper architectures, augmented body orders, or increased degrees of steerable features, often at high computational cost and without polynomial-time solutions. In this work, we present a theoretically grounded framework for constructing complete equivariant GNNs that is both efficient and practical. We prove that a complete equivariant GNN can be achieved through two key components: 1) a complete scalar function, referred to as the canonical form of the geometric graph; and 2) a full-rank steerable basis set. Leveraging this finding, we propose an efficient algorithm for constructing complete equivariant GNNs based on two common models: EGNN and TFN. Empirical results demonstrate that our model achieves superior completeness and excellent performance with only a few layers, thereby significantly reducing computational overhead while maintaining strong practical efficacy.
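To make the two-component recipe concrete, here is a minimal PyTorch sketch (ours, not the authors' EGNN/TFN construction): invariant scalars stand in for the canonical form, and three generically independent vectors per node stand in for a full-rank steerable basis. The function name, the scalar choices, and the basis choice are all illustrative assumptions.

```python
import torch
import torch.nn as nn

def equivariant_head(pos, mlp):
    """Toy 'invariant scalars x steerable basis' head.

    pos: (N, 3) coordinates; mlp: maps 2 invariant features to 3 coefficients.
    The output is rotation- and translation-equivariant because invariant
    coefficients weight vectors that rotate with the input.
    """
    center = pos.mean(dim=0, keepdim=True)
    rel = pos - center                                  # rotates with pos
    # Crude stand-in for the canonical form: two invariant scalars per node.
    d_center = rel.norm(dim=-1, keepdim=True)           # distance to centroid
    d_pair = torch.cdist(pos, pos).mean(dim=-1, keepdim=True)
    scalars = torch.cat([d_center, d_pair], dim=-1)     # (N, 2), invariant
    # Per-node steerable basis: rel, a softmax-mixed neighbor vector, and
    # their cross product -- generically full rank in 3D.
    v1 = rel
    v2 = torch.cdist(pos, pos).softmax(dim=-1) @ rel
    v3 = torch.cross(v1, v2, dim=-1)
    basis = torch.stack([v1, v2, v3], dim=1)            # (N, 3, 3)
    coeff = mlp(scalars)                                # (N, 3), invariant
    return center + (coeff.unsqueeze(-1) * basis).sum(dim=1)

# Example: mlp = nn.Sequential(nn.Linear(2, 16), nn.SiLU(), nn.Linear(16, 3))
```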
Related papers
- Are High-Degree Representations Really Unnecessary in Equivariant Graph Neural Networks? [19.85431519907118]
Equivariant Graph Neural Networks (GNNs) that incorporate E(3) symmetry have achieved significant success in various scientific applications. This paper explores the expressivity of equivariant GNNs on symmetric structures, including $k$-fold rotations and regular polyhedra. We propose HEGNN, a high-degree version of EGNN, to increase the expressivity by incorporating high-degree steerable vectors.
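HEGNN's key ingredient, steerable vectors of degree greater than 1, can be illustrated without a steerable-network library; below is a hand-rolled degree-2 feature (our sketch only; HEGNN itself handles general degrees):

```python
import torch

def degree2_feature(rel):
    """Degree-1 and degree-2 steerable features from edge vectors.

    rel: (E, 3) relative positions. The traceless symmetric outer product
    u u^T - I/3 transforms under the 5-dimensional l=2 representation of
    SO(3), i.e. it is a 'high-degree steerable vector' in matrix form.
    """
    u = rel / rel.norm(dim=-1, keepdim=True).clamp(min=1e-9)  # degree-1 part
    outer = u.unsqueeze(-1) * u.unsqueeze(-2)                 # (E, 3, 3)
    return outer - torch.eye(3, device=rel.device) / 3.0      # degree-2 part
```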
arXiv Detail & Related papers (2024-10-15T09:47:49Z)
- Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance [34.322201578399394]
Multi-Agent Reinforcement Learning (MARL) struggles with sample inefficiency and poor generalization.
We present Exploration-enhanced Equivariant Graph Neural Networks, or E2GN2.
E2GN2 demonstrates a significant improvement in sample efficiency, greater final reward convergence, and a 2x-5x gain over standard GNNs in our tests.
arXiv Detail & Related papers (2024-10-03T15:25:37Z)
- Relaxing Continuous Constraints of Equivariant Graph Neural Networks for Physical Dynamics Learning [39.25135680793105]
We propose a general Discrete Equivariant Graph Neural Network (DEGNN) that guarantees equivariance to a given discrete point group.
Specifically, we show that such discrete equivariant message passing could be constructed by transforming geometric features into permutation-invariant embeddings.
We show that DEGNN is data efficient, learning with less data, and can generalize across scenarios such as unobserved orientations; a toy construction appears below.
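DEGNN's actual construction maps geometric features into permutation-invariant embeddings; a standard way to obtain invariance to a finite point group, shown here only for intuition (our sketch, not the paper's algorithm), is to average a message network over the group orbit:

```python
import math
import torch

def cyclic_group_z(n):
    # Rotation matrices of the cyclic group C_n about the z-axis.
    mats = []
    for k in range(n):
        a = 2 * math.pi * k / n
        mats.append(torch.tensor([[math.cos(a), -math.sin(a), 0.0],
                                  [math.sin(a),  math.cos(a), 0.0],
                                  [0.0,          0.0,         1.0]]))
    return torch.stack(mats)                                # (G, 3, 3)

def group_invariant_messages(rel, group, mlp):
    # rel: (E, 3) edge vectors; mlp: any (3,) -> (F,) feature map.
    transformed = torch.einsum('gij,ej->gei', group, rel)   # (G, E, 3)
    return mlp(transformed).mean(dim=0)                     # (E, F), invariant
```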
arXiv Detail & Related papers (2024-06-24T03:37:51Z)
- Weisfeiler Leman for Euclidean Equivariant Machine Learning [3.0222726571099665]
First, we show that PPGN can simulate $2$-WL uniformly on all point clouds with low complexity.
Secondly, we show that $2$-WL tests can be extended to point clouds which include both positions and velocities.
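For intuition, here is a toy folklore 2-WL refinement on positions only, using rounded pairwise distances as initial pair colors (our illustration; the paper's uniform PPGN simulation and the velocity extension are more involved):

```python
import numpy as np

def two_wl_point_cloud(pos, rounds=3, decimals=6):
    """Folklore 2-WL on a point cloud with distance-based initial colors."""
    n = len(pos)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    # Initial pair colors: rounded distances (invariant to rigid motions).
    colors = [[round(float(d[i, j]), decimals) for j in range(n)] for i in range(n)]
    for _ in range(rounds):
        palette, new = {}, [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                # Refine (i, j) with the multiset of (c[i,k], c[k,j]) pairs.
                sig = (colors[i][j],
                       tuple(sorted((colors[i][k], colors[k][j]) for k in range(n))))
                new[i][j] = palette.setdefault(sig, len(palette))
        colors = new
    return colors  # differing color histograms distinguish point clouds
```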
arXiv Detail & Related papers (2024-02-04T13:25:18Z)
- Universal approximation property of invertible neural networks [76.95927093274392]
Invertible neural networks (INNs) are neural network architectures with invertibility by design.
Thanks to their invertibility and the tractability of Jacobian, INNs have various machine learning applications such as probabilistic modeling, generative modeling, and representation learning.
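The tractability claim is easiest to see in an affine coupling block, a standard building block of many INNs: it is invertible in closed form, and its Jacobian log-determinant is a simple sum (a generic sketch, not tied to this paper's constructions):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Affine coupling: invertible by construction, triangular Jacobian."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.SiLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                      # keep scales bounded
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=-1)                # log|det J|, cheap to compute
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (y2 - t) * torch.exp(-s)          # exact closed-form inverse
        return torch.cat([y1, x2], dim=-1)
```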
arXiv Detail & Related papers (2022-04-15T10:45:26Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
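The FA idea is to average a backbone over a small, input-dependent set of poses (a "frame"). A common instantiation uses PCA axes with sign flips; the sketch below shows only the invariant case (our simplification of the general equivariant formula):

```python
import itertools
import torch

def frame_average_invariant(pos, backbone):
    """Frame Averaging sketch for an E(3)-invariant function.

    pos: (N, 3) point cloud. backbone: any function from (N, 3) tensors to a
    scalar tensor. Frames are PCA axes of the cloud, up to per-axis sign
    flips (8 candidates); averaging over them yields invariance whenever the
    covariance eigenvalues are distinct.
    """
    center = pos.mean(dim=0, keepdim=True)
    rel = pos - center
    cov = rel.T @ rel / len(rel)
    _, vecs = torch.linalg.eigh(cov)               # columns = principal axes
    outputs = []
    for signs in itertools.product([1.0, -1.0], repeat=3):
        frame = vecs * torch.tensor(signs)          # flip axis signs
        outputs.append(backbone(rel @ frame))       # pose-normalized input
    return torch.stack(outputs).mean()              # invariant average
```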
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
However, GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of features for graph representation learning.
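One simple member of the DE family is the shortest-path distance from each node to a target node set, attached as a one-hot extra feature; a self-contained sketch (our toy version, not the paper's full feature set):

```python
from collections import deque

def distance_encoding(adj, targets, max_dist=4):
    """One-hot shortest-path distance to a target set, per node.

    adj: dict node -> list of neighbor nodes; targets: iterable of nodes.
    """
    dist = {t: 0 for t in targets}
    queue = deque(targets)
    while queue:                                  # multi-source BFS
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    features = {}
    for v in adj:
        d = min(dist.get(v, max_dist), max_dist)  # cap unreachable/far nodes
        onehot = [0] * (max_dist + 1)
        onehot[d] = 1
        features[v] = onehot
    return features
```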
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Expressive Power of Invariant and Equivariant Graph Neural Networks [10.419350129060598]
We show that Folklore Graph Neural Networks (FGNN) are the most expressive architectures proposed so far for a given tensor order.
FGNNs are able to learn how to solve the problem, leading to much better average performances than existing algorithms.
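The source of FGNN's expressive power is its matrix-product style update on pair representations, combining h_ij with sum_k g1(h_ik) * g2(h_kj); a minimal layer in that spirit (names and sizes are our assumptions):

```python
import torch
import torch.nn as nn

class FolkloreLayer(nn.Module):
    """Folklore-GNN style update on pairwise representations h: (N, N, dim)."""
    def __init__(self, dim):
        super().__init__()
        self.g1 = nn.Linear(dim, dim)
        self.g2 = nn.Linear(dim, dim)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, h):
        a, b = self.g1(h), self.g2(h)
        # Per-channel 'matrix product' over the shared index k.
        inter = torch.einsum('ikc,kjc->ijc', a, b)
        return self.out(torch.cat([h, inter], dim=-1))
```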
arXiv Detail & Related papers (2020-06-28T16:35:45Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
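The analyzed training scheme combines weak learners by gradient boosting; a generic residual-fitting loop over freshly trained weak GNNs looks roughly as follows (our sketch for a squared loss, not the paper's exact algorithm or its guarantees; `make_weak_learner` is a hypothetical factory for small models):

```python
import torch
import torch.nn as nn

def boost(make_weak_learner, x, y, num_rounds=5, shrinkage=0.5, epochs=200):
    """Each round fits a fresh weak learner to the current residual; the
    ensemble prediction is the damped sum of all learners."""
    pred = torch.zeros_like(y)
    learners = []
    for _ in range(num_rounds):
        model = make_weak_learner()               # hypothetical: small GNN/MLP
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        residual = (y - pred).detach()            # boosting target this round
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), residual)
            loss.backward()
            opt.step()
        with torch.no_grad():
            pred = pred + shrinkage * model(x)    # damped ensemble update
        learners.append(model)
    return learners, pred
```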
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
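Binarized parameters and representations are typically trained with a sign function in the forward pass and a straight-through estimator in the backward pass; a generic sketch of that trick (not BGN's exact architecture):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator: the
    forward pass emits {-1, 0, +1} values, the backward pass lets gradients
    through where the input lies in the linear region [-1, 1]."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()   # clip outside [-1, 1]

def binary_gnn_layer(adj, h, weight):
    # Binarize both features and weights, then mean-aggregate over neighbors.
    hb = BinarizeSTE.apply(h)                       # (N, F) binarized features
    wb = BinarizeSTE.apply(weight)                  # (F, F') binarized weights
    deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
    return ((adj @ hb) / deg) @ wb                  # neighborhood mean, then mix
```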
arXiv Detail & Related papers (2020-04-19T09:43:14Z)