A Clifford Algebraic Approach to E(n)-Equivariant High-order Graph Neural Networks
- URL: http://arxiv.org/abs/2410.04692v1
- Date: Mon, 7 Oct 2024 02:12:42 GMT
- Title: A Clifford Algebraic Approach to E(n)-Equivariant High-order Graph Neural Networks
- Authors: Hoang-Viet Tran, Thieu N. Vo, Tho Tran Huu, Tan Minh Nguyen
- Abstract summary: We introduce the Clifford Group Equivariant Graph Neural Networks (CG-EGNNs).
CG-EGNNs enhance high-order message passing by integrating high-order local structures in the context of Clifford algebras.
We empirically validate that CG-EGNNs outperform previous methods on various benchmarks including n-body, CMU motion capture, and MD17.
- Score: 2.2543808018990443
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Designing neural network architectures that can handle data symmetry is crucial. This is especially important for geometric graphs, whose properties are equivariant under Euclidean transformations. Current equivariant graph neural networks (EGNNs), particularly those using message passing, have limited expressive power. Recent high-order graph neural networks can overcome this limitation, yet they lack equivariance properties, a notable drawback in certain applications in chemistry and the physical sciences. In this paper, we introduce the Clifford Group Equivariant Graph Neural Networks (CG-EGNNs), a novel class of EGNNs that enhance high-order message passing by integrating high-order local structures in the context of Clifford algebras. As a key benefit of using Clifford algebras, CG-EGNNs can learn functions that capture equivariance from positional features. By adopting the high-order message passing mechanism, CG-EGNNs gain richer information from neighbors, thus improving model performance. Furthermore, we establish the universality property of the $k$-hop message passing framework, showcasing the greater expressive power of CG-EGNNs equipped with an additional $k$-hop message passing mechanism. We empirically validate that CG-EGNNs outperform previous methods on various benchmarks including n-body, CMU motion capture, and MD17, highlighting their effectiveness in geometric deep learning.
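To ground the mechanism, here is a minimal sketch (not the authors' implementation) of a k-hop E(n)-equivariant message-passing layer in the spirit of CG-EGNN. The paper works with Clifford-algebra (multivector) features; this sketch simplifies the geometric product to squared distances of relative positions, which are E(n)-invariant, and all names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class KHopEquivariantLayer(nn.Module):
    """Sketch of k-hop message passing with E(n)-equivariant updates."""

    def __init__(self, dim: int, k: int = 2):
        super().__init__()
        self.k = k
        self.msg_mlp = nn.Sequential(nn.Linear(dim + 1, dim), nn.SiLU())
        self.gate = nn.Linear(dim, 1)

    def _k_hop(self, adj: torch.Tensor) -> torch.Tensor:
        # Boolean reachability within k hops, self-loops excluded.
        reach = power = adj.bool()
        for _ in range(self.k - 1):
            power = (power.float() @ adj.float()).bool()
            reach = reach | power
        eye = torch.eye(adj.size(0), dtype=torch.bool, device=adj.device)
        return reach & ~eye

    def forward(self, x, pos, adj):
        # x: [n, dim] invariant features, pos: [n, 3], adj: [n, n] in {0, 1}
        n = x.size(0)
        hop = self._k_hop(adj)                        # [n, n] k-hop mask
        rel = pos[None] - pos[:, None]                # relative positions
        dist2 = (rel ** 2).sum(-1, keepdim=True)      # E(n)-invariant scalar
        msg = self.msg_mlp(torch.cat([x[None].expand(n, -1, -1), dist2], -1))
        agg = (msg * hop[..., None]).sum(1)           # sum over k-hop neighbors
        # Moving pos along relative directions keeps the update equivariant.
        shift = (rel * hop[..., None]).sum(1) / max(n - 1, 1)
        return x + agg, pos + shift * torch.tanh(self.gate(agg))
```

Because messages depend on positions only through invariants and coordinates move only along relative directions, rotations, reflections, and translations of the input commute with the layer.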
Related papers
- Spatiotemporal Learning on Cell-embedded Graphs [6.8090864965073274]
We introduce a learnable cell attribution to the node-edge message passing process, which better captures the spatial dependency of regional features.
Experiments on various PDE systems and one real-world dataset demonstrate that the resulting cell-embedded GNN (CeGNN) achieves superior performance compared with other baseline models.
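As a rough illustration of the cell idea (hypothetical code, not the CeGNN implementation), each edge message can be conditioned on a learnable embedding of the cell the edge lies in:

```python
import torch
import torch.nn as nn

class CellMessagePassing(nn.Module):
    """Node-edge message passing with a learnable cell attribute."""

    def __init__(self, dim: int, num_cells: int):
        super().__init__()
        self.cell_emb = nn.Embedding(num_cells, dim)  # learnable cell attribute
        self.msg = nn.Sequential(nn.Linear(3 * dim, dim), nn.SiLU())

    def forward(self, x, edge_index, edge_cell):
        # x: [n, dim]; edge_index: [2, e]; edge_cell: [e] cell id per edge
        src, dst = edge_index
        m = self.msg(torch.cat([x[src], x[dst], self.cell_emb(edge_cell)], -1))
        out = torch.zeros_like(x)
        out.index_add_(0, dst, m)                     # aggregate at target nodes
        return x + out
```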
arXiv Detail & Related papers (2024-09-26T16:22:08Z) - Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias through a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on the Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z) - Factor Graph Neural Networks [20.211455592922736]
Graph Neural Networks (GNNs) can learn powerful representations in an end-to-end fashion with great success in many real-world applications.
We propose Factor Graph Neural Networks (FGNNs) to effectively capture higher-order relations for inference and learning.
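A simplified sketch of the factor-graph idea (hypothetical, not the exact FGNN operators): a factor gathers the features of every variable it connects, transforms them jointly, and scatters the result back, letting one layer express higher-order relations:

```python
import torch
import torch.nn as nn

class FactorLayer(nn.Module):
    """Variable -> factor -> variable message passing over hyperedges."""

    def __init__(self, dim: int):
        super().__init__()
        self.to_factor = nn.Sequential(nn.Linear(dim, dim), nn.SiLU())
        self.to_var = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())

    def forward(self, x, factor_index):
        # x: [n, dim]; factor_index: [2, m] with rows (variable id, factor id)
        var, fac = factor_index
        num_fac = int(fac.max()) + 1
        f = x.new_zeros(num_fac, x.size(1))
        f.index_add_(0, fac, self.to_factor(x[var]))  # variables -> factors
        msg = self.to_var(torch.cat([x[var], f[fac]], -1))
        out = torch.zeros_like(x)
        out.index_add_(0, var, msg)                   # factors -> variables
        return x + out
```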
arXiv Detail & Related papers (2023-08-02T00:32:02Z) - Demystifying Oversmoothing in Attention-Based Graph Neural Networks [23.853636836842604]
Oversmoothing in Graph Neural Networks (GNNs) refers to the phenomenon where increasing network depth leads to homogeneous node representations.
Previous work has established that Graph Convolutional Networks (GCNs) exponentially lose expressive power as depth increases.
It remains controversial whether the graph attention mechanism can mitigate oversmoothing.
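The phenomenon is easy to reproduce. The self-contained snippet below shows how repeated neighborhood averaging drives node features toward a common value, so their spread shrinks as depth grows:

```python
import torch

torch.manual_seed(0)
n = 50
adj = (torch.rand(n, n) < 0.2).float()
adj = ((adj + adj.T + torch.eye(n)) > 0).float()  # symmetric, with self-loops
norm_adj = adj / adj.sum(1, keepdim=True)         # row-normalized aggregation

x = torch.randn(n, 16)
for depth in [1, 2, 4, 8, 16, 32]:
    h = x.clone()
    for _ in range(depth):
        h = norm_adj @ h                          # one averaging step
    spread = (h - h.mean(0)).norm()               # distance from consensus
    print(f"depth {depth:2d}: feature spread {spread:.4f}")
```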
arXiv Detail & Related papers (2023-05-25T14:31:59Z) - Towards Understanding Graph Neural Networks: An Algorithm Unrolling Perspective [9.426760895586428]
We introduce a class of unrolled networks built on truncated optimization algorithms for graph signal denoising (GSD) problems.
The training process of a GNN model can be seen as solving a bilevel optimization problem with a GSD problem at the lower level.
An expressive model named UGDGNN (unrolled gradient descent GNN) is proposed, which inherits appealing theoretical properties.
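The unrolling idea is concrete enough to sketch (parameter names below are assumptions): graph signal denoising minimizes $\|x - y\|^2 + \lambda x^{\top} L x$, and each layer performs one gradient step of this objective with a learnable step size and regularization weight:

```python
import torch
import torch.nn as nn

class UnrolledGDLayer(nn.Module):
    """One gradient-descent step of the graph signal denoising objective."""

    def __init__(self):
        super().__init__()
        self.eta = nn.Parameter(torch.tensor(0.1))   # learnable step size
        self.lam = nn.Parameter(torch.tensor(1.0))   # learnable regularizer

    def forward(self, x, y, laplacian):
        grad = (x - y) + self.lam * (laplacian @ x)  # gradient of the objective
        return x - self.eta * grad

class UnrolledGDGNN(nn.Module):
    """Truncated (unrolled) optimization: a fixed number of GD layers."""

    def __init__(self, num_layers: int = 8):
        super().__init__()
        self.layers = nn.ModuleList(UnrolledGDLayer() for _ in range(num_layers))

    def forward(self, y, laplacian):
        x = y
        for layer in self.layers:
            x = layer(x, y, laplacian)
        return x
```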
arXiv Detail & Related papers (2022-06-09T12:54:03Z) - ACE-HGNN: Adaptive Curvature Exploration Hyperbolic Graph Neural Network [72.16255675586089]
We propose an Adaptive Curvature Exploration Hyperbolic Graph Neural Network named ACE-HGNN to adaptively learn the optimal curvature according to the input graph and downstream tasks.
Experiments on multiple real-world graph datasets demonstrate significant and consistent improvements in model quality, with competitive performance and good generalization ability.
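A minimal sketch of the ingredient being adapted (hypothetical names, not the ACE-HGNN code): a Poincaré-ball exponential map whose curvature magnitude c is a learnable parameter, so the embedding geometry can be fit to the input graph and task:

```python
import torch
import torch.nn as nn

class LearnableCurvatureExpMap(nn.Module):
    """Exponential map at the origin of a Poincare ball with learnable c."""

    def __init__(self, init_c: float = 1.0):
        super().__init__()
        self.log_c = nn.Parameter(torch.tensor(init_c).log())  # keeps c > 0

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # exp_0(v) = tanh(sqrt(c) * ||v||) * v / (sqrt(c) * ||v||)
        c = self.log_c.exp()
        norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-8)
        return torch.tanh(c.sqrt() * norm) * v / (c.sqrt() * norm)
```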
arXiv Detail & Related papers (2021-10-15T07:18:57Z) - Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, ranging from 2.87% up to 64.14%, when considering learnable PE for both GNN classes.
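A common way to realize this (sketched here with Laplacian-eigenvector PE, one of the standard initializations in this line of work; the exact pipeline in the paper differs): compute the k smallest non-trivial eigenvectors of the normalized Laplacian and inject them at the input layer alongside node features:

```python
import torch
import torch.nn as nn

def laplacian_pe(adj: torch.Tensor, k: int) -> torch.Tensor:
    """k smallest non-trivial eigenvectors of the normalized Laplacian."""
    d_inv_sqrt = adj.sum(1).clamp_min(1e-8).rsqrt()
    lap = torch.eye(adj.size(0)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, eigvecs = torch.linalg.eigh(lap)          # ascending eigenvalues
    return eigvecs[:, 1:k + 1]                   # skip the constant eigenvector

class PEInput(nn.Module):
    """Inject positional encodings at the input layer."""

    def __init__(self, feat_dim: int, pe_dim: int, hidden: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim + pe_dim, hidden)

    def forward(self, x, pe):
        return self.proj(torch.cat([x, pe], dim=-1))
```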
arXiv Detail & Related papers (2021-10-15T05:59:15Z) - Parameterized Hypercomplex Graph Neural Networks for Graph Classification [1.1852406625172216]
We develop graph neural networks that leverage the properties of hypercomplex feature transformation.
In particular, in our proposed class of models, the multiplication rule specifying the algebra itself is inferred from the data during training.
We test our proposed hypercomplex GNN on several open graph benchmark datasets and show that our models reach state-of-the-art performance.
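A sketch of a parameterized hypercomplex linear layer in this spirit (PHM-style; all names illustrative): the weight is assembled as W = sum_i A_i ⊗ B_i, where the small matrices A_i encode the multiplication rule of the algebra and are learned from data instead of being fixed to, say, the quaternion rule:

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """Linear layer whose weight is a learned sum of Kronecker products."""

    def __init__(self, n: int, in_dim: int, out_dim: int):
        # n is the algebra dimension (4 for quaternions);
        # in_dim and out_dim must be divisible by n.
        super().__init__()
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.1)  # learned rule
        self.B = nn.Parameter(torch.randn(n, out_dim // n, in_dim // n) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = sum(torch.kron(self.A[i], self.B[i]) for i in range(self.A.size(0)))
        return x @ w.T
```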
arXiv Detail & Related papers (2021-03-30T18:01:06Z) - Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
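A minimal sketch of substructure encoding (an illustration, not the GSN code): append per-node triangle counts, one of the simplest subgraph-isomorphism counts, to the features before standard message passing:

```python
import torch

def add_triangle_counts(x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """Append each node's triangle count as an extra feature column."""
    # diag(A^3) counts closed walks of length 3; each triangle at a node
    # is traversed in two directions, hence the division by 2.
    tri = torch.diagonal(adj @ adj @ adj) / 2.0
    return torch.cat([x, tri[:, None]], dim=-1)
```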
arXiv Detail & Related papers (2020-06-16T15:30:31Z) - Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
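The object this analysis builds on can be sketched directly: a polynomial graph filter H(S) = sum_k h_k S^k applied to a graph signal x. Relabeling the nodes of S and x permutes the output in the same way, which is the permutation equivariance referred to above:

```python
import torch

def graph_filter(shift: torch.Tensor, coeffs: torch.Tensor, x: torch.Tensor):
    """Apply H(S) x = sum_k h_k S^k x.

    shift: [n, n] graph shift operator (adjacency or Laplacian),
    coeffs: [K] filter taps, x: [n, d] graph signal.
    """
    out = torch.zeros_like(x)
    z = x
    for h_k in coeffs:          # z holds S^k x, built iteratively
        out = out + h_k * z
        z = shift @ z
    return out
```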
arXiv Detail & Related papers (2020-03-08T13:02:15Z) - EdgeNets: Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general linear and local operation that a node can perform, and it encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs).
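A compact sketch of the unifying idea (a hypothetical dense implementation): every edge (i, j) carries its own weight, so a node can weigh each neighbor differently; tying the weights to shared scalar taps recovers a graph convolution, while computing them from attention recovers a GAT-style layer:

```python
import torch
import torch.nn as nn

class EdgeVaryingLayer(nn.Module):
    """Linear, local aggregation with one learnable weight per edge."""

    def __init__(self, n: int, in_dim: int, out_dim: int):
        super().__init__()
        self.phi = nn.Parameter(torch.randn(n, n) * 0.1)  # per-edge weights
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Mask the edge-varying weights with the graph support, then mix.
        return self.lin((self.phi * adj) @ x)
```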
arXiv Detail & Related papers (2020-01-21T15:51:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.