Parameterized Hypercomplex Graph Neural Networks for Graph
Classification
- URL: http://arxiv.org/abs/2103.16584v1
- Date: Tue, 30 Mar 2021 18:01:06 GMT
- Title: Parameterized Hypercomplex Graph Neural Networks for Graph
Classification
- Authors: Tuan Le, Marco Bertolini, Frank Noé, Djork-Arné Clevert
- Abstract summary: We develop graph neural networks that leverage the properties of hypercomplex feature transformation.
In particular, in our proposed class of models, the multiplication rule specifying the algebra itself is inferred from the data during training.
We test our proposed hypercomplex GNN on several open graph benchmark datasets and show that our models reach state-of-the-art performance.
- Score: 1.1852406625172216
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite recent advances in representation learning in hypercomplex (HC)
space, this subject is still vastly unexplored in the context of graphs.
Motivated by the complex and quaternion algebras, which have been found in
several contexts to enable effective representation learning that inherently
incorporates a weight-sharing mechanism, we develop graph neural networks that
leverage the properties of hypercomplex feature transformation. In particular,
in our proposed class of models, the multiplication rule specifying the algebra
itself is inferred from the data during training. Given a fixed model
architecture, we present empirical evidence that our proposed model
incorporates a regularization effect, alleviating the risk of overfitting. We
also show that for fixed model capacity, our proposed method outperforms its
corresponding real-formulated GNN, providing additional confirmation for the
enhanced expressivity of HC embeddings. Finally, we test our proposed
hypercomplex GNN on several open graph benchmark datasets and show that our
models reach state-of-the-art performance while consuming a much lower memory
footprint with 70% fewer parameters. Our implementations are available at
https://github.com/bayer-science-for-a-better-life/phc-gnn.
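To make the learned multiplication rule concrete: in the parameterized hypercomplex multiplication (PHM) formulation that this model class builds on, a layer's weight matrix is assembled as a sum of Kronecker products W = sum_i A_i kron S_i, where the small n x n matrices A_i encode the algebra and are trained jointly with the weight blocks S_i. Below is a minimal PyTorch sketch of such a layer; the class and variable names are illustrative, not the linked repository's API.

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """Sketch of a parameterized hypercomplex (PHM) linear layer.

    The d_out x d_in weight matrix is built as W = sum_i A_i kron S_i,
    where the small n x n 'algebra' matrices A_i are learned jointly
    with the weight blocks S_i, so the multiplication rule itself is
    data-driven. Requires d_in and d_out to be divisible by n.
    """

    def __init__(self, n: int, d_in: int, d_out: int):
        super().__init__()
        assert d_in % n == 0 and d_out % n == 0
        # n matrices of shape n x n: the learned algebra (multiplication rule).
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.1)
        # n matrices of shape (d_out/n) x (d_in/n): the shared weight blocks.
        self.S = nn.Parameter(torch.randn(n, d_out // n, d_in // n) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Kronecker products (n x n) kron (d_out/n x d_in/n) -> (d_out x d_in),
        # summed over the n algebra components.
        W = torch.stack(
            [torch.kron(self.A[i], self.S[i]) for i in range(self.A.shape[0])]
        ).sum(dim=0)
        return x @ W.T

# A PHM layer stores n^3 + d_in*d_out/n weights instead of d_in*d_out,
# roughly a factor-n reduction for large layers (~73% fewer here for n=4).
phm = PHMLinear(n=4, d_in=64, d_out=64)
out = phm(torch.randn(32, 64))
```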
Related papers
- Do Graph Neural Networks Work for High Entropy Alloys? [12.002942104379986]
High-entropy alloys (HEAs) lack chemical long-range order, limiting the applicability of current graph representations.
We introduce the LESets machine learning model, an accurate, interpretable GNN for HEA property prediction.
We demonstrate the accuracy of LESets in modeling the mechanical properties of quaternary HEAs.
arXiv Detail & Related papers (2024-08-29T08:20:02Z)
- Self-supervision meets kernel graph neural models: From architecture to augmentations [36.388069423383286]
We improve the design and learning of kernel graph neural networks (KGNNs).
We develop a novel structure-preserving graph data augmentation method called latent graph augmentation (LGA).
Our proposed model achieves competitive performance comparable to or sometimes outperforming state-of-the-art graph representation learning frameworks.
arXiv Detail & Related papers (2023-10-17T14:04:22Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Tensorized Hypergraph Neural Networks [69.65385474777031]
We propose a novel adjacency-tensor-based Tensorized Hypergraph Neural Network (THNN).
THNN is a faithful hypergraph modeling framework based on high-order outer-product feature message passing.
Results from experiments on two widely used hypergraph datasets for 3-D visual object classification show the model's promising performance.
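To illustrate what high-order outer-product message passing can look like: for a 3-uniform hypergraph with adjacency tensor A of shape N x N x N, each node can aggregate the outer products of its hyperedge co-members' features. The toy sketch below assumes this reading of the abstract and is not the authors' exact layer.

```python
import torch
import torch.nn as nn

class OuterProductHypergraphLayer(nn.Module):
    """Toy sketch of adjacency-tensor message passing on a 3-uniform hypergraph.

    A is an N x N x N adjacency tensor (A[i, j, k] = 1 if {i, j, k} is a
    hyperedge); each node i aggregates the outer products x_j x_k^T of the
    other two members of its hyperedges, then projects back to d dims.
    """

    def __init__(self, d: int):
        super().__init__()
        self.proj = nn.Linear(d * d, d)

    def forward(self, A: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
        # m[i] = sum_{j,k} A[i,j,k] * (x_j outer x_k), shape (N, d, d).
        M = torch.einsum("ijk,ja,kb->iab", A, X, X)
        return torch.relu(self.proj(M.flatten(start_dim=1)))

# Usage: 5 nodes, 8-dim features, one hyperedge {0, 1, 2}.
A = torch.zeros(5, 5, 5)
A[0, 1, 2] = A[1, 0, 2] = A[2, 0, 1] = 1.0
layer = OuterProductHypergraphLayer(d=8)
H = layer(A, torch.randn(5, 8))
```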
arXiv Detail & Related papers (2023-06-05T03:26:06Z)
- HINormer: Representation Learning On Heterogeneous Information Networks with Graph Transformer [29.217820912610602]
Graph Transformers (GTs) have been proposed, which operate in a paradigm that extends message passing to a larger coverage, even across the whole graph.
The investigation of GTs on heterogeneous information networks (HINs) is still under-explored.
We propose a novel model named HINormer, which capitalizes on a larger-range aggregation mechanism for node representation learning.
arXiv Detail & Related papers (2023-02-22T12:25:07Z)
- Latent Graph Inference using Product Manifolds [0.0]
We generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning.
Our novel approach is tested on a wide range of datasets, and outperforms the original dDGM model.
arXiv Detail & Related papers (2022-11-26T22:13:06Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Hyperbolic Variational Graph Neural Network for Modeling Dynamic Graphs [77.33781731432163]
We learn dynamic graph representations in hyperbolic space, for the first time, aiming to infer node representations.
We present a novel Hyperbolic Variational Graph Network, referred to as HVGNN.
In particular, to model the dynamics, we introduce a Temporal GNN (TGNN) based on a theoretically grounded time encoding approach.
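A common theoretically grounded time encoding in this literature is the Bochner-type sinusoidal embedding (popularized by temporal GNNs such as TGAT); whether HVGNN's TGNN uses exactly this form is an assumption here, so the sketch below illustrates the general technique rather than the paper's encoder.

```python
import torch
import torch.nn as nn

class SinusoidalTimeEncoding(nn.Module):
    """Sketch of a Bochner-type time encoding for temporal GNNs.

    phi(t)_i = cos(w_i * t + b_i), with learnable frequencies w and phases b,
    maps a scalar timestamp to a d-dimensional, translation-aware feature.
    (Assumed form; HVGNN's exact encoder may differ.)
    """

    def __init__(self, d: int):
        super().__init__()
        self.w = nn.Parameter(torch.randn(d))
        self.b = nn.Parameter(torch.zeros(d))

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (...,) scalar timestamps -> (..., d) encodings.
        return torch.cos(t.unsqueeze(-1) * self.w + self.b)

enc = SinusoidalTimeEncoding(d=16)
feats = enc(torch.tensor([0.0, 1.5, 3.0]))  # shape (3, 16)
```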
arXiv Detail & Related papers (2021-04-06T01:44:15Z)
- E(n) Equivariant Graph Neural Networks [86.75170631724548]
This paper introduces a new model to learn graph neural networks equivariant to rotations, translations, reflections and permutations, called E(n)-Equivariant Graph Neural Networks (EGNNs).
In contrast with existing methods, our work does not require computationally expensive higher-order representations in intermediate layers while it still achieves competitive or better performance.
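The key design is that coordinates enter the messages only through invariant squared distances and are updated along relative difference vectors, which yields E(n) equivariance without higher-order tensors. Below is a condensed sketch of the published update equations; the dense adjacency and variable names are simplifications.

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    """Sketch of one E(n)-equivariant layer (Satorras et al., 2021).

    m_ij  = phi_e(h_i, h_j, ||x_i - x_j||^2)
    x_i' = x_i + mean_j (x_i - x_j) * phi_x(m_ij)   # equivariant update
    h_i' = phi_h(h_i, sum_j m_ij)                   # invariant update
    """

    def __init__(self, d: int, d_msg: int = 32):
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(2 * d + 1, d_msg), nn.SiLU())
        self.phi_x = nn.Linear(d_msg, 1)
        self.phi_h = nn.Sequential(nn.Linear(d + d_msg, d), nn.SiLU())

    def forward(self, h, x):
        # Pairwise differences and squared distances (rotation/translation invariant).
        diff = x[:, None, :] - x[None, :, :]               # (N, N, 3)
        dist2 = (diff ** 2).sum(-1, keepdim=True)          # (N, N, 1)
        hi = h[:, None, :].expand(-1, h.shape[0], -1)
        hj = h[None, :, :].expand(h.shape[0], -1, -1)
        m = self.phi_e(torch.cat([hi, hj, dist2], dim=-1))  # (N, N, d_msg)
        # Coordinates move along relative vectors -> E(n)-equivariant.
        x_new = x + (diff * self.phi_x(m)).mean(dim=1)
        # Features see only invariants -> E(n)-invariant.
        h_new = self.phi_h(torch.cat([h, m.sum(dim=1)], dim=-1))
        return h_new, x_new

layer = EGNNLayer(d=16)
h, x = layer(torch.randn(10, 16), torch.randn(10, 3))
```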
arXiv Detail & Related papers (2021-02-19T10:25:33Z)
- Stochastic Graph Recurrent Neural Network [6.656993023468793]
We propose SGRNN, a novel neural architecture that applies latent variables to simultaneously capture evolution in node attributes and topology.
Specifically, deterministic states are separated from stochastic states in the iterative process to suppress mutual interference.
Experiments on real-world datasets demonstrate the effectiveness of the proposed model.
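A minimal way to picture this separation, assuming a VRNN-style construction (the names below are illustrative, not SGRNN's exact cell): keep a purely deterministic recurrent state and draw the stochastic latent from a distribution conditioned on it, so sampled noise never feeds back into the recurrence.

```python
import torch
import torch.nn as nn

class SeparatedRecurrentCell(nn.Module):
    """Toy sketch of separating deterministic from stochastic states.

    The deterministic path h_t is a plain GRU update that never sees the
    sampled latent; the stochastic latent z_t is drawn from a Gaussian
    conditioned on h_t (reparameterization trick), so noise in z cannot
    feed back into and destabilize the deterministic recurrence.
    """

    def __init__(self, d_x: int, d_h: int, d_z: int):
        super().__init__()
        self.gru = nn.GRUCell(d_x, d_h)
        self.to_mu = nn.Linear(d_h, d_z)
        self.to_logvar = nn.Linear(d_h, d_z)

    def forward(self, x_t, h_prev):
        h_t = self.gru(x_t, h_prev)                             # deterministic state
        mu, logvar = self.to_mu(h_t), self.to_logvar(h_t)
        z_t = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # stochastic state
        return h_t, z_t

cell = SeparatedRecurrentCell(d_x=8, d_h=16, d_z=4)
h = torch.zeros(2, 16)
for x in torch.randn(5, 2, 8):   # 5 time steps, batch of 2
    h, z = cell(x, h)
```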
arXiv Detail & Related papers (2020-09-01T16:14:30Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs, which are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-protein interaction networks.
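One way to picture the tensor formulation: stack the R relational adjacency matrices into an R x N x N tensor and let the layer learn per-relation weights plus a mixing over relations. The sketch below illustrates this idea under those assumptions; it is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TensorGCNLayer(nn.Module):
    """Sketch of a tensor-graph convolution over R relational adjacencies.

    A is an R x N x N tensor of (normalized) adjacency matrices; the layer
    learns per-relation weights W_r and mixing coefficients alpha, so
    H' = relu(sum_r alpha_r * A_r @ X @ W_r).
    """

    def __init__(self, n_rel: int, d_in: int, d_out: int):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_rel, d_in, d_out) * 0.1)
        self.alpha = nn.Parameter(torch.ones(n_rel))  # learned relation mix

    def forward(self, A: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
        mix = torch.softmax(self.alpha, dim=0)
        # Sum over relations r: alpha_r * (A_r X W_r).
        return torch.relu(torch.einsum("r,rnm,md,rde->ne", mix, A, X, self.W))

# Usage: 6 nodes, 3 relations, 8 -> 4 features.
layer = TensorGCNLayer(n_rel=3, d_in=8, d_out=4)
H = layer(torch.rand(3, 6, 6), torch.randn(6, 8))
```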
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.