Walking Out of the Weisfeiler Leman Hierarchy: Graph Learning Beyond Message Passing
- URL: http://arxiv.org/abs/2102.08786v3
- Date: Mon, 21 Aug 2023 01:14:25 GMT
- Title: Walking Out of the Weisfeiler Leman Hierarchy: Graph Learning Beyond Message Passing
- Authors: Jan Tönshoff, Martin Ritzert, Hinrikus Wolf, Martin Grohe
- Abstract summary: We propose CRaWl, a novel neural network architecture for graph learning.
CRaWl operates fundamentally differently from message passing graph neural networks.
We prove that the expressiveness of CRaWl is incomparable with that of the Weisfeiler Leman algorithm.
- Score: 4.272016212825404
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose CRaWl, a novel neural network architecture for graph learning.
Like graph neural networks, CRaWl layers update node features on a graph and
thus can freely be combined or interleaved with GNN layers. Yet CRaWl operates
fundamentally differently from message passing graph neural networks. CRaWl
layers extract and aggregate information on subgraphs appearing along random
walks through a graph using 1D convolutions. Thereby, it detects long-range
interactions and computes non-local features. As the theoretical basis for our
approach, we prove a theorem stating that the expressiveness of CRaWl is
incomparable with that of the Weisfeiler Leman algorithm and hence with graph
neural networks. That is, there are functions expressible by CRaWl but not by
GNNs, and vice versa. This result extends to higher levels of the Weisfeiler
Leman hierarchy and thus to higher-order GNNs. Empirically, we show that CRaWl
matches state-of-the-art GNN architectures across a multitude of benchmark
datasets for classification and regression on graphs.
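To make the walk-based mechanism concrete, below is a minimal sketch of a CRaWl-style layer. It is not the authors' released implementation: the name `CrawlStyleLayer`, the window size, and the mean pooling back onto nodes are illustrative assumptions. The layer samples a random walk, builds a per-step feature matrix from node features plus identity and adjacency flags over a small window of preceding steps (a simple encoding of the subgraph the walk passes through), runs a 1D convolution along the walk, and averages the outputs back onto the visited nodes.

```python
import random
import torch
import torch.nn as nn

def sample_walk(adj, length):
    """Uniform random walk over an adjacency list {node: [neighbors]}.
    Assumes node ids are 0..num_nodes-1 and every node has a neighbor."""
    v = random.choice(list(adj))
    walk = [v]
    for _ in range(length - 1):
        v = random.choice(adj[v])
        walk.append(v)
    return walk

class CrawlStyleLayer(nn.Module):
    """Illustrative CRaWl-style layer: 1D convolution over walk features."""
    def __init__(self, d_in, d_out, window=4, kernel=5):
        super().__init__()
        self.window = window
        # Per walk step: node features + identity flags + adjacency flags
        # for the `window` preceding steps (local subgraph structure).
        self.conv = nn.Conv1d(d_in + 2 * window, d_out, kernel,
                              padding=kernel // 2)

    def forward(self, x, adj, walk):
        # x: (num_nodes, d_in) node features; walk: list of node ids.
        steps = []
        for i, v in enumerate(walk):
            ident = torch.zeros(self.window)  # did the walk revisit v?
            edges = torch.zeros(self.window)  # is v adjacent to earlier steps?
            for k in range(1, self.window + 1):
                if i - k >= 0:
                    u = walk[i - k]
                    ident[k - 1] = float(u == v)
                    edges[k - 1] = float(u in adj[v])
            steps.append(torch.cat([x[v], ident, edges]))
        walk_mat = torch.stack(steps).t().unsqueeze(0)  # (1, channels, length)
        out = self.conv(walk_mat).squeeze(0).t()        # (length, d_out)
        # Pool convolution outputs back onto nodes (mean over visits),
        # so the layer updates node features like a GNN layer would.
        new_x = torch.zeros(x.size(0), out.size(1))
        visits = torch.zeros(x.size(0), 1)
        for i, v in enumerate(walk):
            new_x[v] += out[i]
            visits[v] += 1
        return new_x / visits.clamp(min=1)

# Tiny usage example on a 4-node path graph.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = torch.randn(4, 8)
layer = CrawlStyleLayer(d_in=8, d_out=16)
walk = sample_walk(adj, length=12)
print(layer(x, adj, walk).shape)  # torch.Size([4, 16])
```

Because the window flags record revisits and edges among recently visited nodes, walk features of this kind can capture structure that plain message passing aggregates away, which is one way to read the incomparability result stated above.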
Related papers
- Graph as a feature: improving node classification with non-neural graph-aware logistic regression [2.952177779219163]
Graph-aware Logistic Regression (GLR) is a non-neural model designed for node classification tasks.
Unlike traditional graph algorithms that use only a fraction of the information accessible to GNNs, our proposed model simultaneously leverages both node features and the relationships between entities.
arXiv Detail & Related papers (2024-11-19T08:32:14Z) - Harnessing Collective Structure Knowledge in Data Augmentation for Graph Neural Networks [25.12261412297796]
Graph neural networks (GNNs) have achieved state-of-the-art performance in graph representation learning.
We propose a novel approach, namely collective structure knowledge-augmented graph neural network (CoS-GNN).
arXiv Detail & Related papers (2024-05-17T08:50:00Z) - Graph Neural Networks Provably Benefit from Structural Information: A
Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - Automatic Relation-aware Graph Network Proliferation [182.30735195376792]
We propose Automatic Relation-aware Graph Network Proliferation (ARGNP) for efficiently searching GNNs.
These operations can extract hierarchical node/relational information and provide anisotropic guidance for message passing on a graph.
Experiments on six datasets for four graph learning tasks demonstrate that GNNs produced by our method are superior to the current state-of-the-art hand-crafted and search-based GNNs.
arXiv Detail & Related papers (2022-05-31T10:38:04Z) - Discovering the Representation Bottleneck of Graph Neural Networks from
Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z) - ACE-HGNN: Adaptive Curvature Exploration Hyperbolic Graph Neural Network [72.16255675586089]
We propose an Adaptive Curvature Exploration Hyperbolic Graph Neural Network named ACE-HGNN to adaptively learn the optimal curvature according to the input graph and downstream tasks.
Experiments on multiple real-world graph datasets demonstrate a significant and consistent performance improvement in model quality with competitive performance and good generalization ability.
arXiv Detail & Related papers (2021-10-15T07:18:57Z) - Topological Graph Neural Networks [14.349152231293928]
We present TOGL, a novel layer that incorporates global topological information of a graph using persistent homology.
Augmenting GNNs with our layer leads to beneficial predictive performance, both on synthetic data sets and on real-world data.
arXiv Detail & Related papers (2021-02-15T20:27:56Z) - Hierarchical Message-Passing Graph Neural Networks [12.207978823927386]
We propose a novel Hierarchical Message-passing Graph Neural Networks framework.
The key idea is to generate a hierarchical structure that re-organises all nodes of a flat graph into multi-level super graphs.
We present the first model to implement this framework, termed Hierarchical Community-aware Graph Neural Network (HC-GNN).
arXiv Detail & Related papers (2020-09-08T13:11:07Z) - Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z) - Higher-Order Explanations of Graph Neural Networks via Relevant Walks [3.1510406584101776]
Graph Neural Networks (GNNs) are a popular approach for predicting graph structured data.
In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions.
We extract practically relevant insights on sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.
arXiv Detail & Related papers (2020-06-05T17:59:14Z)