Improving the Expressiveness of $K$-hop Message-Passing GNNs by Injecting Contextualized Substructure Information
- URL: http://arxiv.org/abs/2406.19244v1
- Date: Thu, 27 Jun 2024 15:10:56 GMT
- Title: Improving the Expressiveness of $K$-hop Message-Passing GNNs by Injecting Contextualized Substructure Information
- Authors: Tianjun Yao, Yiongxu Wang, Kun Zhang, Shangsong Liang
- Abstract summary: We propose a \textit{substructure encoding function} to uplift the expressive power of any $K$-hop message-passing GNN.
Our method is provably more powerful than previous $K$-hop graph neural networks and 1-WL subgraph GNNs.
- Score: 17.56609419806051
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have become the \textit{de facto} standard for representation learning on graphs and have achieved state-of-the-art performance in many graph-related tasks; however, it has been shown that the expressive power of standard GNNs is at most that of the 1-dimensional Weisfeiler-Lehman (1-WL) test. Recently, several lines of work have aimed to enhance the expressive power of graph neural networks. One line develops $K$-hop message-passing GNNs, where a node's representation is updated by aggregating information not only from its direct neighbors but from all neighbors within $K$ hops. Another line leverages subgraph information, which is provably strictly more powerful than the 1-WL test. In this work, we discuss the limitations of $K$-hop message-passing GNNs and propose a \textit{substructure encoding function} to uplift the expressive power of any $K$-hop message-passing GNN. We further inject contextualized substructure information to enhance the expressiveness of $K$-hop message-passing GNNs. Our method is provably more powerful than previous $K$-hop graph neural networks and 1-WL subgraph GNNs, a specific type of subgraph-based GNN model, and no less powerful than 3-WL. Empirically, our proposed method sets new state-of-the-art performance or achieves comparable performance on a variety of datasets. Our code is available at \url{https://github.com/tianyao-aka/Expresive_K_hop_GNNs}.
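To make the two ingredients above ($K$-hop aggregation and substructure encoding) concrete, here is a minimal NumPy sketch, not the authors' implementation (see the linked repository for that), combining shortest-path-based $K$-hop aggregation with a toy substructure feature: per-node triangle counts stand in for the paper's contextualized substructure information.

```python
import numpy as np

def k_hop_aggregate(A, X, K):
    """Concatenate a node's own features with the summed features of its
    exactly-t-hop neighbors for t = 1..K (shortest-path-distance kernel)."""
    n = A.shape[0]
    reach = np.eye(n, dtype=bool)              # nodes within t-1 hops (incl. self)
    frontier = np.eye(n, dtype=float)
    per_hop = []
    for _ in range(K):
        nxt = ((frontier @ A) > 0) & ~reach    # exactly-t-hop neighbors
        per_hop.append(nxt.astype(float) @ X)  # sum their features
        reach |= nxt
        frontier = nxt.astype(float)
    return np.concatenate([X] + per_hop, axis=1)

def triangle_counts(A):
    """Per-node triangle participation, diag(A^3)/2: a toy stand-in for a
    richer substructure encoding function."""
    return np.diag(np.linalg.matrix_power(A, 3)) / 2.0

A = np.array([[0,1,1,1],[1,0,1,0],[1,1,0,1],[1,0,1,0]], dtype=float)  # 4-cycle + chord
X = np.column_stack([np.eye(4), triangle_counts(A)])  # inject substructure info
print(k_hop_aggregate(A, X, K=2).shape)               # (4, 15)
```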
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
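A rough, hedged sketch of the general idea (the actual S$^2$GNN architecture differs in many details; all names below are made up):

```python
import numpy as np

def spatial_spectral_layer(A, X, W_spat, W_spec, g):
    """Sum a local (spatial) message-passing term and a global filter
    applied in the Laplacian eigenbasis (spectral term)."""
    n = A.shape[0]
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    A_norm = d_inv_sqrt @ A @ d_inv_sqrt
    lam, U = np.linalg.eigh(np.eye(n) - A_norm)        # normalized Laplacian
    spatial = A_norm @ X @ W_spat                      # local aggregation
    spectral = U @ np.diag(g(lam)) @ U.T @ X @ W_spec  # global spectral filter
    return np.tanh(spatial + spectral)

rng = np.random.default_rng(0)
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)  # path graph
out = spatial_spectral_layer(A, rng.normal(size=(4, 3)),
                             rng.normal(size=(3, 3)), rng.normal(size=(3, 3)),
                             g=lambda lam: np.exp(-lam))  # low-pass filter
print(out.shape)  # (4, 3)
```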
arXiv Detail & Related papers (2024-05-29T14:28:08Z) - From Relational Pooling to Subgraph GNNs: A Universal Framework for More Expressive Graph Neural Networks [8.121462458089141]
We show how to assign labels to nodes to improve the expressive power of message-passing neural networks.
We experimentally demonstrate that our method is universally compatible and capable of improving the expressivity of any base GNN model.
Our $k,l$-GNNs achieve superior performance on many synthetic and real-world datasets.
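In its simplest form the labeling trick reads as follows (a hedged sketch; the $k,l$-GNN framework is a strict generalization of this one-node case):

```python
import numpy as np

def mpnn_layer(A, H):
    return np.tanh(H + A @ H)                  # toy sum-aggregation layer

def labeled_graph_embedding(A, X, layers=2):
    """Run the base MPNN once per choice of labeled node, then pool.
    Labeling breaks symmetries that plain 1-WL message passing cannot."""
    n = A.shape[0]
    embeddings = []
    for v in range(n):
        mark = np.zeros((n, 1))
        mark[v] = 1.0                          # assign a label to node v
        H = np.concatenate([X, mark], axis=1)
        for _ in range(layers):
            H = mpnn_layer(A, H)
        embeddings.append(H.sum(axis=0))       # graph readout
    return np.mean(embeddings, axis=0)         # pool over labelings

A = np.array([[0,1,1],[1,0,1],[1,1,0]], dtype=float)  # triangle graph
print(labeled_graph_embedding(A, np.ones((3, 1))))
```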
arXiv Detail & Related papers (2023-05-08T18:00:50Z) - From Local to Global: Spectral-Inspired Graph Neural Networks [28.858773653743075]
Graph Neural Networks (GNNs) are powerful deep learning methods for non-Euclidean data.
MPNNs are message-passing algorithms that aggregate and combine signals in a local graph neighborhood.
MPNNs can suffer from issues like over-smoothing or over-squashing.
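Over-smoothing, in particular, is easy to see numerically: under repeated mean aggregation, node features converge to (nearly) identical values. A small sketch:

```python
import numpy as np

# path graph 0-1-2-3 with self-loops; P is the mean-aggregation operator
A_hat = np.array([[1,1,0,0],[1,1,1,0],[0,1,1,1],[0,0,1,1]], dtype=float)
P = A_hat / A_hat.sum(axis=1, keepdims=True)
x = np.array([1.0, 0.0, 0.0, 0.0])            # a scalar feature per node
for t in (1, 5, 50):                          # deeper = smoother
    print(t, np.round(np.linalg.matrix_power(P, t) @ x, 3))
```

After 50 rounds every node carries essentially the same value, which illustrates why purely local MPNNs struggle to stay discriminative at depth.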
arXiv Detail & Related papers (2022-09-24T17:19:00Z) - Twin Weisfeiler-Lehman: High Expressive GNNs for Graph Classification [48.087302573188396]
We propose a novel graph isomorphism test, namely Twin-WL, which simultaneously passes node labels and node identities.
We prove that the two corresponding Twin-GNNs both have higher expressive power than traditional message-passing GNNs.
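A hedged toy rendition of the twin idea, refining discrete labels (1-WL style) while propagating node-identity sets in parallel (the actual Twin-WL test defines the second channel differently):

```python
def twin_wl(adj, labels, rounds=2):
    """Return a graph signature combining a label channel and an
    identity channel after a few refinement rounds."""
    n = len(adj)
    color = list(labels)
    ids = [frozenset([v]) for v in range(n)]   # identity channel
    for _ in range(rounds):
        color = [hash((color[v], tuple(sorted(color[u] for u in adj[v]))))
                 for v in range(n)]            # 1-WL label refinement
        ids = [ids[v].union(*(ids[u] for u in adj[v])) for v in range(n)]
    return sorted(zip(color, (len(s) for s in ids)))

adj = [[1, 2], [0, 2], [0, 1]]                 # triangle graph
print(twin_wl(adj, [0, 0, 0]))
```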
arXiv Detail & Related papers (2022-03-22T12:58:03Z) - Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Network (GNN) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
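In the simplest instantiation this just means concatenating counts of small patterns to the input features before running any standard GNN (the paper works with homomorphism counts; degree and triangle counts below are the most basic stand-ins):

```python
import numpy as np

def add_local_parameters(A, X):
    """Append per-node degree and triangle counts, two simple local
    graph parameters, to the input features."""
    deg = A.sum(axis=1, keepdims=True)
    tri = (np.diag(np.linalg.matrix_power(A, 3)) / 2.0)[:, None]
    return np.concatenate([X, deg, tri], axis=1)

A = np.array([[0,1,1],[1,0,1],[1,1,0]], dtype=float)  # triangle graph
print(add_local_parameters(A, np.ones((3, 1))))       # rows: [1., 2., 1.]
```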
arXiv Detail & Related papers (2021-06-12T07:43:51Z) - Identity-aware Graph Neural Networks [63.6952975763946]
We develop a class of message-passing Graph Neural Networks, named Identity-aware Graph Neural Networks (ID-GNNs), with greater expressive power than the 1-WL test.
ID-GNN extends existing GNN architectures by inductively considering nodes' identities during message passing.
We show that transforming existing GNNs to ID-GNNs yields on average 40% accuracy improvement on challenging node, edge, and graph property prediction tasks.
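The core mechanism can be sketched in a few lines: when computing a node's embedding, give that node a distinguishing flag before message passing. (The real ID-GNN additionally restricts computation to the node's ego network and applies separate weights to the flagged node; the sketch below is illustrative only.)

```python
import numpy as np

def identity_aware_embed(A, X, v, layers=2):
    """Embed node v with an extra identity flag that only v carries."""
    n = A.shape[0]
    H = np.concatenate([X, np.zeros((n, 1))], axis=1)
    H[v, -1] = 1.0                      # identity colouring of the center
    for _ in range(layers):
        H = np.tanh(H + A @ H)          # toy message-passing layer
    return H[v]

A = np.array([[0,1,0],[1,0,1],[0,1,0]], dtype=float)  # path graph
print(identity_aware_embed(A, np.ones((3, 2)), v=1))
```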
arXiv Detail & Related papers (2021-01-25T18:59:01Z) - Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order WL tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of structural features for graph representation learning.
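A minimal sketch of the idea: annotate every node with its (capped) shortest-path distance to a target node set and feed the one-hot distances to the GNN as extra features (the paper develops DE far more generally):

```python
from collections import deque
import numpy as np

def distance_encoding(adj, targets, max_dist=3):
    """One-hot shortest-path distance from each node to a target set,
    with distances capped at max_dist."""
    n = len(adj)
    dist = [max_dist] * n
    seen = set(targets)
    q = deque((t, 0) for t in targets)
    while q:                                   # plain BFS from the target set
        v, d = q.popleft()
        dist[v] = min(dist[v], d)
        for u in adj[v]:
            if u not in seen and d + 1 <= max_dist:
                seen.add(u)
                q.append((u, d + 1))
    return np.eye(max_dist + 1)[dist]          # (n, max_dist + 1) features

adj = [[1], [0, 2], [1, 3], [2]]               # path graph 0-1-2-3
print(distance_encoding(adj, targets={0}))
```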
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
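A toy rendition of the attribute-generation half of the objective: mask one node's attributes, aggregate messages into it, and regress the masked attributes (GPT-GNN factorizes attribute and edge generation and uses a real GNN encoder; everything below is a deliberate simplification):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0,1,1],[1,0,1],[1,1,0]], dtype=float)  # triangle graph
X = rng.normal(size=(3, 4))                           # node attributes
W = 0.1 * rng.normal(size=(4, 4))                     # toy linear "encoder"

for step in range(300):
    v = step % 3
    X_in = X.copy()
    X_in[v] = 0.0                      # mask the target node's attributes
    m = (A @ X_in)[v]                  # messages reaching the masked node
    err = m @ W - X[v]                 # attribute reconstruction error
    W -= 0.05 * np.outer(m, err)       # SGD step on 0.5 * ||err||^2
print("final error:", np.round(np.abs(err).mean(), 4))
```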
arXiv Detail & Related papers (2020-06-27T20:12:33Z) - Walk Message Passing Neural Networks and Second-Order Graph Neural Networks [4.355567556995855]
We introduce a new type of MPNN, $\ell$-walk MPNNs, which aggregate features along walks of length $\ell$ between vertices.
We show that $2$-walk MPNNs match 2-WL in expressive power.
In particular, to match W[$\ell$] in expressive power, we allow $\ell-1$ matrix multiplications in each layer.
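Concretely, aggregating features over all walks of length $\ell$ amounts to forming $A^{\ell} X$, computable with $\ell$ successive products with $A$ (or $\ell-1$ multiplications if $A^{\ell}$ is materialized first). A minimal sketch:

```python
import numpy as np

def walk_aggregate(A, X, ell):
    """Sum features over all walks of length ell: returns A^ell @ X."""
    M = X
    for _ in range(ell):
        M = A @ M                       # one (sparse-friendly) product per hop
    return M

A = np.array([[0,1,0],[1,0,1],[0,1,0]], dtype=float)  # path graph 0-1-2
print(walk_aggregate(A, np.eye(3), ell=2))  # entry (v,u): # of 2-walks u -> v
```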
arXiv Detail & Related papers (2020-06-16T20:24:01Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
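The underlying recipe can be caricatured as search over generated graphs: repeatedly add the edge that most increases the model's score for the class being explained. XGNN trains a reinforcement-learning graph generator rather than the greedy loop below, and `model_score` is a made-up stand-in for a trained GNN:

```python
import numpy as np
from itertools import combinations

def edge(i, j, n):
    E = np.zeros((n, n))
    E[i, j] = E[j, i] = 1.0
    return E

def model_score(A):
    # stand-in "trained GNN": scores graphs by closed 3-walks (triangles)
    return np.trace(np.linalg.matrix_power(A, 3))

def generate_explanation(n=4, steps=4):
    """Greedily grow a small graph that maximizes the model's score."""
    A = np.zeros((n, n))
    for _ in range(steps):
        candidates = [e for e in combinations(range(n), 2) if A[e] == 0]
        if not candidates:
            break
        i, j = max(candidates, key=lambda e: model_score(A + edge(*e, n)))
        A += edge(i, j, n)
    return A

print(generate_explanation())
```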
arXiv Detail & Related papers (2020-06-03T23:52:43Z)