Distance-Restricted Folklore Weisfeiler-Leman GNNs with Provable Cycle
Counting Power
- URL: http://arxiv.org/abs/2309.04941v3
- Date: Thu, 11 Jan 2024 04:35:07 GMT
- Title: Distance-Restricted Folklore Weisfeiler-Leman GNNs with Provable Cycle
Counting Power
- Authors: Junru Zhou, Jiarui Feng, Xiyuan Wang, Muhan Zhang
- Abstract summary: The ability of graph neural networks (GNNs) to count certain graph substructures, especially cycles, is important for their success on a wide range of tasks.
We propose a novel class of GNNs -- $d$-Distance-Restricted FWL(2) GNNs.
Our model is the most efficient GNN model to date that can count up to 6-cycles.
- Score: 27.929167547127207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability of graph neural networks (GNNs) to count certain graph
substructures, especially cycles, is important for the success of GNNs on a
wide range of tasks. It has been recently used as a popular metric for
evaluating the expressive power of GNNs. Many of the proposed GNN models with
provable cycle counting power are based on subgraph GNNs, i.e., extracting a
bag of subgraphs from the input graph, generating representations for each
subgraph, and using them to augment the representation of the input graph.
However, those methods require heavy preprocessing, and suffer from high time
and memory costs. In this paper, we overcome the aforementioned limitations of
subgraph GNNs by proposing a novel class of GNNs -- $d$-Distance-Restricted
FWL(2) GNNs, or $d$-DRFWL(2) GNNs. $d$-DRFWL(2) GNNs use node pairs whose
mutual distances are at most $d$ as the units for message passing to balance
the expressive power and complexity. By performing message passing among
distance-restricted node pairs in the original graph, $d$-DRFWL(2) GNNs avoid
the expensive subgraph extraction operations in subgraph GNNs, making both the
time and space complexity lower. We theoretically show that the discriminative
power of $d$-DRFWL(2) GNNs strictly increases as $d$ increases. More
importantly, $d$-DRFWL(2) GNNs have provably strong cycle counting power even
with $d=2$: they can count all 3, 4, 5, 6-cycles. Since 6-cycles (e.g., benzene
rings) are ubiquitous in organic molecules, being able to detect and count them
is crucial for achieving robust and generalizable performance on molecular
tasks. Experiments on both synthetic datasets and molecular datasets verify our
theory. To the best of our knowledge, our model is the most efficient GNN model
to date (both theoretically and empirically) that can count up to 6-cycles.
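The core mechanism, message passing restricted to node pairs whose shortest-path distance is at most $d$, can be illustrated with a small sketch. Below is a minimal, non-authoritative Python sketch assuming a plain-dict adjacency representation; the function names (distance_restricted_pairs, drfwl2_layer), the hash-based colour-refinement update, and the toy 6-cycle graph are illustrative stand-ins for the paper's learned neural updates, not its actual implementation.

```python
from collections import deque

def distance_restricted_pairs(adj, d):
    """Collect all ordered node pairs (u, v) with shortest-path distance <= d.

    `adj` maps each node to a list of neighbours. A BFS truncated at depth d
    from every node keeps the cost bounded by the size of the d-hop
    neighbourhood rather than by the whole graph.
    """
    pairs = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if dist[u] == d:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for v, k in dist.items():
            pairs[(src, v)] = k
    return pairs

def drfwl2_layer(adj, pairs, state):
    """One simplified colour-refinement step over distance-restricted pairs.

    `state` maps each pair (u, v) in `pairs` to an integer colour. For every
    pair we aggregate the colours of intermediate nodes w such that (u, w) and
    (w, v) are themselves distance-restricted pairs, mimicking the FWL(2)
    update while never touching pairs farther apart than d.
    """
    new_state = {}
    for (u, v), colour in state.items():
        messages = sorted(
            (state[(u, w)], state[(w, v)])
            for w in adj  # candidate intermediate nodes
            if (u, w) in pairs and (w, v) in pairs
        )
        # Hash the multiset of messages together with the pair's old colour.
        new_state[(u, v)] = hash((colour, tuple(messages)))
    return new_state

if __name__ == "__main__":
    # Toy graph: a 6-cycle (think benzene ring), nodes 0..5.
    cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    pairs = distance_restricted_pairs(cycle6, d=2)
    # Initialise each pair's colour with its distance (a simplification of the
    # paper's initial labelling).
    state = {p: dist for p, dist in pairs.items()}
    for _ in range(3):
        state = drfwl2_layer(cycle6, pairs, state)
    print(f"{len(pairs)} distance-restricted pairs, "
          f"{len(set(state.values()))} distinct colours after 3 rounds")
```

On sparse graphs the number of tracked pairs stays close to the number of nodes times the size of a $d$-hop neighbourhood, which is what lets the model avoid the per-subgraph extraction and storage costs of subgraph GNNs while still retaining the cycle counting power claimed above.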
Related papers
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with the size of the graphs in the logarithmic scale, and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z) - Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z) - An Efficient Subgraph GNN with Provable Substructure Counting Power [32.44487589958533]
We investigate the enhancement of graph neural networks' (GNNs) representation power through their ability in substructure counting.
Recent advances have seen the adoption of subgraph GNNs, which partition an input graph into numerous subgraphs, subsequently applying GNNs to each to augment the graph's overall representation.
Despite their ability to identify various substructures, subgraph GNNs are hindered by significant computational and memory costs.
arXiv Detail & Related papers (2023-03-19T05:35:59Z) - Some Might Say All You Need Is Sum [2.226803104060345]
The expressivity of Graph Neural Networks (GNNs) is dependent on the aggregation functions they employ.
We prove that basic functions, which can be computed exactly by Mean or Max GNNs, are inapproximable by any Sum GNN.
arXiv Detail & Related papers (2023-02-22T19:01:52Z) - Boosting the Cycle Counting Power of Graph Neural Networks with
I$^2$-GNNs [9.806910643086043]
We propose I$^2$-GNNs to extend Subgraph MPNNs by assigning different identifiers to the root node and its neighbors in each subgraph.
The discriminative power of I$^2$-GNNs is shown to be strictly stronger than that of Subgraph MPNNs and partially stronger than the 3-WL test.
To the best of our knowledge, it is the first linear-time GNN model that can count 6-cycles with theoretical guarantees.
arXiv Detail & Related papers (2022-10-22T09:40:00Z) - Nested Graph Neural Networks [20.28732862748783]
The success of graph neural networks (GNNs) in graph classification is closely related to the Weisfeiler-Lehman (1-WL) algorithm.
We propose Nested Graph Neural Networks (NGNNs) to represent a graph with rooted subgraphs instead of rooted subtrees.
arXiv Detail & Related papers (2021-10-25T18:22:24Z) - Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Network (GNN) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
arXiv Detail & Related papers (2021-06-12T07:43:51Z) - The Surprising Power of Graph Neural Networks with Random Node
Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
arXiv Detail & Related papers (2020-10-02T19:53:05Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of structural features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
arXiv Detail & Related papers (2020-06-27T20:12:33Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)