A Practical, Progressively-Expressive GNN
- URL: http://arxiv.org/abs/2210.09521v1
- Date: Tue, 18 Oct 2022 01:27:21 GMT
- Title: A Practical, Progressively-Expressive GNN
- Authors: Lingxiao Zhao, Louis Härtel, Neil Shah, Leman Akoglu
- Abstract summary: Message passing neural networks (MPNNs) have become a dominant flavor of graph neural networks (GNNs) in recent years.
MPNNs come with notable limitations; they are at most as powerful as the 1-dimensional Weisfeiler-Leman (1-WL) test in distinguishing graphs in a graph isomorphism testing framework.
We propose the (k, c)(<=)-SETWL hierarchy with greatly reduced complexity from k-WL, achieved by moving from k-tuples of nodes to sets with <=k nodes defined over <=c connected components in the induced original graph.
- Score: 27.267362661067285
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Message passing neural networks (MPNNs) have become a dominant flavor of
graph neural networks (GNNs) in recent years. Yet, MPNNs come with notable
limitations; namely, they are at most as powerful as the 1-dimensional
Weisfeiler-Leman (1-WL) test in distinguishing graphs in a graph isomorphism
testing framework. To this end, researchers have drawn inspiration from the
k-WL hierarchy to develop more expressive GNNs. However, current
k-WL-equivalent GNNs are not practical for even small values of k, as k-WL
becomes combinatorially more complex as k grows. At the same time, several
works have found great empirical success in graph learning tasks without highly
expressive models, implying that chasing expressiveness with a coarse-grained
ruler of expressivity like k-WL is often unneeded in practical tasks. To truly
understand the expressiveness-complexity tradeoff, one desires a more
fine-grained ruler, which can more gradually increase expressiveness. Our work
puts forth such a proposal: Namely, we first propose the (k, c)(<=)-SETWL
hierarchy with greatly reduced complexity from k-WL, achieved by moving from
k-tuples of nodes to sets with <=k nodes defined over <=c connected components
in the induced original graph. We show favorable theoretical results for this
model in relation to k-WL, and concretize it via (k, c)(<=)-SETGNN, which is as
expressive as (k, c)(<=)-SETWL. Our model is practical and
progressively-expressive, increasing in power with k and c. We demonstrate
effectiveness on several benchmark datasets, achieving several state-of-the-art
results with runtime and memory usage applicable to practical graphs. We open
source our implementation at https://github.com/LingxiaoShawn/KCSetGNN.
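To make the object behind (k, c)(<=)-SETWL concrete, the sketch below enumerates node sets with at most k nodes whose induced subgraph has at most c connected components. This is an illustrative brute-force enumeration assuming NetworkX; it is not the authors' implementation (see the linked repository for that), and the function name kc_node_sets is hypothetical.

```python
# Illustrative sketch, not the authors' implementation (see the linked repository).
# Enumerates the objects underlying (k, c)(<=)-SETWL: node sets with at most
# k nodes whose induced subgraph has at most c connected components.
from itertools import combinations

import networkx as nx


def kc_node_sets(G: nx.Graph, k: int, c: int):
    """Yield every node set S with |S| <= k whose induced subgraph G[S]
    has at most c connected components."""
    nodes = list(G.nodes())
    for size in range(1, k + 1):
        for S in combinations(nodes, size):
            if nx.number_connected_components(G.subgraph(S)) <= c:
                yield frozenset(S)


# Example on a 5-cycle: c=1 keeps only sets inducing a connected subgraph,
# while c=2 admits strictly more sets; k-WL with k=3 would instead track
# all 5^3 ordered 3-tuples of nodes.
G = nx.cycle_graph(5)
print(len(list(kc_node_sets(G, k=3, c=1))))  # connected sets only
print(len(list(kc_node_sets(G, k=3, c=2))))  # a strictly larger family
```

Even this naive enumeration illustrates why the hierarchy is progressively expressive: raising k or c enlarges the family of sets the model reasons over, while small c keeps the count well below the number of k-tuples that k-WL tracks.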
Related papers
- How Graph Neural Networks Learn: Lessons from Training Dynamics [80.41778059014393]
We study the training dynamics of graph neural networks (GNNs) in function space.
We find that the gradient descent optimization of GNNs implicitly leverages the graph structure to update the learned function.
This finding offers new interpretable insights into when and why the learned GNN functions generalize.
arXiv Detail & Related papers (2023-10-08T10:19:56Z)
- Equivariant Polynomials for Graph Neural Networks [38.15983687193912]
Graph Neural Networks (GNNs) are inherently limited in their expressive power.
This paper introduces an alternative power hierarchy based on the ability of GNNs to compute equivariant polynomials of a certain degree.
These enhanced GNNs demonstrate state-of-the-art results in experiments across multiple graph learning benchmarks.
arXiv Detail & Related papers (2023-02-22T18:53:38Z)
- Rethinking the Expressive Power of GNNs via Graph Biconnectivity [45.4674360883544]
We introduce a novel class of expressivity metrics via graph biconnectivity and highlight their importance in both theory and practice.
We introduce a principled and more efficient approach, called the Generalized Distance Weisfeiler-Lehman (GD-WL).
Practically, we show GD-WL can be implemented by a Transformer-like architecture that preserves expressiveness and enjoys full parallelizability.
arXiv Detail & Related papers (2023-01-23T15:58:59Z)
- Representation Power of Graph Neural Networks: Improved Expressivity via Algebraic Analysis [124.97061497512804]
We show that standard Graph Neural Networks (GNNs) produce more discriminative representations than the Weisfeiler-Lehman (WL) algorithm.
We also show that simple convolutional architectures with white inputs produce equivariant features that count the closed paths in the graph.
arXiv Detail & Related papers (2022-05-19T18:40:25Z)
- 1-WL Expressiveness Is (Almost) All You Need [3.4012007729454807]
It has been shown that message passing neural networks (MPNNs) are at most as expressive as the first-order Weisfeiler-Leman (1-WL) graph isomorphism test (sketched in the code after this list).
In this work, we analyze whether the limited expressiveness is actually a limiting factor for MPNNs and other WL-based models on standard graph datasets.
arXiv Detail & Related papers (2022-02-21T12:05:06Z)
- Weisfeiler and Lehman Go Cellular: CW Networks [3.0017241250121383]
Graph Neural Networks (GNNs) are limited in their expressive power, struggle with long-range interactions and lack a principled way to model higher-order structures.
We extend recent theoretical results on SCs to regular Cell Complexes, topological objects that flexibly subsume SCs and graphs.
We show that this generalisation provides a powerful set of "graph lifting" transformations, each leading to a unique message passing procedure.
arXiv Detail & Related papers (2021-06-23T17:59:16Z)
- Counting Substructures with Higher-Order Graph Neural Networks: Possibility and Impossibility Results [58.277290855841976]
We study tradeoffs between the computational cost and expressive power of Graph Neural Networks (GNNs).
We show that a new model can count subgraphs of size $k$ and thereby overcome a known limitation of low-order GNNs.
In several cases, the proposed algorithm can greatly reduce computational complexity compared to the existing higher-order $k$-GNNs.
arXiv Detail & Related papers (2020-12-06T03:42:54Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of structural features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
arXiv Detail & Related papers (2020-06-16T15:30:31Z)
- Can Graph Neural Networks Count Substructures? [53.256112515435355]
We study the power of graph neural networks (GNNs) via their ability to count attributed graph substructures.
We distinguish between two types of substructure counting: induced-subgraph-count and subgraph-count, and establish both positive and negative answers for popular GNN architectures.
arXiv Detail & Related papers (2020-02-10T18:53:30Z)
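Several of the papers above, like the main abstract, use the 1-WL test as the reference point for MPNN expressiveness. The following is a minimal sketch of 1-WL color refinement, assuming NetworkX; it is a standard formulation shown for illustration only, the function name wl_colors is hypothetical, and the code is not taken from any of the listed papers.

```python
# Minimal sketch of 1-WL color refinement, the expressiveness ceiling for MPNNs
# referenced throughout this page. Standard formulation for illustration only,
# not code from any of the listed papers.
import networkx as nx


def wl_colors(G: nx.Graph, iterations: int = 3):
    """Run 1-WL refinement and return the sorted multiset of final node colors."""
    colors = {v: 0 for v in G.nodes()}  # start from a uniform coloring
    for _ in range(iterations):
        # Each node's new color combines its color with its neighbors' colors.
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in G.neighbors(v)))))
            for v in G.nodes()
        }
    return sorted(colors.values())


# Classic failure case: two disjoint triangles vs. one 6-cycle. Both graphs are
# 2-regular, so 1-WL (and hence any MPNN bounded by it) cannot tell them apart.
G1 = nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(3))
G2 = nx.cycle_graph(6)
print(wl_colors(G1) == wl_colors(G2))  # True: identical color multisets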