Rethinking the Expressive Power of GNNs via Graph Biconnectivity
- URL: http://arxiv.org/abs/2301.09505v3
- Date: Sun, 11 Feb 2024 03:44:23 GMT
- Title: Rethinking the Expressive Power of GNNs via Graph Biconnectivity
- Authors: Bohang Zhang, Shengjie Luo, Liwei Wang, Di He
- Abstract summary: We introduce a novel class of expressivity metrics via graph biconnectivity and highlight their importance in both theory and practice.
We introduce a principled and more efficient approach, called the Generalized Distance Weisfeiler-Lehman (GD-WL).
Practically, we show GD-WL can be implemented by a Transformer-like architecture that preserves expressiveness and enjoys full parallelizability.
- Score: 45.4674360883544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing expressive Graph Neural Networks (GNNs) is a central topic in
learning graph-structured data. While numerous approaches have been proposed to
improve GNNs in terms of the Weisfeiler-Lehman (WL) test, generally there is
still a lack of deep understanding of what additional power they can
systematically and provably gain. In this paper, we take a fundamentally
different perspective to study the expressive power of GNNs beyond the WL test.
Specifically, we introduce a novel class of expressivity metrics via graph
biconnectivity and highlight their importance in both theory and practice. As
biconnectivity can be easily calculated using simple algorithms that have
linear computational costs, it is natural to expect that popular GNNs can learn
it easily as well. However, after a thorough review of prior GNN architectures,
we surprisingly find that most of them are not expressive for any of these
metrics. The only exception is the ESAN framework, for which we give a
theoretical justification of its power. We proceed to introduce a principled
and more efficient approach, called the Generalized Distance Weisfeiler-Lehman
(GD-WL), which is provably expressive for all biconnectivity metrics.
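As a rough sketch of the idea, GD-WL replaces 1-WL's neighborhood aggregation with a multiset of (distance, color) pairs taken over all nodes. Below is a minimal Python illustration of one reading of this update, instantiated with shortest-path distance; the function name and the plain-hash shortcut are ours, not the paper's implementation.

```python
import networkx as nx

def gd_wl(G, num_iters=3):
    """Sketch of GD-WL color refinement: each node's new color hashes the
    multiset of (distance, color) pairs over ALL nodes, instantiated here
    with shortest-path distance."""
    dist = dict(nx.all_pairs_shortest_path_length(G))  # d_G(v, u)
    colors = {v: 0 for v in G}                         # uniform initial colors
    for _ in range(num_iters):
        new_colors = {}
        for v in G:
            # Canonicalize the multiset by sorting; -1 marks unreachable pairs.
            sig = tuple(sorted((dist[v].get(u, -1), colors[u]) for u in G))
            # A production implementation would re-index hashes to small ints.
            new_colors[v] = hash(sig)
        colors = new_colors
    return colors
```

Comparing the multisets of final colors of two graphs then plays the role of the (incomplete) isomorphism test, exactly as in standard WL.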
Practically, we show GD-WL can be implemented by a Transformer-like
architecture that preserves expressiveness and enjoys full parallelizability. A
set of experiments on both synthetic and real datasets demonstrates that our
approach can consistently outperform prior GNN architectures.
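The biconnectivity metrics referenced above concern cut vertices (articulation points) and cut edges (bridges), both obtainable in linear time via Tarjan-style depth-first search. A quick illustration using networkx; the two toy graphs are our own examples, not the paper's:

```python
import networkx as nx

# A path graph has a cut vertex at every interior node; a cycle graph has none.
for name, G in [("path", nx.path_graph(4)), ("cycle", nx.cycle_graph(4))]:
    cut_vertices = set(nx.articulation_points(G))  # vertex-biconnectivity
    cut_edges = set(nx.bridges(G))                 # edge-biconnectivity
    blocks = list(nx.biconnected_components(G))    # biconnected blocks
    print(name, cut_vertices, cut_edges, blocks)
```

These quantities give simple ground-truth targets for probing whether a GNN's representations actually encode biconnectivity.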
Related papers
- From Relational Pooling to Subgraph GNNs: A Universal Framework for More Expressive Graph Neural Networks [8.121462458089141]
We show how to assign labels to nodes to improve the expressive power of message passing neural networks.
We experimentally show that our method is universally compatible and capable of improving the expressivity of any base GNN model.
Our $k,l$-GNNs achieve superior performance on many synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-08T18:00:50Z)
- Equivariant Polynomials for Graph Neural Networks [38.15983687193912]
Graph Neural Networks (GNNs) are inherently limited in their expressive power.
This paper introduces an alternative power hierarchy based on the ability of GNNs to compute equivariant polynomials of a certain degree.
These enhanced GNNs demonstrate state-of-the-art results in experiments across multiple graph learning benchmarks.
arXiv Detail & Related papers (2023-02-22T18:53:38Z)
- A Complete Expressiveness Hierarchy for Subgraph GNNs via Subgraph Weisfeiler-Lehman Tests [30.558563815430418]
Subgraph GNNs have emerged as an important direction for developing expressive graph neural networks.
This paper conducts a systematic study of general node-based subgraph GNNs through the lens of Subgraph Weisfeiler-Lehman Tests (SWL).
We prove that any node-based subgraph GNN falls into one of the six SWL equivalence classes, among which $\mathsf{SSWL}$ achieves the maximal expressive power.
arXiv Detail & Related papers (2023-02-14T14:42:54Z)
- A Practical, Progressively-Expressive GNN [27.267362661067285]
Message passing neural networks (MPNNs) have become a dominant flavor of graph neural networks (GNNs) in recent years.
MPNNs come with notable limitations; they are at most as powerful as the 1-dimensional Weisfeiler-Leman (1-WL) test in distinguishing graphs in a graph isomorphism testing framework.
We propose the (k, c)(≤)-SETWL hierarchy with greatly reduced complexity from k-WL, achieved by moving from k-tuples of nodes to sets of ≤k nodes whose induced subgraphs have ≤c connected components.
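For intuition, the sets this hierarchy operates on can be enumerated directly. A small sketch (our own illustration, not the authors' code) that collects node sets of at most k nodes whose induced subgraph has at most c connected components:

```python
from itertools import combinations
import networkx as nx

def kc_sets(G, k, c):
    """Enumerate node sets with <= k nodes whose induced subgraph has
    <= c connected components, the objects underlying (k, c)(<=)-SETWL."""
    for size in range(1, k + 1):
        for nodes in combinations(G.nodes, size):
            if nx.number_connected_components(G.subgraph(nodes)) <= c:
                yield frozenset(nodes)

G = nx.cycle_graph(5)
print(sorted(map(sorted, kc_sets(G, k=2, c=1))))  # singletons plus adjacent pairs
```

Restricting to small, nearly-connected sets is what shrinks the k-WL object space from all k-tuples down to something practical.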
arXiv Detail & Related papers (2022-10-18T01:27:21Z)
- Representation Power of Graph Neural Networks: Improved Expressivity via Algebraic Analysis [124.97061497512804]
We show that standard Graph Neural Networks (GNNs) produce more discriminative representations than the Weisfeiler-Lehman (WL) algorithm.
We also show that simple convolutional architectures with white inputs produce equivariant features that count the closed paths in the graph.
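The closed-path statement has a concrete linear-algebra reading that is easy to verify numerically: the diagonal of the k-th power of the adjacency matrix counts closed walks of length k, so for example diag(A^3)/2 gives per-node triangle counts. A quick check on an example of our own:

```python
import numpy as np
import networkx as nx

G = nx.complete_graph(4)                 # every node lies in exactly 3 triangles
A = nx.to_numpy_array(G)
closed_walks_3 = np.diag(np.linalg.matrix_power(A, 3))
print(closed_walks_3 / 2)                # [3. 3. 3. 3.]; each triangle is
                                         # traversed in two directions
```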
arXiv Detail & Related papers (2022-05-19T18:40:25Z)
- Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth [57.10183643449905]
Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization.
We study the training dynamics of GNNs, focusing on the implicit acceleration provided by skip connections and greater depth.
Our results provide the first theoretical support for the success of GNNs.
arXiv Detail & Related papers (2021-05-10T17:59:01Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order WL tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning methods.
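A minimal sketch of the general idea, with distances from a target node set one-hot encoded as extra node features; the capping scheme and function name are our choices, and the paper also covers more general distance measures than shortest paths:

```python
import numpy as np
import networkx as nx

def distance_encoding(G, target_set, max_dist=5):
    """Append, for each node, a one-hot encoding of its shortest-path
    distance to every node in target_set (capped; last slot = unreachable)."""
    nodes = list(G.nodes)
    feats = np.zeros((len(nodes), len(target_set) * (max_dist + 2)))
    for j, t in enumerate(sorted(target_set)):
        dist = nx.single_source_shortest_path_length(G, t)
        for i, v in enumerate(nodes):
            d = min(dist.get(v, max_dist + 1), max_dist + 1)
            feats[i, j * (max_dist + 2) + d] = 1.0
    return feats

G = nx.cycle_graph(6)
print(distance_encoding(G, {0}).shape)   # (6, 7): one 7-slot block per target
```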
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Expressive Power of Invariant and Equivariant Graph Neural Networks [10.419350129060598]
We show that Folklore Graph Neural Networks (FGNN) are the most expressive architectures proposed so far for a given tensor order.
FGNNs are able to learn how to solve the problem, leading to much better average performance than existing algorithms.
arXiv Detail & Related papers (2020-06-28T16:35:45Z)
- Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
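The substructure-encoding idea is easy to make concrete: precompute per-node counts of some chosen substructures and feed them alongside ordinary node features during message passing. A sketch with triangles as the (arbitrary) choice of substructure; GSN itself supports arbitrary small graphs via subgraph isomorphism counting:

```python
import networkx as nx

def substructure_features(G):
    """GSN-style structural identifiers (sketch): per-node triangle counts,
    paired here with degree as a plain baseline feature."""
    tri = nx.triangles(G)                # node -> number of triangles through it
    return {v: [tri[v], G.degree(v)] for v in G}

G = nx.wheel_graph(6)                    # hub joined to a 5-cycle rim
print(substructure_features(G))          # hub sits in 5 triangles, rim nodes in 2
```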
arXiv Detail & Related papers (2020-06-16T15:30:31Z)
- Node Masking: Making Graph Neural Networks Generalize and Scale Better [71.51292866945471]
Graph Neural Networks (GNNs) have received a lot of interest in recent times.
In this paper, we utilize some theoretical tools to better visualize the operations performed by state-of-the-art spatial GNNs.
We introduce a simple concept, Node Masking, that allows them to generalize and scale better.
arXiv Detail & Related papers (2020-01-17T06:26:40Z)