A Topological characterisation of Weisfeiler-Leman equivalence classes
- URL: http://arxiv.org/abs/2206.11876v1
- Date: Thu, 23 Jun 2022 17:28:55 GMT
- Title: A Topological characterisation of Weisfeiler-Leman equivalence classes
- Authors: Jacob Bamberger
- Abstract summary: Graph Neural Networks (GNNs) are learning models aimed at processing graphs and signals on graphs.
In this article, we rely on the theory of covering spaces to fully characterize the classes of graphs that GNNs cannot distinguish.
We show that the number of indistinguishable graphs in our dataset grows super-exponentially with the number of nodes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are learning models aimed at processing graphs
and signals on graphs. The most popular and successful GNNs are based on
message passing schemes. Such schemes inherently have limited expressive power
when it comes to distinguishing two non-isomorphic graphs. In this article, we
rely on the theory of covering spaces to fully characterize the classes of
graphs that GNNs cannot distinguish. We then generate arbitrarily many
non-isomorphic graphs that cannot be distinguished by GNNs, leading to the
GraphCovers dataset. We also show that the number of indistinguishable graphs
in our dataset grows super-exponentially with the number of nodes. Finally, we
test the GraphCovers dataset on several GNN architectures, showing that none of
them can distinguish any two graphs it contains.
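The expressivity limit referenced above is the 1-dimensional Weisfeiler-Leman (1-WL) test: message-passing GNNs can distinguish at most what 1-WL distinguishes. Below is a minimal sketch of 1-WL color refinement in Python; the function name and the choice of example graphs are illustrative, not taken from the paper.

```python
from collections import Counter

def wl_colors(adj):
    """1-WL color refinement on a graph given as an adjacency list.

    adj: dict mapping each node to an iterable of its neighbors.
    Returns the multiset of stable node colors. Different multisets
    prove non-isomorphism; equal multisets do NOT prove isomorphism,
    which is exactly the gap the paper characterizes.
    """
    colors = {v: 0 for v in adj}  # start with a uniform coloring
    for _ in range(len(adj)):     # 1-WL stabilizes within n rounds
        # New color = (own color, sorted multiset of neighbor colors).
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # Canonically relabel the signatures with small integers.
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        new_colors = {v: palette[sigs[v]] for v in adj}
        if new_colors == colors:
            break
        colors = new_colors
    return Counter(colors.values())

# A 6-cycle vs. two disjoint triangles: non-isomorphic, both 2-regular,
# and indistinguishable by 1-WL (hence by message-passing GNNs).
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}
assert wl_colors(cycle6) == wl_colors(triangles)
```

This pair is also the simplest instance of the covering-space picture: the 6-cycle and the two disjoint triangles are both double covers of the triangle, which is the kind of relation the paper uses to characterize WL-equivalence classes.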
Related papers
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs (a generic selection sketch follows this entry).
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
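As a rough illustration of coreset selection in an embedding space, here is a generic greedy farthest-point (k-center) heuristic. It is not the SGGC algorithm itself; the function and variable names are our own assumptions.

```python
import numpy as np

def greedy_coreset(emb, k):
    """Greedy farthest-point selection of k rows of emb.

    emb: (n, d) array of embeddings (e.g., spectral embeddings of
    ego-graphs). Starts from point 0, then repeatedly adds the point
    farthest from the current selection, so the chosen subset spreads
    out to cover the embedding space.
    """
    selected = [0]
    dists = np.linalg.norm(emb - emb[0], axis=1)  # distance to nearest pick
    while len(selected) < k:
        nxt = int(np.argmax(dists))
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(emb - emb[nxt], axis=1))
    return selected

emb = np.random.default_rng(0).normal(size=(1000, 8))
print(greedy_coreset(emb, 5))  # indices of 5 well-spread embeddings
```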
- Uplifting the Expressive Power of Graph Neural Networks through Graph Partitioning [3.236774847052122]
We study the expressive power of graph neural networks through the lens of graph partitioning.
We introduce a novel GNN architecture, namely Graph Partitioning Neural Networks (GPNNs).
arXiv Detail & Related papers (2023-12-14T06:08:35Z)
- Graph Neural Networks Use Graphs When They Shouldn't [29.686091109844746]
Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data.
We show that GNNs actually tend to overfit the given graph structure.
arXiv Detail & Related papers (2023-09-08T13:59:18Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Incorporating Heterophily into Graph Neural Networks for Graph Classification [6.709862924279403]
Graph Neural Networks (GNNs) often assume strong homophily for graph classification, seldom considering heterophily.
We develop a novel GNN architecture called IHGNN (short for Incorporating Heterophily into Graph Neural Networks).
We empirically validate IHGNN on various graph datasets and demonstrate that it outperforms the state-of-the-art GNNs for graph classification.
arXiv Detail & Related papers (2022-03-15T06:48:35Z)
- Graph Neural Networks for Graphs with Heterophily: A Survey [98.45621222357397]
We provide a comprehensive review of graph neural networks (GNNs) for heterophilic graphs.
Specifically, we propose a systematic taxonomy that organizes existing heterophilic GNN models.
We discuss the correlation between graph heterophily and various graph research domains, aiming to facilitate the development of more effective GNNs.
arXiv Detail & Related papers (2022-02-14T23:07:47Z)
- Transferability Properties of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) that graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity.
arXiv Detail & Related papers (2021-12-09T00:08:09Z)
- Imbalanced Graph Classification via Graph-of-Graph Neural Networks [16.589373163769853]
Graph Neural Networks (GNNs) have achieved unprecedented success in learning graph representations to identify categorical labels of graphs.
We introduce a novel framework, Graph-of-Graph Neural Networks (G$^2$GNN), which alleviates the graph imbalance issue by deriving extra supervision globally from neighboring graphs and locally from graphs themselves.
Our proposed G$^2$GNN outperforms numerous baselines by roughly 5% in both F1-macro and F1-micro scores.
arXiv Detail & Related papers (2021-12-01T02:25:47Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of structural features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
- Incomplete Graph Representation and Learning via Partial Graph Neural Networks [7.227805463462352]
In many applications, graphs may come in an incomplete form in which the attributes of some nodes are unknown or missing.
Existing GNNs are generally designed for complete graphs and cannot handle attribute-incomplete graph data directly.
We develop novel partial-aggregation-based GNNs, named Partial Graph Neural Networks (PaGNNs), for attribute-incomplete graph representation and learning (a sketch of the masked-aggregation idea follows this list).
arXiv Detail & Related papers (2020-03-23T08:29:59Z)
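The partial-aggregation idea in the last entry can be illustrated with a small sketch: message passing that averages only over neighbors whose attributes are observed. This is a generic illustration under our own naming, not the PaGNN authors' implementation.

```python
import numpy as np

def masked_mean_aggregate(adj, x, known):
    """One round of mean aggregation that ignores attribute-missing nodes.

    adj:   (n, n) 0/1 adjacency matrix.
    x:     (n, d) node features; rows for unknown nodes are placeholders.
    known: (n,) boolean mask, True where node attributes are observed.
    Returns (n, d) aggregated features; nodes with no known neighbors
    keep their own features.
    """
    mask = adj * known[None, :].astype(float)  # keep edges to known nodes only
    counts = mask.sum(axis=1, keepdims=True)   # number of known neighbors
    agg = mask @ x                             # sum features of known neighbors
    return np.where(counts > 0, agg / np.maximum(counts, 1), x)

# Toy example: a path 0-1-2 where node 1's attributes are missing.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.array([[1.0], [0.0], [3.0]])   # node 1's row is a placeholder
known = np.array([True, False, True])
print(masked_mean_aggregate(adj, x, known))
# Node 1 averages its known neighbors (1.0 and 3.0) -> 2.0; nodes 0 and 2
# have only the unknown node 1 as a neighbor, so they keep their own features.
```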