Representation Power of Graph Neural Networks: Improved Expressivity via
Algebraic Analysis
- URL: http://arxiv.org/abs/2205.09801v3
- Date: Fri, 21 Jul 2023 22:30:27 GMT
- Title: Representation Power of Graph Neural Networks: Improved Expressivity via
Algebraic Analysis
- Authors: Charilaos I. Kanatsoulis and Alejandro Ribeiro
- Abstract summary: We show that standard Graph Neural Networks (GNNs) produce more discriminative representations than the Weisfeiler-Lehman (WL) algorithm.
We also show that simple convolutional architectures with white inputs produce equivariant features that count the closed paths in the graph.
- Score: 124.97061497512804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the remarkable success of Graph Neural Networks (GNNs), the common
belief is that their representation power is limited and that they are at most
as expressive as the Weisfeiler-Lehman (WL) algorithm. In this paper, we argue
the opposite and show that standard GNNs, with anonymous inputs, produce more
discriminative representations than the WL algorithm. Our novel analysis
employs linear algebraic tools and characterizes the representation power of
GNNs with respect to the eigenvalue decomposition of the graph operators. We
prove that GNNs are able to generate distinctive outputs from white
uninformative inputs for at least all graphs that have different
eigenvalues. We also show that simple convolutional architectures with white
inputs produce equivariant features that count the closed paths in the graph
and are provably more expressive than the WL representations. Thorough
experimental analysis on graph isomorphism and graph classification datasets
corroborates our theoretical results and demonstrates the effectiveness of the
proposed approach.
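The two central claims above can be made concrete with a small numerical example. Below is a minimal numpy sketch (our illustration, not the authors' code): diag(A^k) counts the closed walks of length k at each node, which is the kind of equivariant closed-path feature the abstract attributes to convolutional GNNs with white inputs, and since trace(A^k) equals the sum of the k-th powers of the eigenvalues, these counts are determined by the spectrum of the graph operator. The test pair, a 6-cycle versus two disjoint triangles, is a standard pair that the 1-WL algorithm cannot distinguish.

```python
import numpy as np

def closed_walk_features(A, K=4):
    # diag(A^k)[v] = number of closed walks of length k from node v
    # back to itself; per the paper's analysis, these equivariant
    # counts are what simple convolutional GNNs with white (random)
    # inputs recover in expectation.
    n = A.shape[0]
    feats, Ak = [], np.eye(n)
    for _ in range(K):
        Ak = Ak @ A
        feats.append(np.diag(Ak).copy())
    return np.stack(feats, axis=1)  # shape (n, K), columns k = 1..K

def cycle(n):
    # Adjacency matrix of the undirected n-cycle.
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

# A 6-cycle vs. two disjoint triangles: both are 2-regular, so 1-WL
# assigns every node the same color and cannot tell them apart, but
# their spectra ({2,1,1,-1,-1,-2} vs {2,2,-1,-1,-1,-1}) and
# closed-walk counts differ.
C6 = cycle(6)
T2 = np.block([[cycle(3), np.zeros((3, 3))],
               [np.zeros((3, 3)), cycle(3)]])

print(closed_walk_features(C6)[0])  # [0. 2. 0. 6.]  no triangle through node 0
print(closed_walk_features(T2)[0])  # [0. 2. 2. 6.]  node 0 lies on a triangle
```

The length-3 feature separates the pair (trace(A^3) is six times the triangle count) even though both graphs are 2-regular and receive identical WL colorings, matching the abstract's claim that white-input convolutional features can be strictly more discriminative than WL representations.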
Related papers
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with graph size on a logarithmic scale, and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
- GOAt: Explaining Graph Neural Networks via Graph Output Attribution [32.66251068600664]
This paper introduces Graph Output Attribution (GOAt), a novel method to attribute graph outputs to input graph features.
GOAt is faithful, discriminative, and stable across similar samples.
We show that our method outperforms various state-of-the-art GNN explainers in terms of the commonly used fidelity metric.
arXiv Detail & Related papers (2024-01-26T00:32:58Z)
- Uplifting the Expressive Power of Graph Neural Networks through Graph Partitioning [3.236774847052122]
We study the expressive power of graph neural networks through the lens of graph partitioning.
We introduce a novel GNN architecture, namely Graph Partitioning Neural Networks (GPNNs).
arXiv Detail & Related papers (2023-12-14T06:08:35Z)
- Generalization Limits of Graph Neural Networks in Identity Effects Learning [12.302336258860116]
Graph Neural Networks (GNNs) have emerged as a powerful tool for data-driven learning on various graph domains.
We establish new generalization properties and fundamental limits of GNNs in the context of learning so-called identity effects.
Our study is motivated by the need to understand the capabilities of GNNs when performing simple cognitive tasks.
arXiv Detail & Related papers (2023-06-30T20:56:38Z)
- Weisfeiler-Lehman goes Dynamic: An Analysis of the Expressive Power of Graph Neural Networks for Attributed and Dynamic Graphs [1.3757956340051607]
Graph Neural Networks (GNNs) are a large class of relational models for graph processing.
Recent studies on the expressive power of GNNs have focused on their ability to distinguish graphs, while real-life applications often involve a much larger variety of graph types.
arXiv Detail & Related papers (2022-10-08T10:14:41Z)
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve strong performance on various learning tasks over geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of structural features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test (a minimal sketch of 1-WL refinement appears after this list).
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
arXiv Detail & Related papers (2020-06-16T15:30:31Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
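Several entries above measure expressivity against the Weisfeiler-Leman test. For reference, here is a minimal sketch of 1-WL color refinement (our illustration, not code from any of the papers): every node starts with the same color, and each round a node's new color encodes its current color together with the multiset of its neighbors' colors. Different color histograms certify non-isomorphism; identical histograms are inconclusive, which is exactly the blind spot the works above target.

```python
from collections import Counter

def wl_histogram(adj, rounds=3):
    # 1-WL color refinement on an adjacency list {node: [neighbors]}.
    # Note: to compare two graphs exactly, refine their disjoint
    # union so colors come from one shared palette; for the
    # all-one-color example below the per-graph shortcut suffices.
    colors = {v: 0 for v in adj}  # anonymous inputs: all nodes alike
    for _ in range(rounds):
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: palette[sigs[v]] for v in adj}
    return Counter(colors.values())

# The same pair as in the sketch under the main abstract: 1-WL sees
# two 2-regular graphs, never refines past a single color, and so
# cannot distinguish them, although they are not isomorphic.
hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}

print(wl_histogram(hexagon))        # Counter({0: 6})
print(wl_histogram(two_triangles))  # Counter({0: 6})  same histogram
```

Pairing this with the closed-walk sketch above shows the gap concretely: 1-WL returns identical histograms for the two graphs, while diag(A^3) does not.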