Expressiveness and Approximation Properties of Graph Neural Networks
- URL: http://arxiv.org/abs/2204.04661v1
- Date: Sun, 10 Apr 2022 11:33:04 GMT
- Title: Expressiveness and Approximation Properties of Graph Neural Networks
- Authors: Floris Geerts, Juan L. Reutter
- Abstract summary: We provide an elegant way to obtain bounds on the separation power of GNNs in terms of the Weisfeiler-Leman (WL) tests.
We use tensor language to define Higher-Order Message-Passing Neural Networks (or k-MPNNs), a natural extension of MPNNs.
Our approach provides a toolbox with which GNN architecture designers can analyze the separation power of their GNNs.
- Score: 6.1323142294300625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Characterizing the separation power of graph neural networks (GNNs) provides
an understanding of their limitations for graph learning tasks. Results
regarding separation power are, however, usually geared at specific GNN
architectures, and tools for understanding arbitrary GNN architectures are
generally lacking. We provide an elegant way to easily obtain bounds on the
separation power of GNNs in terms of the Weisfeiler-Leman (WL) tests, which
have become the yardstick to measure the separation power of GNNs. The crux is
to view GNNs as expressions in a procedural tensor language describing the
computations in the layers of the GNNs. Then, by a simple analysis of the
obtained expressions, in terms of the number of indexes and the nesting depth
of summations, bounds on the separation power in terms of the WL-tests readily
follow. We use tensor language to define Higher-Order Message-Passing Neural
Networks (or k-MPNNs), a natural extension of MPNNs. Furthermore, the tensor
language point of view allows for the derivation of universality results for
classes of GNNs in a natural way. Our approach provides a toolbox with which
GNN architecture designers can analyze the separation power of their GNNs,
without needing to know the intricacies of the WL-tests. We also provide
insights into what is needed to boost the separation power of GNNs.
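Since the WL tests are the yardstick throughout, the following minimal Python sketch (ours, not the paper's; the helper wl_refine and the toy graphs are purely illustrative) shows 1-WL color refinement. Each round combines a node's color with the multiset of its neighbors' colors, mirroring the aggregate/update structure of an MPNN layer; the paper's index-counting analysis relates the summation structure of a GNN's tensor-language expression to (higher-order versions of) exactly this procedure.

```python
from collections import Counter

def wl_refine(adj, rounds=3):
    """adj: dict mapping each node to a list of its neighbors."""
    colors = {v: 0 for v in adj}                       # uniform initial coloring
    for _ in range(rounds):
        # New color = (own color, multiset of neighbor colors), relabeled;
        # this mirrors the aggregate/update structure of an MPNN layer.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: relabel[signatures[v]] for v in adj}
    return Counter(colors.values())                    # color histogram

# A 6-cycle versus two disjoint triangles: 1-WL cannot tell them apart.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_refine(cycle6) == wl_refine(triangles))       # True: indistinguishable
```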
Related papers
- MAG-GNN: Reinforcement Learning Boosted Graph Neural Network [68.60884768323739]
A particular line of work proposed subgraph GNNs that use subgraph information to improve GNNs' expressivity and achieved great success.
Such effectiveness, however, sacrifices the efficiency of GNNs, since it requires enumerating all possible subgraphs.
We propose Magnetic Graph Neural Network (MAG-GNN), a reinforcement learning (RL) boosted GNN, to solve the problem.
arXiv Detail & Related papers (2023-10-29T20:32:21Z)
- Generalization Limits of Graph Neural Networks in Identity Effects Learning [12.302336258860116]
Graph Neural Networks (GNNs) have emerged as a powerful tool for data-driven learning on various graph domains.
We establish new generalization properties and fundamental limits of GNNs in the context of learning so-called identity effects.
Our study is motivated by the need to understand the capabilities of GNNs when performing simple cognitive tasks.
arXiv Detail & Related papers (2023-06-30T20:56:38Z)
- Equivariant Polynomials for Graph Neural Networks [38.15983687193912]
Graph Neural Networks (GNNs) are inherently limited in their expressive power.
This paper introduces an alternative power hierarchy based on the ability of GNNs to compute equivariant polynomials of a certain degree.
These enhanced GNNs demonstrate state-of-the-art results in experiments across multiple graph learning benchmarks.
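As a concrete illustration (ours, not the paper's construction), a degree-2 equivariant polynomial of the adjacency matrix is P(A) = A @ A, which counts 2-walks between node pairs; the sketch below verifies numerically that relabeling the nodes commutes with evaluating P.

```python
import numpy as np

def P(adjacency):
    # Degree-2 equivariant polynomial: entry (i, j) counts 2-walks i -> j.
    return adjacency @ adjacency

rng = np.random.default_rng(0)
n = 5
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T                               # random simple undirected graph

perm = rng.permutation(n)
Q = np.eye(n)[perm]                       # permutation matrix for the relabeling

# Equivariance: P(Q A Q^T) == Q P(A) Q^T for every permutation Q.
print(np.allclose(P(Q @ A @ Q.T), Q @ P(A) @ Q.T))   # True
```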
arXiv Detail & Related papers (2023-02-22T18:53:38Z)
- Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs [71.93227401463199]
This paper traces the major source of GNNs' performance gains to their intrinsic generalization capability, introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training.
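The abstract suggests a clean separation between training-time and test-time computation; here is a minimal numpy sketch of how we read that idea (the function names and the mean-aggregation choice are our assumptions, not the authors' exact design): the same trained MLP weights are used in both modes, and test time merely interleaves parameter-free neighbor averaging.

```python
import numpy as np

def forward(X, weights, A_hat=None):
    """Shared forward pass: a plain MLP if A_hat is None, a GNN-style
    pass if A_hat (row-normalized adjacency) is supplied at test time."""
    H = X
    for W in weights:
        if A_hat is not None:
            H = A_hat @ H                 # parameter-free message passing
        H = np.maximum(H @ W, 0.0)        # the same trained MLP layer (ReLU)
    return H

rng = np.random.default_rng(1)
n, d = 8, 4
X = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
A_hat = A / A.sum(axis=1, keepdims=True)  # row-normalized adjacency

weights = [rng.normal(size=(d, d)) for _ in range(2)]  # stand-in "trained" weights
mlp_out = forward(X, weights)             # training-time behavior: pure MLP
pmlp_out = forward(X, weights, A_hat)     # test-time behavior: GNN-like
print(mlp_out.shape, pmlp_out.shape)      # (8, 4) (8, 4)
```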
arXiv Detail & Related papers (2022-12-18T08:17:32Z)
- Edge-Level Explanations for Graph Neural Networks by Extending Explainability Methods for Convolutional Neural Networks [33.20913249848369]
Graph Neural Networks (GNNs) are deep learning models that take graph data as inputs, and they are applied to various tasks such as traffic prediction and molecular property prediction.
We extend explainability methods for CNNs, such as Local Interpretable Model-Agnostic Explanations (LIME), Gradient-Based Saliency Maps, and Gradient-Weighted Class Activation Mapping (Grad-CAM) to GNNs.
The experimental results indicate that the LIME-based approach is the most efficient explainability method for multiple tasks in real-world situations, outperforming even the state-of-the-art method.
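For one of the extended methods, Grad-CAM, a minimal sketch of the carry-over from CNNs to GNNs might look as follows (our adaptation, not the authors' exact pipeline): CNN feature maps become per-node channels, channel weights come from gradients of the class score, and the heat values land on nodes instead of pixels. For the linear readout used here the gradient is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 6, 5, 3
X = rng.normal(size=(n, d))                    # node features
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
A_hat = A / A.sum(axis=1, keepdims=True)

W = rng.normal(size=(d, k))                    # one GCN-style layer
w_out = rng.normal(size=k)                     # linear graph-level readout

F = A_hat @ X @ W                              # node-level "feature maps"
score = F.mean(axis=0) @ w_out                 # class score (mean pooling)

# For this linear readout, d(score)/dF[v, c] = w_out[c] / n, so the
# Grad-CAM channel weights alpha_c reduce to w_out / n.
alpha = w_out / n
node_heat = np.maximum(F @ alpha, 0.0)         # ReLU-weighted node relevance
print(node_heat.round(3))                      # importance of each node
```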
arXiv Detail & Related papers (2021-11-01T06:27:29Z)
- Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth [57.10183643449905]
Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization.
We study the training dynamics of GNNs, analyzing how skip connections and greater depth implicitly accelerate optimization.
Our results provide the first theoretical support for the success of GNNs.
arXiv Detail & Related papers (2021-05-10T17:59:01Z)
- The Surprising Power of Graph Neural Networks with Random Node Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
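Random node initialization itself is a one-line idea; this sketch (ours) shows the mechanism: appending freshly drawn random features makes otherwise identical nodes distinguishable, which is what lifts these models past the 1-WL barrier (at the price of only probabilistic invariance).

```python
import numpy as np

def with_rni(X, extra_dims=4, rng=None):
    """Append extra_dims freshly drawn random features to every node."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(size=(X.shape[0], extra_dims))
    return np.concatenate([X, noise], axis=1)

X = np.ones((6, 3))        # six nodes with identical features
X_rni = with_rni(X)        # now every node is almost surely unique
print(X_rni.shape)         # (6, 7)
```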
arXiv Detail & Related papers (2020-10-02T19:53:05Z)
- Expressive Power of Invariant and Equivariant Graph Neural Networks [10.419350129060598]
We show that Folklore Graph Neural Networks (FGNN) are the most expressive architectures proposed so far for a given tensor order.
FGNNs are able to learn how to solve the graph alignment problem, leading to much better average performance than existing algorithms.
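As a rough illustration of the architecture (our simplification, with hypothetical shapes), an FGNN layer keeps its state on node pairs and updates it in the 2-FWL style h'[i, j] = f0(h[i, j]) + sum_k f1(h[i, k]) * f2(h[k, j]):

```python
import numpy as np

def fgnn_layer(H, W0, W1, W2):
    """H: (n, n, d) pairwise features; W0, W1, W2: (d, d) channel maps."""
    f0, f1, f2 = H @ W0, H @ W1, H @ W2
    # Sum over the intermediate node k, elementwise in the channel dim:
    interaction = np.einsum('ikd,kjd->ijd', f1, f2)
    return np.maximum(f0 + interaction, 0.0)

rng = np.random.default_rng(3)
n, d = 5, 4
H = rng.normal(size=(n, n, d))
W0, W1, W2 = (rng.normal(size=(d, d)) for _ in range(3))
out = fgnn_layer(H, W0, W1, W2)
print(out.shape)           # (5, 5, 4)
```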
arXiv Detail & Related papers (2020-06-28T16:35:45Z)
- Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
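To make substructure encoding concrete, here is a minimal sketch (our example; GSN supports general patterns, we use triangles) that counts, for each node, the triangles it participates in and appends the counts to the input features:

```python
import numpy as np

def triangle_counts(A):
    # diag(A^3)[v] counts closed 3-walks at v; each triangle is traversed
    # twice (once per direction), hence the division by 2.
    return np.diag(A @ A @ A) / 2

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # one triangle on nodes 0, 1, 2
X = np.ones((4, 2))                         # plain node features
X_gsn = np.concatenate([X, triangle_counts(A)[:, None]], axis=1)
print(X_gsn[:, -1])                         # [1. 1. 1. 0.]
```

Since triangle counts are invisible to 1-WL (recall the 6-cycle versus two-triangles example above), features like these directly explain the expressivity gain.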
arXiv Detail & Related papers (2020-06-16T15:30:31Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
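A minimal sketch of the plug-in idea as we read it from the abstract (the choice of adjacency eigenvectors and the hyperparameter k are our assumptions): concatenate a few leading eigenvectors, which summarize global structure, onto the node features before they enter any GNN.

```python
import numpy as np

def eigen_features(A, k=2):
    """Leading eigenvectors (by eigenvalue magnitude) of a symmetric adjacency."""
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(-np.abs(vals))
    return vecs[:, order[:k]]

rng = np.random.default_rng(4)
n = 8
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0.0)

X = rng.normal(size=(n, 3))
X_aug = np.concatenate([X, eigen_features(A, k=2)], axis=1)  # GNN input
print(X_aug.shape)         # (8, 5)
```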
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.