On the Universality of Graph Neural Networks on Large Random Graphs
- URL: http://arxiv.org/abs/2105.13099v2
- Date: Fri, 28 May 2021 20:23:31 GMT
- Title: On the Universality of Graph Neural Networks on Large Random Graphs
- Authors: Nicolas Keriven, Alberto Bietti, Samuel Vaiter
- Abstract summary: We study the approximation power of Graph Neural Networks (GNNs) on latent position random graphs.
In the large graph limit, GNNs are known to converge to certain "continuous" models known as c-GNNs.
We show that c-SGNNs are strictly more powerful than c-GNNs in the continuous limit.
- Score: 22.720758067273195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the approximation power of Graph Neural Networks (GNNs) on latent
position random graphs. In the large graph limit, GNNs are known to converge to
certain "continuous" models known as c-GNNs, which directly enables a study of
their approximation power on random graph models. In the absence of input node
features, however, just as GNNs are limited by the Weisfeiler-Lehman isomorphism
test, c-GNNs will be severely limited on simple random graph models. For
instance, they will fail to distinguish the communities of a well-separated
Stochastic Block Model (SBM) with constant degree function. Thus, we consider
recently proposed architectures that augment GNNs with unique node identifiers,
referred to as Structural GNNs here (SGNNs). We study the convergence of SGNNs
to their continuous counterpart (c-SGNNs) in the large random graph limit,
under new conditions on the node identifiers. We then show that c-SGNNs are
strictly more powerful than c-GNNs in the continuous limit, and prove their
universality on several random graph models of interest, including most SBMs
and a large class of random geometric graphs. Our results cover both
permutation-invariant and permutation-equivariant architectures.
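The following NumPy sketch is a loose illustration of the abstract's central contrast, not code from the paper; the graph size, feature width, and two-layer network are illustrative choices. It builds a two-community SBM whose degree function is constant, runs a mean-aggregation GNN on constant input features (the c-GNN regime, where the communities are indistinguishable), then reruns it with unique random node identifiers standing in for the SGNN identifiers studied here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-community SBM with a constant degree function: within-community edge
# probability p, across-community probability q; by symmetry every node has
# the same expected degree, so degrees carry no community information.
n, p, q = 200, 0.8, 0.2
labels = np.repeat([0, 1], n // 2)
probs = np.where(labels[:, None] == labels[None, :], p, q)
A = (rng.random((n, n)) < probs).astype(float)
A = np.triu(A, 1)
A = A + A.T                                    # symmetric, no self-loops

def gnn_layer(A, X, W):
    """One mean-aggregation message-passing layer: relu((D^-1 A X) W)."""
    deg = A.sum(1, keepdims=True).clip(min=1)
    return np.maximum((A @ X / deg) @ W, 0)

W1, W2 = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))

def community_gap(X):
    """Distance between the mean embeddings of the two communities."""
    H = gnn_layer(A, gnn_layer(A, X, W1), W2)
    return np.linalg.norm(H[labels == 0].mean(0) - H[labels == 1].mean(0))

# c-GNN regime: constant input features. Every node computes exactly the
# same embedding, so the community partition is invisible (gap is 0).
print("constant features :", community_gap(np.ones((n, 8))))

# SGNN regime: unique (random) node identifiers break the symmetry, and the
# community means separate.
print("node identifiers  :", community_gap(rng.standard_normal((n, 8))))
```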
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than message-passing GNNs (MPGNNs).
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- KerGNNs: Interpretable Graph Neural Networks with Graph Kernels [14.421535610157093]
Graph neural networks (GNNs) have become the state-of-the-art method in downstream graph-related tasks.
We propose a novel GNN framework, termed Kernel Graph Neural Networks (KerGNNs).
KerGNNs integrate graph kernels into the message passing process of GNNs.
We show that our method achieves competitive performance compared with existing state-of-the-art methods.
arXiv Detail & Related papers (2022-01-03T06:16:30Z)
- Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Network (GNN) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
arXiv Detail & Related papers (2021-06-12T07:43:51Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem (a compact form of this objective is spelled out after this list).
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- The Surprising Power of Graph Neural Networks with Random Node Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI), illustrated by a sketch after this list.
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
arXiv Detail & Related papers (2020-10-02T19:53:05Z)
- Permutation-equivariant and Proximity-aware Graph Neural Networks with Stochastic Message Passing [88.30867628592112]
Graph neural networks (GNNs) are emerging machine learning models on graphs.
Permutation-equivariance and proximity-awareness are two important properties highly desirable for GNNs.
We show that existing GNNs, mostly based on the message-passing mechanism, cannot simultaneously preserve the two properties.
In order to preserve node proximities, we augment the existing GNNs with stochastic node representations.
arXiv Detail & Related papers (2020-09-05T16:46:56Z)
- Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
- Random Features Strengthen Graph Neural Networks [40.60905158071766]
Graph neural networks (GNNs) are powerful machine learning models for various graph learning tasks.
In this paper, we demonstrate that GNNs become more powerful simply by adding a random feature to each node.
We show that the addition of random features enables GNNs to solve various problems that normal GNNs, including graph convolutional networks (GCNs) and graph isomorphism networks (GINs), cannot solve; the sketch after this list illustrates the effect.
arXiv Detail & Related papers (2020-02-08T12:47:29Z)
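For the "Graph Signal Denoising" entry above, the unifying observation admits a compact statement; the following display is a paraphrase in my own notation, not copied from that paper. Here $X$ are the input features, $\tilde{A}$ the normalized adjacency, and $L = I - \tilde{A}$ the corresponding Laplacian:

```latex
% Aggregation as approximate graph signal denoising (paraphrase):
\min_{F}\; \|F - X\|_F^2 \;+\; c\,\operatorname{tr}\!\left(F^\top L F\right),
  \qquad c > 0.
% One gradient step from F = X with step size 1/2 gives
%   F \leftarrow X - c\,L X \;=\; (1 - c)\,X + c\,\tilde{A}X,
% which for c = 1 is exactly GCN-style aggregation, F = \tilde{A}X.
```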
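The random-node-initialization and random-features entries above make the same point: random inputs let message-passing GNNs separate graphs that the 1-WL test, and hence standard GCNs and GINs, cannot. Below is a minimal NumPy sketch of that effect; the architecture and constants are mine, not from either paper. Two disjoint triangles and a single hexagon are both 2-regular on six nodes, so with constant features their readouts coincide exactly, while random features drive them apart.

```python
import numpy as np

rng = np.random.default_rng(0)

def cycle(n):
    """Adjacency matrix of the n-cycle C_n."""
    A = np.zeros((n, n))
    idx = np.arange(n)
    A[idx, (idx + 1) % n] = 1
    A[(idx + 1) % n, idx] = 1
    return A

# Two 2-regular graphs on 6 nodes that 1-WL cannot distinguish.
A1 = np.zeros((6, 6))
A1[:3, :3] = cycle(3)
A1[3:, 3:] = cycle(3)          # two disjoint triangles
A2 = cycle(6)                  # one hexagon

def gnn_embed(A, X, Ws):
    """Sum aggregation over {v} and N(v), then a permutation-invariant readout."""
    H = X
    for W in Ws:
        H = np.tanh((A + np.eye(len(A))) @ H @ W)
    return H.sum(axis=0)

Ws = [rng.standard_normal((4, 4)) for _ in range(2)]

# Constant features: every node in both graphs sees the same multiset at
# every round, so the two graph embeddings are identical.
X = np.ones((6, 4))
print(np.allclose(gnn_embed(A1, X, Ws), gnn_embed(A2, X, Ws)))   # True

# Random node features (RNI): the symmetry is broken and the embeddings
# differ, so the two graphs become separable.
X1, X2 = rng.standard_normal((6, 4)), rng.standard_normal((6, 4))
print(np.allclose(gnn_embed(A1, X1, Ws), gnn_embed(A2, X2, Ws))) # False
```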