Graph Neural Networks: Architectures, Stability and Transferability
- URL: http://arxiv.org/abs/2008.01767v3
- Date: Fri, 29 Jan 2021 18:52:54 GMT
- Title: Graph Neural Networks: Architectures, Stability and Transferability
- Authors: Luana Ruiz, Fernando Gama, Alejandro Ribeiro
- Abstract summary: Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
- Score: 176.3960927323358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are information processing architectures for
signals supported on graphs. They are presented here as generalizations of
convolutional neural networks (CNNs) in which individual layers contain banks
of graph convolutional filters instead of banks of classical convolutional
filters. Otherwise, GNNs operate as CNNs. Filters are composed with pointwise
nonlinearities and stacked in layers. It is shown that GNN architectures
exhibit equivariance to permutation and stability to graph deformations. These
properties help explain the empirically observed good performance of GNNs. It is
also shown that if graphs converge to a limit object, a
graphon, GNNs converge to a corresponding limit object, a graphon neural
network. This convergence justifies the transferability of GNNs across networks
with different numbers of nodes. Concepts are illustrated by the application of
GNNs to recommendation systems, decentralized collaborative control, and
wireless communication networks.
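To make the architecture concrete, here is a minimal numpy sketch of one GNN layer, assuming the adjacency matrix as the graph shift operator, a three-tap filter, and a ReLU nonlinearity (all illustrative choices, not the paper's experimental setup). It also checks permutation equivariance numerically and shows that the same filter taps run unchanged on a graph of a different size, which is the mechanism behind transferability.

```python
import numpy as np

def graph_conv(S, x, h):
    """Graph convolutional filter: y = sum_k h[k] * S^k @ x,
    where S is the graph shift operator (here, the adjacency matrix)."""
    y, z = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * z
        z = S @ z                 # diffuse the signal one more hop
    return y

def gnn_layer(S, x, h):
    """One GNN layer: a graph convolution composed with a pointwise ReLU."""
    return np.maximum(graph_conv(S, x, h), 0.0)

rng = np.random.default_rng(0)
n = 8
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T    # undirected graph, no self-loops
x = rng.standard_normal(n)        # graph signal: one value per node
h = np.array([0.5, 0.3, 0.2])     # filter taps, shared by all nodes

# Permutation equivariance: relabeling the nodes permutes the output identically.
P = np.eye(n)[rng.permutation(n)]
assert np.allclose(P @ gnn_layer(A, x, h), gnn_layer(P @ A @ P.T, P @ x, h))

# Transferability: h has no dependence on n, so the same layer runs on a
# larger graph without retraining.
m = 100
B = (rng.random((m, m)) < 0.1).astype(float)
B = np.triu(B, 1); B = B + B.T
print(gnn_layer(B, rng.standard_normal(m), h).shape)   # (100,)
```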
Related papers
- Transferability of Graph Neural Networks using Graphon and Sampling Theories [0.0]
Graph neural networks (GNNs) have become powerful tools for processing graph-based information in various domains.
A desirable property of GNNs is transferability, where a trained network can be applied to a different graph without retraining and still retain its accuracy.
We contribute to the application of graphons to GNNs by presenting an explicit two-layer graphon neural network (WNN) architecture.
arXiv Detail & Related papers (2023-07-25T02:11:41Z)
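As a companion to the entry above, the graphon sampling idea can be sketched: draw node latents u_i uniformly, connect nodes with probability W(u_i, u_j), and run a fixed graph filter on sampled graphs of growing size. The graphon W, the 1/n normalization of the shift operator, the filter taps, and the input signal below are all illustrative assumptions; the paper's explicit two-layer WNN architecture is not reproduced here.

```python
import numpy as np

def W(u, v):
    """A two-block graphon (an illustrative choice, not taken from the paper)."""
    return np.where((u < 0.5) == (v < 0.5), 0.8, 0.2)

def sample_graph(n, rng):
    """Sample an n-node graph from the graphon: u_i ~ U[0,1], edge w.p. W(u_i, u_j)."""
    u = np.sort(rng.random(n))
    A = (rng.random((n, n)) < W(u[:, None], u[None, :])).astype(float)
    A = np.triu(A, 1); A = A + A.T
    return u, A / n               # 1/n normalization so filters have a graphon limit

def graph_conv(S, x, h):
    y, z = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * z
        z = S @ z
    return y

rng = np.random.default_rng(1)
h = np.array([1.0, 5.0, 5.0])     # fixed taps, shared across all graph sizes
for n in (50, 500, 5000):
    u, S = sample_graph(n, rng)
    y = graph_conv(S, np.cos(2 * np.pi * u), h)   # input induced by a graphon signal
    print(n, y @ y / n)           # per-node energy; stabilizes as n grows
```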
- KerGNNs: Interpretable Graph Neural Networks with Graph Kernels [14.421535610157093]
Graph neural networks (GNNs) have become the state-of-the-art method in downstream graph-related tasks.
We propose a novel GNN framework, termed Kernel Graph Neural Networks (KerGNNs).
KerGNNs integrate graph kernels into the message passing process of GNNs.
We show that our method achieves competitive performance compared with existing state-of-the-art methods.
arXiv Detail & Related papers (2022-01-03T06:16:30Z)
- Identity-aware Graph Neural Networks [63.6952975763946]
We develop a class of message passing Graph Neural Networks, named Identity-aware Graph Neural Networks (ID-GNNs), with greater expressive power than the 1-WL test.
ID-GNN extends existing GNN architectures by inductively considering nodes' identities during message passing.
We show that transforming existing GNNs to ID-GNNs yields on average a 40% accuracy improvement on challenging node, edge, and graph property prediction tasks.
arXiv Detail & Related papers (2021-01-25T18:59:01Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
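The denoising view in the entry above can be made concrete with a short derivation (a sketch of the general idea, with illustrative constants, not the paper's full framework): for a graph Laplacian L and noisy signal X, the objective J(F) = ||F - X||^2 + c * F'LF has gradient 2(F - X) + 2cLF, and a single gradient step from F = X with step size 1/2 yields F = X - cLX, which rearranges into a keep-own-value-plus-weighted-neighbor-sum aggregation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(1)) - A          # combinatorial graph Laplacian L = D - A
X = rng.standard_normal(n)         # noisy graph signal to be denoised
c = 0.1                            # smoothness weight (illustrative value)

# One gradient step on J(F) = ||F - X||^2 + c * F^T L F, taken from F = X
# with step size 1/2:  F <- X - (1/2) * (2*(X - X) + 2*c*L@X) = X - c*L@X
F_step = X - c * (L @ X)

# The same update, rewritten as a GNN-style neighborhood aggregation:
deg = A.sum(1)
F_agg = (1 - c * deg) * X + c * (A @ X)   # own value plus c-weighted neighbor sum

assert np.allclose(F_step, F_agg)
```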
- Graphon Neural Networks and the Transferability of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) rely on graph convolutions to extract local features from network data.
We introduce graphon NNs as limit objects of GNNs and prove a bound on the difference between the output of a GNN and its limit graphon NN.
This result establishes a tradeoff between discriminability and transferability of GNNs.
arXiv Detail & Related papers (2020-06-05T16:41:08Z)
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
- Generalization and Representational Limits of Graph Neural Networks [46.20253808402385]
We prove that several important graph properties cannot be computed by graph neural networks (GNNs) that rely entirely on local information.
We provide the first data dependent generalization bounds for message passing GNNs.
Our bounds are much tighter than existing VC-dimension based guarantees for GNNs, and are comparable to Rademacher bounds for recurrent neural networks.
arXiv Detail & Related papers (2020-02-14T18:10:14Z)
- EdgeNets: Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general linear and local operation that a node can perform, and it encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs).
arXiv Detail & Related papers (2020-01-21T15:51:17Z)
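Reading the EdgeNets entry above literally, the edge-varying operation can be sketched as y = Phi @ x, where Phi shares the graph's sparsity pattern but carries a free coefficient per (node, neighbor) pair; a shared-tap graph convolution is the special case Phi = h1 * A. All names and values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T
x = rng.standard_normal(n)

# Classical one-hop graph convolution: a single scalar tap shared by all nodes.
h1 = 0.7
y_conv = h1 * (A @ x)

# Edge-varying filter: one free parameter per (node, neighbor) pair, supported
# only on existing edges (plus self-loops), so it remains a local operation.
# Different nodes weigh different neighbors differently, which subsumes shared
# GCNN taps and attention-style weights as special cases.
mask = A + np.eye(n)
Phi = mask * rng.standard_normal((n, n))  # learned in practice; random here
y_edge = Phi @ x

print(y_conv, y_edge, sep="\n")
```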
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.