Transferability Properties of Graph Neural Networks
- URL: http://arxiv.org/abs/2112.04629v4
- Date: Mon, 7 Aug 2023 21:06:18 GMT
- Title: Transferability Properties of Graph Neural Networks
- Authors: Luana Ruiz, Luiz F. O. Chamon, Alejandro Ribeiro
- Abstract summary: Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) that graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity.
- Score: 125.71771240180654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) are composed of layers consisting of graph
convolutions and pointwise nonlinearities. Due to their invariance and
stability properties, GNNs are provably successful at learning representations
from data supported on moderate-scale graphs. However, they are difficult to
learn on large-scale graphs. In this paper, we study the problem of training
GNNs on graphs of moderate size and transferring them to large-scale graphs. We
use graph limits called graphons to define limit objects for graph filters and
GNNs -- graphon filters and graphon neural networks (WNNs) -- which we
interpret as generative models for graph filters and GNNs. We then show that
graphon filters and WNNs can be approximated by graph filters and GNNs sampled
from them on weighted and stochastic graphs. Because the error of these
approximations can be upper bounded, by a triangle inequality argument we can
further bound the error of transferring a graph filter or a GNN across graphs.
Our results show that (i) the transference error decreases with the graph size,
and (ii) that graph filters have a transferability-discriminability tradeoff
that in GNNs is alleviated by the scattering behavior of the nonlinearity.
These findings are demonstrated empirically in a movie recommendation problem
and in a decentralized control task.
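The sampling construction behind these bounds is easy to illustrate. The sketch below is a minimal numpy example and not the authors' code: the graphon, the graphon signal, the filter order, and the filter taps are all arbitrary illustrative choices. It samples weighted graphs of increasing size from one graphon, applies the same polynomial graph filter to signals induced by one graphon signal, and compares the output against the response on a much denser reference graph (a stand-in for the graphon filter). Consistent with result (i), the discrepancy tends to shrink as the graph grows.

```python
import numpy as np

# Arbitrary illustrative graphon W(u, v) = exp(-|u - v|) on [0, 1]^2.
def graphon(u, v):
    return np.exp(-np.abs(u[:, None] - v[None, :]))

# Graphon signal X(u): the common "data" every sampled graph inherits.
def graphon_signal(u):
    return np.cos(2 * np.pi * u)

def sample_graph(n, rng):
    """Sample n latent points and return the induced weighted shift operator and signal."""
    u = np.sort(rng.uniform(size=n))
    A = graphon(u, u)
    np.fill_diagonal(A, 0.0)
    return A / n, graphon_signal(u), u   # 1/n normalization matches the graphon operator

def graph_filter(S, x, h):
    """Polynomial graph filter H(S)x = sum_k h[k] S^k x with fixed taps h."""
    y, z = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * z
        z = S @ z
    return y

rng = np.random.default_rng(0)
h = np.array([1.0, 0.8, 0.5, 0.2])       # filter taps, shared across all graph sizes

# Dense "reference" graph as a stand-in for the graphon filter itself.
S_ref, x_ref, u_ref = sample_graph(4000, rng)
y_ref = graph_filter(S_ref, x_ref, h)

# Transfer the same filter to graphs of increasing size sampled from the same graphon.
for n in (50, 200, 800):
    S, x, u = sample_graph(n, rng)
    y = graph_filter(S, x, h)
    err = np.max(np.abs(y - np.interp(u, u_ref, y_ref)))
    print(f"n = {n:4d}   max transference error = {err:.4f}")
```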
Related papers
- Graph Neural Networks Use Graphs When They Shouldn't [29.686091109844746]
Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data.
We show that GNNs actually tend to overfit the given graph-structure.
arXiv Detail & Related papers (2023-09-08T13:59:18Z)
- Transferability of Graph Neural Networks using Graphon and Sampling Theories [0.0]
Graph neural networks (GNNs) have become powerful tools for processing graph-based information in various domains.
A desirable property of GNNs is transferability, where a network trained on one graph can be applied to a different graph without retraining and retain its accuracy.
We contribute to the application of graphons to GNNs by presenting an explicit two-layer graphon neural network (WNN) architecture.
arXiv Detail & Related papers (2023-07-25T02:11:41Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- KerGNNs: Interpretable Graph Neural Networks with Graph Kernels [14.421535610157093]
Graph neural networks (GNNs) have become the state-of-the-art method in downstream graph-related tasks.
We propose a novel GNN framework, termed Kernel Graph Neural Networks (KerGNNs).
KerGNNs integrate graph kernels into the message passing process of GNNs.
We show that our method achieves competitive performance compared with existing state-of-the-art methods.
arXiv Detail & Related papers (2022-01-03T06:16:30Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs sampled from the graphon via Bernoulli sampling.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
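A rough sketch of that schedule follows, under heavy assumptions: this is not the paper's algorithm, and the graphon, the Bernoulli-sampled graphs, the synthetic regression target, and the plain gradient-descent loop are all placeholders. What it illustrates is that the learnable filter taps are independent of the graph size, so the same parameters carry over as the sampled graph grows.

```python
import numpy as np

# Arbitrary illustrative graphon used as the generative model for all training graphs.
def graphon(u, v):
    return 0.8 - 0.6 * np.abs(u[:, None] - v[None, :])

def sample_graph(n, rng):
    """Bernoulli-sample an unweighted graph of size n from the graphon."""
    u = rng.uniform(size=n)
    A = (rng.uniform(size=(n, n)) < graphon(u, u)).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                              # undirected, no self-loops
    return A / n, u                          # normalized shift operator

def filter_features(S, x, K):
    """Stack [x, Sx, ..., S^(K-1)x]; the filter output is then Z @ h, linear in the taps h."""
    cols, z = [], x.copy()
    for _ in range(K):
        cols.append(z)
        z = S @ z
    return np.stack(cols, axis=1)

rng = np.random.default_rng(1)
K = 4
h = np.zeros(K)                              # learnable taps: independent of graph size
h_true = np.array([1.0, -0.5, 0.25, 0.1])    # taps of a synthetic "teacher" filter

# Increase-and-conquer-style schedule: keep the same parameters, grow the sampled graph.
for n in (64, 128, 256, 512):
    S, u = sample_graph(n, rng)
    x = np.cos(2 * np.pi * u)                # graph signal induced by a graphon signal
    Z = filter_features(S, x, K)
    y = Z @ h_true                           # regression target from the teacher filter
    for _ in range(300):                     # a few plain gradient steps per stage
        h -= 0.2 * (2.0 / n) * Z.T @ (Z @ h - y)
    print(f"n = {n:3d}   training MSE = {np.mean((Z @ h - y) ** 2):.2e}")
```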
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
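The permutation-equivariance property is concrete enough to verify numerically: relabeling the nodes of a graph and permuting the input signal in the same way permutes the output of a polynomial graph filter identically. A minimal self-contained check, where the random graph and the filter taps are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 30, 4

# Arbitrary illustrative random undirected graph, input signal, and filter taps.
A = (rng.uniform(size=(n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T
x = rng.standard_normal(n)
h = rng.standard_normal(K)

def graph_filter(S, x, h):
    """Polynomial graph filter H(S)x = sum_k h[k] S^k x."""
    y, z = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * z
        z = S @ z
    return y

# Relabel the nodes with a random permutation matrix P.
perm = rng.permutation(n)
P = np.eye(n)[perm]

y = graph_filter(A, x, h)
y_perm = graph_filter(P @ A @ P.T, P @ x, h)   # same filter on the relabeled graph

# Equivariance: filtering the relabeled graph equals relabeling the filtered output.
print(np.allclose(y_perm, P @ y))              # True
```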
arXiv Detail & Related papers (2020-03-08T13:02:15Z)