Transferability of Graph Neural Networks: an Extended Graphon Approach
- URL: http://arxiv.org/abs/2109.10096v1
- Date: Tue, 21 Sep 2021 10:59:01 GMT
- Title: Transferability of Graph Neural Networks: an Extended Graphon Approach
- Authors: Sohir Maskey, Ron Levie and Gitta Kutyniok
- Abstract summary: We study spectral graph convolutional neural networks (GCNNs). In this paper, we consider a model of transferability based on graphon analysis.
- Score: 3.042325619220694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study spectral graph convolutional neural networks (GCNNs), where filters
are defined as continuous functions of the graph shift operator (GSO) through
functional calculus. A spectral GCNN is not tailored to one specific graph and
can be transferred between different graphs. It is hence important to study the
GCNN transferability: the capacity of the network to have approximately the
same effect on different graphs that represent the same phenomenon.
Transferability ensures that GCNNs trained on certain graphs generalize if the
graphs in the test set represent the same phenomena as the graphs in the
training set. In this paper, we consider a model of transferability based on
graphon analysis. Graphons are limit objects of graphs, and, in the graph
paradigm, two graphs represent the same phenomenon if both approximate the same
graphon. Our main contributions can be summarized as follows: 1) we prove that
any fixed GCNN with continuous filters is transferable under graphs that
approximate the same graphon, 2) we prove transferability for graphs that
approximate unbounded graphon shift operators, which are defined in this paper,
and, 3) we obtain non-asymptotic approximation results, proving linear
stability of GCNNs. This extends current state-of-the-art results which show
asymptotic transferability for polynomial filters under graphs that approximate
bounded graphons.
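As a concrete illustration of the setup above, the following is a minimal NumPy sketch, not the paper's construction: it discretizes a simple exponential graphon on regular grids of two different sizes, applies one continuous filter to the graph shift operator of each graph via functional calculus, and compares the two outputs as step functions on [0, 1]. The graphon W, the filter response g, the signal f, and the GSO normalization A/n are all illustrative assumptions.

```python
import numpy as np

def graphon_graph(W, n):
    """Deterministic weighted graph approximating the graphon W on a regular grid."""
    x = (np.arange(n) + 0.5) / n          # grid points in [0, 1]
    A = W(x[:, None], x[None, :])         # weighted adjacency A_ij = W(x_i, x_j)
    np.fill_diagonal(A, 0.0)
    return A, x

def apply_filter(A, g, signal):
    """Functional calculus: compute g(S) signal for a normalized graph shift operator S."""
    S = A / A.shape[0]                    # one common GSO normalization (assumption)
    lam, V = np.linalg.eigh(S)            # spectral decomposition S = V diag(lam) V^T
    return V @ (g(lam) * (V.T @ signal))

W = lambda u, v: np.exp(-np.abs(u - v))   # a smooth graphon (illustrative choice)
g = lambda lam: lam / (1.0 + lam ** 2)    # a continuous filter response (illustrative choice)
f = lambda u: np.cos(2 * np.pi * u)       # graphon signal sampled on both graphs

A1, x1 = graphon_graph(W, 200)            # small graph
A2, x2 = graphon_graph(W, 400)            # larger graph approximating the same graphon
y1 = apply_filter(A1, g, f(x1))
y2 = apply_filter(A2, g, f(x2))

# Compare outputs as step functions on [0, 1]; a small error is transferability.
err = np.linalg.norm(np.repeat(y1, 2) - y2) / np.sqrt(400)
print(f"L2 discrepancy between filter outputs: {err:.4f}")
```

Because g is continuous and both graphs approximate the same graphon, the discrepancy shrinks as the graphs grow, which is the qualitative content of the transferability bounds.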
Related papers
- Theoretical Insights into Line Graph Transformation on Graph Learning [3.0574700762497744]
Line graph transformation has been widely studied in graph theory, where each node in a line graph corresponds to an edge in the original graph.
This has inspired a series of graph neural networks (GNNs) applied to transformed line graphs, which have proven effective in various graph representation learning tasks.
In this study, we focus on two types of graphs known to be challenging to the Weisfeiler-Leman (WL) tests: Cai-Fürer-Immerman (CFI) graphs and strongly regular graphs.
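For concreteness, here is a minimal sketch of the line graph transformation itself, using networkx on an arbitrary example graph (the paper's experimental setup is not shown here):

```python
import networkx as nx

# An arbitrary example graph: a 4-cycle with one chord.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])

# Line graph L(G): one node per edge of G; two nodes of L(G) are adjacent
# iff the corresponding edges of G share an endpoint.
L = nx.line_graph(G)

print(sorted(L.nodes()))      # nodes of L(G) are the edges of G
print(L.number_of_edges())    # adjacencies between edges sharing an endpoint
# A GNN applied to L(G) effectively passes messages between the edges of G.
```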
arXiv Detail & Related papers (2024-10-21T16:04:50Z)
- Graphon Pooling for Reducing Dimensionality of Signals and Convolutional Operators on Graphs [131.53471236405628]
We present three methods that exploit the induced graphon representation of graphs and graph signals on partitions of [0, 1]^2 in the graphon space.
We prove that those low dimensional representations constitute a convergent sequence of graphs and graph signals.
We observe that graphon pooling performs significantly better than other approaches proposed in the literature when dimensionality reduction ratios between layers are large.
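A minimal sketch of the underlying idea, under strong simplifying assumptions (known latent node positions in [0, 1] and a regular k-part partition; the paper's construction and its convergence guarantees are more general):

```python
import numpy as np

def graphon_pool(A, x, k):
    """Pool an n-node graph to k nodes by averaging A over partition cells.

    Each entry of the pooled matrix is the mean of A over one cell of a
    regular k x k partition of [0, 1]^2, i.e. the induced step graphon.
    """
    bins = np.minimum((x * k).astype(int), k - 1)   # partition cell of each node
    pooled = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            block = A[np.ix_(bins == i, bins == j)]
            pooled[i, j] = block.mean() if block.size else 0.0
    return pooled

rng = np.random.default_rng(1)
x = rng.uniform(size=300)                           # latent positions (assumed known)
P = np.outer(x, x)                                  # edge probabilities from W(u, v) = uv
A = (rng.uniform(size=(300, 300)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                      # simple undirected graph

A10 = graphon_pool(A, x, k=10)                      # 300-node graph -> 10-node graph
print(A10.round(2))
```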
arXiv Detail & Related papers (2022-12-15T22:11:34Z)
- Stable and Transferable Hyper-Graph Neural Networks [95.07035704188984]
We introduce an architecture for processing signals supported on hypergraphs via graph neural networks (GNNs).
We provide a framework for bounding the stability and transferability error of GNNs across arbitrary graphs via spectral similarity.
arXiv Detail & Related papers (2022-11-11T23:44:20Z)
- Transferability Properties of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) graph filters exhibit a transferability-discriminability tradeoff that, in GNNs, is alleviated by the scattering behavior of the nonlinearity.
arXiv Detail & Related papers (2021-12-09T00:08:09Z)
- Graph and graphon neural network stability [122.06927400759021]
Graph neural networks (GNNs) are learning architectures that rely on knowledge of the graph structure to generate meaningful representations of network data.
We analyze GNN stability using kernel objects called graphons.
arXiv Detail & Related papers (2020-10-23T16:55:56Z)
- Graphon Neural Networks and the Transferability of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) rely on graph convolutions to extract local features from network data.
We introduce graphon NNs as limit objects of GNNs and prove a bound on the difference between the output of a GNN and its limit graphon-NN.
This result establishes a tradeoff between discriminability and transferability of GNNs.
arXiv Detail & Related papers (2020-06-05T16:41:08Z)
- Graphon Pooling in Graph Neural Networks [169.09536309161314]
Graph neural networks (GNNs) have been used effectively in different applications involving the processing of signals on irregular structures modeled by graphs.
We propose a new strategy for pooling and sampling on GNNs using graphons which preserves the spectral properties of the graph.
arXiv Detail & Related papers (2020-03-03T21:04:20Z)
- The Power of Graph Convolutional Networks to Distinguish Random Graph Models: Short Version [27.544219236164764]
Graph convolutional networks (GCNs) are a widely used method for graph representation learning.
We investigate the power of GCNs to distinguish between different random graph models on the basis of the embeddings of their sample graphs.
arXiv Detail & Related papers (2020-02-13T17:58:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.