Towards Scale-Invariant Graph-related Problem Solving by Iterative
Homogeneous Graph Neural Networks
- URL: http://arxiv.org/abs/2010.13547v1
- Date: Mon, 26 Oct 2020 12:57:28 GMT
- Title: Towards Scale-Invariant Graph-related Problem Solving by Iterative
Homogeneous Graph Neural Networks
- Authors: Hao Tang, Zhiao Huang, Jiayuan Gu, Bao-Liang Lu, Hao Su
- Abstract summary: Current graph neural networks (GNNs) lack generalizability with respect to scales (graph sizes, graph diameters, edge weights, etc.) when solving many graph analysis problems.
We propose several extensions to address the issue. First, inspired by the dependency of the iteration number of common graph theory algorithms on graph size, we learn to terminate the message passing process in GNNs adaptively according to the computation progress.
Second, inspired by the fact that many graph theory algorithms are homogeneous with respect to graph weights, we introduce homogeneous transformation layers, which are universal homogeneous function approximators, to convert ordinary GNNs to be homogeneous.
- Score: 39.370875358317946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current graph neural networks (GNNs) lack generalizability with respect to
scales (graph sizes, graph diameters, edge weights, etc.) when solving many
graph analysis problems. Taking the perspective of synthesizing graph theory
programs, we propose several extensions to address the issue. First, inspired
by the dependency of the iteration number of common graph theory algorithms on
graph size, we learn to terminate the message passing process in GNNs
adaptively according to the computation progress. Second, inspired by the fact
that many graph theory algorithms are homogeneous with respect to graph
weights, we introduce homogeneous transformation layers that are universal
homogeneous function approximators, to convert ordinary GNNs to be homogeneous.
Experimentally, we show that our GNN can be trained from small-scale graphs but
generalize well to large-scale graphs for a number of basic graph theory
problems. It also shows generalizability in applications such as multi-body
physical simulation and image-based navigation.
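The two extensions are concrete enough to sketch in code. Below is a minimal PyTorch sketch, not the authors' implementation: the `halt` head, the max-readout, and the fixed threshold are illustrative stand-ins for the paper's learned termination criterion, and the docstring notes the homogeneity ingredient that the paper's dedicated transformation layers generalize.

```python
import torch
import torch.nn as nn

class IterativeGNN(nn.Module):
    """Minimal sketch: message passing with a learned stopping criterion.

    Bias-free linear layers and ReLU both commute with positive scaling
    of their inputs (f(a*x) = a*f(x) for a > 0) -- the basic ingredient
    that the paper's homogeneous transformation layers build on so the
    whole network scales linearly with the edge weights.
    """

    def __init__(self, dim: int, max_iters: int = 100, threshold: float = 0.5):
        super().__init__()
        self.msg = nn.Linear(dim, dim, bias=False)
        self.upd = nn.Linear(2 * dim, dim, bias=False)
        self.halt = nn.Linear(dim, 1)  # termination head (illustrative)
        self.max_iters = max_iters
        self.threshold = threshold

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (n, dim) node states; adj: (n, n) weighted adjacency matrix.
        for _ in range(self.max_iters):
            m = adj @ torch.relu(self.msg(h))                  # aggregate messages
            h = torch.relu(self.upd(torch.cat([h, m], dim=-1)))
            readout = h.max(dim=0).values                      # graph-level state
            if torch.sigmoid(self.halt(readout)) > self.threshold:
                break  # adaptive termination: stop once "done" is predicted
        return h
```

If the whole network is homogeneous of degree one in the edge weights, then scaling all weights by a factor scales the output (e.g. a predicted shortest-path length) by the same factor, which is exactly the scale-invariance the paper is after.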
Related papers
- Generalization of Geometric Graph Neural Networks [84.01980526069075]
We study the generalization capabilities of geometric graph neural networks (GNNs).
We prove a generalization gap between the optimal empirical risk and the optimal statistical risk of this GNN.
The most important observation is that this generalization capability can be realized with a single large graph, instead of being limited by the graph size as in previous results.
arXiv Detail & Related papers (2024-09-08T18:55:57Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Geometric instability of graph neural networks on large graphs [0.0]
We analyse the geometric instability of embeddings produced by graph neural networks (GNNs).
We propose a simple, efficient and graph-native Graph Gram Index (GGI) to measure such instability.
This allows us to study the varying instability behaviour of GNN embeddings on large graphs for both node classification and link prediction.
arXiv Detail & Related papers (2023-08-19T20:10:54Z)
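The entry above does not define the Graph Gram Index, and the sketch below is not it; it only illustrates the general Gram-matrix idea it builds on: Gram matrices of embeddings are invariant to orthogonal transformations of the embedding space, so correlating them compares the geometry of two training runs without aligning them first.

```python
import numpy as np

def gram_correlation(X: np.ndarray, Y: np.ndarray) -> float:
    """Rotation-invariant similarity of two (n, d) embedding matrices.

    X @ X.T is unchanged when X is right-multiplied by any orthogonal
    matrix, so correlating the two Gram matrices gives a simple
    instability proxy across runs. Illustrative only; the paper's
    Graph Gram Index (GGI) is defined differently.
    """
    gx = X @ X.T
    gy = Y @ Y.T
    gx = gx - gx.mean()
    gy = gy - gy.mean()
    return float((gx * gy).sum() / (np.linalg.norm(gx) * np.linalg.norm(gy)))
```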
- Learning Graph Algorithms With Recurrent Graph Neural Networks [8.873449722727026]
We focus on a recurrent architecture design that can learn simple graph problems end to end on smaller graphs and then extrapolate to larger instances.
We use (i) skip connections, (ii) state regularization, and (iii) edge convolutions to guide GNNs toward extrapolation.
arXiv Detail & Related papers (2022-12-09T15:42:22Z)
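As a rough illustration of the recipe above (assumed details; the paper's exact architecture differs), a single weight-shared layer can be applied T times, with messages conditioned on edge weights and a skip connection that feeds the raw inputs into every update. State regularization would be an additional loss term on `h`, omitted here.

```python
import torch
import torch.nn as nn

class RecurrentGNN(nn.Module):
    """One shared message-passing layer applied T times (weight tying)."""

    def __init__(self, dim: int):
        super().__init__()
        # Edge convolution: message depends on both endpoints and the edge weight.
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.ReLU())
        # GRU update keeps a persistent node state across iterations.
        self.update = nn.GRUCell(2 * dim, dim)

    def forward(self, x, edge_index, edge_weight, T: int = 10):
        # x: (n, dim) inputs; edge_index: (2, m) ints; edge_weight: (m,)
        src, dst = edge_index
        h = x
        for _ in range(T):
            msg = self.edge_mlp(
                torch.cat([h[src], h[dst], edge_weight.unsqueeze(-1)], dim=-1))
            agg = torch.zeros_like(h).index_add_(0, dst, msg)
            # Skip connection: every update also sees the raw input features x.
            h = self.update(torch.cat([agg, x], dim=-1), h)
        return h
```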
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
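The graphon limit object above has a standard sampling procedure that is easy to sketch: draw latent node positions uniformly on [0, 1] and connect each pair independently with probability W(u_i, u_j). The example graphon at the bottom is an arbitrary choice for illustration.

```python
import numpy as np

def sample_from_graphon(W, n: int, rng=None) -> np.ndarray:
    """Bernoulli-sample an n-node simple graph from a graphon W: [0,1]^2 -> [0,1]."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n)                      # latent node positions
    probs = W(u[:, None], u[None, :])            # edge probabilities W(u_i, u_j)
    upper = np.triu(rng.uniform(size=(n, n)) < probs, k=1)
    return (upper | upper.T).astype(float)       # symmetric, no self-loops

# Example graphon: connection probability decays with latent distance.
A = sample_from_graphon(lambda x, y: 0.5 * np.exp(-np.abs(x - y)), n=200)
```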
- FoSR: First-order spectral rewiring for addressing oversquashing in GNNs [0.0]
Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph.
We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph.
We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
arXiv Detail & Related papers (2022-10-21T07:58:03Z)
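The "systematically adding edges" step above can be illustrated with first-order perturbation theory (a simplified take using the combinatorial Laplacian; FoSR itself works with the normalized adjacency and a faster update): adding edge (i, j) increases the spectral gap by roughly (f_i - f_j)^2 for the Fiedler vector f, so a greedy rewirer adds the non-edge with the largest such gain.

```python
import numpy as np

def rewire_spectral(A: np.ndarray, k: int) -> np.ndarray:
    """Greedily add k edges, each maximizing the first-order increase in
    the Laplacian spectral gap. Simplified illustration, not FoSR itself."""
    A = A.copy()
    for _ in range(k):
        L = np.diag(A.sum(axis=1)) - A
        f = np.linalg.eigh(L)[1][:, 1]         # Fiedler vector (lambda_2)
        gain = (f[:, None] - f[None, :]) ** 2  # first-order gap increase
        gain[A > 0] = -np.inf                  # skip existing edges
        np.fill_diagonal(gain, -np.inf)        # skip self-loops
        i, j = np.unravel_index(np.argmax(gain), gain.shape)
        A[i, j] = A[j, i] = 1.0
    return A
```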
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
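The unrolling idea above is worth a sketch. Assuming, purely for illustration, a two-term mixture A_obs ≈ S + S², one unrolled layer is a gradient step on the data-fidelity term followed by a learnable soft-threshold (the proximal operator of an L1 sparsity prior); GDN's actual parameterization is richer.

```python
import torch
import torch.nn as nn

class GDNLayer(nn.Module):
    """One unrolled proximal-gradient step for graph deconvolution (sketch)."""

    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))     # learned step size
        self.thresh = nn.Parameter(torch.tensor(0.05))  # learned sparsity threshold

    def forward(self, S: torch.Tensor, A_obs: torch.Tensor) -> torch.Tensor:
        # Gradient of 0.5 * ||S + S@S - A_obs||_F^2 with respect to S.
        R = S + S @ S - A_obs
        grad = R + R @ S.T + S.T @ R
        S = S - self.step * grad
        # Proximal step: soft-thresholding encourages a sparse latent graph.
        return torch.sign(S) * torch.relu(S.abs() - self.thresh)
```

Stacking a few such layers and training the step sizes and thresholds end to end is the standard unrolling pattern; truncating the iterations keeps inference cost fixed.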
- The Exact Class of Graph Functions Generated by Graph Neural Networks [43.25172578943894]
Given a graph function, is there a Graph Neural Network (GNN) whose output is identical to it?
In this paper, we fully answer this question and characterize the class of graph problems that can be represented by GNNs.
We show that this condition can be efficiently verified by checking quadratically many constraints.
arXiv Detail & Related papers (2022-02-17T18:54:27Z)
- Transferability Properties of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) that graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity.
arXiv Detail & Related papers (2021-12-09T00:08:09Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
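The growth schedule above can be sketched on top of the graphon sampler given earlier (the `train_step` callback, the doubling factor, and the fixed step budget per size are placeholders; the paper ties growth to optimization criteria rather than a fixed schedule).

```python
def train_on_growing_graphs(model, W, train_step,
                            n0: int = 50, n_max: int = 2000,
                            growth: float = 2.0, steps_per_size: int = 100):
    """Curriculum sketch: train on graphon samples of increasing size."""
    n = n0
    while n <= n_max:
        for _ in range(steps_per_size):
            A = sample_from_graphon(W, n)  # sampler sketched above
            train_step(model, A)           # user-supplied optimization step
        n = int(n * growth)                # grow the graphs, reuse the weights
```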
- MathNet: Haar-Like Wavelet Multiresolution-Analysis for Graph Representation and Learning [31.42901131602713]
We propose a framework for graph neural networks with multiresolution Haar-like wavelets, or MathNet, with interrelated convolution and pooling strategies.
The proposed MathNet outperforms various existing GNN models, especially on big data sets.
arXiv Detail & Related papers (2020-07-22T05:00:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.