Learning Graph Algorithms With Recurrent Graph Neural Networks
- URL: http://arxiv.org/abs/2212.04934v1
- Date: Fri, 9 Dec 2022 15:42:22 GMT
- Title: Learning Graph Algorithms With Recurrent Graph Neural Networks
- Authors: Florian Grötschla, Joël Mathys, Roger Wattenhofer
- Abstract summary: We focus on a recurrent architecture design that can learn simple graph problems end to end on smaller graphs and then extrapolate to larger instances.
We use (i) skip connections, (ii) state regularization, and (iii) edge convolutions to guide GNNs toward extrapolation.
- Score: 8.873449722727026
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Classical graph algorithms work well for combinatorial problems that can be
thoroughly formalized and abstracted. Once the algorithm is derived, it
generalizes to instances of any size. However, developing an algorithm that
handles complex structures and interactions in the real world can be
challenging. Rather than specifying the algorithm, we can try to learn it from
the graph-structured data. Graph Neural Networks (GNNs) are inherently capable
of working on graph structures; however, they struggle to generalize well, and
learning on larger instances is challenging. In order to scale, we focus on a
recurrent architecture design that can learn simple graph problems end to end
on smaller graphs and then extrapolate to larger instances. As our main
contribution, we identify three essential techniques for recurrent GNNs to
scale. By using (i) skip connections, (ii) state regularization, and (iii) edge
convolutions, we can guide GNNs toward extrapolation. This allows us to train
on small graphs and apply the same model to much larger graphs during
inference. Moreover, we empirically validate the extrapolation capabilities of
our GNNs on algorithmic datasets.
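The three techniques are easy to see in code. Below is a minimal sketch in plain PyTorch with a dense adjacency matrix; the GRU update, the L2 form of the state regularizer, and all layer sizes are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class RecurrentEdgeConvGNN(nn.Module):
    """One shared message-passing block applied for `steps` iterations.

    Illustrative sketch of the three techniques from the abstract:
    (i) a skip connection that feeds the initial encoding into every step,
    (ii) a state-regularization penalty (assumed L2) that keeps node states
    bounded over long rollouts, and (iii) an edge convolution whose messages
    depend on both endpoint states, not just the sender's.
    """

    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.encode = nn.Linear(in_dim, hidden_dim)
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU())
        self.update = nn.GRUCell(2 * hidden_dim, hidden_dim)

    def forward(self, x, adj, steps):
        h0 = self.encode(x)                   # skip-connection source
        h, reg = h0, x.new_zeros(())
        for _ in range(steps):                # same weights at every step
            n = h.size(0)
            hi = h.unsqueeze(1).expand(n, n, -1)   # state of node i
            hj = h.unsqueeze(0).expand(n, n, -1)   # state of node j
            msg = self.edge_mlp(torch.cat([hi, hj], dim=-1))
            msg = (adj.unsqueeze(-1) * msg).sum(dim=1)  # aggregate neighbors
            h = self.update(torch.cat([msg, h0], dim=-1), h)  # (i) skip via h0
            reg = reg + h.pow(2).mean()       # (ii) state regularization term
        return h, reg
```

Because the weights are shared across steps, the iteration count can be raised at inference time, e.g. in proportion to the diameter of the test graph, which is what makes training on small instances and extrapolating to larger ones plausible.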
Related papers
- Learning on Large Graphs using Intersecting Communities [13.053266613831447]
MPNNs iteratively update each node's representation in an input graph by aggregating messages from the node's neighbors.
MPNNs can quickly become prohibitive for large graphs unless the graphs are very sparse.
We propose approximating the input graph as an intersecting community graph (ICG) -- a combination of intersecting cliques.
arXiv Detail & Related papers (2024-05-31T09:26:26Z)
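The efficiency claim is easiest to see in code: once the graph is approximated by k intersecting cliques, neighbor aggregation factorizes through community membership and costs O(nk) rather than O(n^2). A toy sketch under the assumption of a given 0/1 membership matrix (fitting the ICG itself is the paper's contribution and is not shown):

```python
import torch

def clique_aggregate(x, membership):
    """Neighbor-sum on an intersecting community graph (ICG) approximation.

    x: (n, d) node features; membership: (n, k) 0/1 matrix where
    membership[i, c] = 1 iff node i belongs to clique c. Every pair inside
    a clique is connected, so aggregation runs through the k communities.
    """
    per_clique = membership.t() @ x   # (k, d) sum of member features
    agg = membership @ per_clique     # (n, d) sum over each node's cliques
    # Remove each node's own contribution, once per clique it belongs to.
    return agg - membership.sum(dim=1, keepdim=True) * x
```

Note that a pair of nodes sharing several cliques is counted with multiplicity; keeping such approximation error small relative to the true adjacency is exactly what fitting the ICG has to manage.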
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
Efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method on various tasks, including node classification.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
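The all-pair scheme can be written down directly in its quadratic reference form; the sketch below uses `torch.nn.functional.gumbel_softmax` for the differentiable neighbor sampling. NodeFormer's actual point is a kernelized approximation that avoids materializing the (n, n) score matrix, which this sketch deliberately does not reproduce; the projection matrices are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def all_pair_messages(h, w_q, w_k, w_v, tau=0.5):
    """All-pair message passing with Gumbel-Softmax neighbor sampling.

    Every node attends over all nodes, so the latent graph structure is
    learned jointly with the task instead of being fixed by input edges.
    This dense version costs O(n^2); the paper's kernelization brings the
    same computation down to (near-)linear cost.
    """
    q, k, v = h @ w_q, h @ w_k, h @ w_v
    logits = q @ k.t() / q.size(-1) ** 0.5   # (n, n) pairwise scores
    weights = F.gumbel_softmax(logits, tau=tau, dim=-1)  # soft sampled edges
    return weights @ v
```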
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- FoSR: First-order spectral rewiring for addressing oversquashing in GNNs [0.0]
Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph.
We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph.
We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
arXiv Detail & Related papers (2022-10-21T07:58:03Z)
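A sketch of the edge-adding idea, using one simple spectral proxy: connect the non-adjacent pair whose entries in the second eigenvector of the normalized adjacency are most opposed, which to first order widens the spectral gap. This mirrors the flavor of first-order spectral rewiring but is not the paper's exact update rule.

```python
import torch

def add_rewiring_edges(adj, num_edges):
    """Greedily add edges to a dense symmetric 0/1 adjacency matrix."""
    adj = adj.clone()
    for _ in range(num_edges):
        d_inv = adj.sum(dim=1).clamp(min=1.0).rsqrt()
        norm_adj = d_inv[:, None] * adj * d_inv[None, :]
        _, evecs = torch.linalg.eigh(norm_adj)   # eigenvalues ascending
        x = evecs[:, -2]                         # second-largest eigenvector
        score = x[:, None] * x[None, :]          # x_u * x_v for all pairs
        score = score.masked_fill(adj > 0, float("inf"))  # skip existing edges
        score.fill_diagonal_(float("inf"))                # and self-loops
        u, v = divmod(torch.argmin(score).item(), adj.size(0))
        adj[u, v] = adj[v, u] = 1.0              # bridge the bottleneck
    return adj
```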
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
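The "unroll and truncate" recipe is short to sketch: each layer is one proximal-gradient step, with the step size and sparsity threshold learned per layer. The degree-2 polynomial used for the convolutional mixture below is an illustrative assumption, as is the L1 prox (soft-thresholding); the paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn

class GraphDeconvolutionNet(nn.Module):
    """Truncated, unrolled proximal gradient for graph deconvolution.

    Fits a latent graph S such that h(S) = S + S^2 matches the observed
    adjacency A, with an L1 penalty promoting sparse edges. Each unrolled
    layer does: gradient step on the fit term, then soft-threshold.
    """

    def __init__(self, num_layers=5):
        super().__init__()
        self.steps = nn.Parameter(torch.full((num_layers,), 0.1))
        self.thresholds = nn.Parameter(torch.full((num_layers,), 0.01))

    def forward(self, a_obs):
        s = a_obs.clone()                 # initialize latent graph estimate
        for step, thr in zip(self.steps, self.thresholds):
            r = (s + s @ s) - a_obs       # residual of the mixture model
            grad = r + r @ s + s @ r      # gradient of 0.5 * ||h(S) - A||^2
            s = s - step * grad
            s = torch.relu(s.abs() - thr) * s.sign()   # prox of L1 penalty
            s = 0.5 * (s + s.t())         # keep the estimate symmetric
        return s
```

Since the task only enters through the training loss, swapping it covers link prediction or edge-weight regression without touching the unrolled architecture, which is the inductive point made above.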
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
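A compact sketch of the curriculum: Bernoulli-sample graphs from a graphon, train for a while, then grow n. The toy self-supervised target (degree regression), the epoch counts, and the `model(x, adj)` interface are all assumptions; only the growing-size schedule is the point.

```python
import torch
import torch.nn.functional as F

def sample_from_graphon(w, n):
    """Draw an n-node graph: edge (i, j) is Bernoulli(w(u_i, u_j))."""
    u = torch.rand(n)
    probs = w(u[:, None], u[None, :])
    upper = torch.bernoulli(probs.triu(1))   # sample each pair once
    return upper + upper.t()

def train_on_growing_graphs(model, w, sizes=(50, 100, 200, 400)):
    opt = torch.optim.Adam(model.parameters())
    for n in sizes:                          # successively larger graphs
        for _ in range(100):                 # assumed epochs per stage
            adj = sample_from_graphon(w, n)
            x = torch.ones(n, 1)             # featureless placeholder input
            target = adj.sum(dim=1, keepdim=True)   # toy task: node degrees
            loss = F.mse_loss(model(x, adj), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

Any broadcasting map into [0, 1] works as the graphon here, e.g. `w = lambda a, b: 0.1 + 0.4 * a * b`.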
- Towards Scale-Invariant Graph-related Problem Solving by Iterative Homogeneous Graph Neural Networks [39.370875358317946]
Current graph neural networks (GNNs) lack generalizability with respect to scales (graph sizes, graph diameters, edge weights, etc.) when solving many graph analysis problems.
We propose several extensions to address the issue. First, inspired by the fact that the number of iterations of common graph theory algorithms depends on graph size, we learn to terminate the message passing process in GNNs adaptively according to its progress.
Second, inspired by the fact that many graph theory algorithms are homogeneous with respect to graph weights, we introduce homogeneous transformation layers, which are universal homogeneous function approximators, to convert ordinary GNNs into homogeneous ones.
arXiv Detail & Related papers (2020-10-26T12:57:28Z)
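The first extension, adaptive termination, can be caricatured in a few lines. The fixed-tolerance stopping rule below is a stand-in assumption; the paper learns when to stop from the algorithm's progress rather than thresholding it.

```python
import torch

def iterate_until_stable(step_fn, h, max_steps=100, tol=1e-4):
    """Apply a shared message-passing step until node states stabilize.

    Because the iteration count now tracks the instance rather than the
    training distribution, the same weights can simply run longer on
    larger-diameter graphs.
    """
    for _ in range(max_steps):
        h_next = step_fn(h)
        if (h_next - h).norm() < tol * h.norm().clamp(min=1e-12):
            return h_next
        h = h_next
    return h
```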
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations with similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
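The contrastive objective is the standard two-view, in-batch-negatives setup; a sketch of an NT-Xent-style loss follows. The encoder and the augmentations (node drops, edge perturbations, and the like) live outside this snippet, and the temperature is an illustrative default.

```python
import torch
import torch.nn.functional as F

def graph_contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent-style loss between two augmented views of the same graphs.

    z1[i] and z2[i] embed two augmentations of graph i: matching pairs are
    pulled together while every other graph in the batch acts as a negative.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (B, B) cosine similarities
    targets = torch.arange(z1.size(0))      # positives sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```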
- From Local Structures to Size Generalization in Graph Neural Networks [53.3202754533658]
Graph neural networks (GNNs) can process graphs of different sizes.
Their ability to generalize across sizes, specifically from small to large graphs, is still not well understood.
arXiv Detail & Related papers (2020-10-17T19:36:54Z)
- Lifelong Graph Learning [6.282881904019272]
We bridge graph learning and lifelong learning by converting a continual graph learning problem to a regular graph learning problem.
We show that feature graph networks (FGN) achieve superior performance in two applications, i.e., lifelong human action recognition with wearable devices and feature matching.
arXiv Detail & Related papers (2020-09-01T18:21:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.