SpeqNets: Sparsity-aware Permutation-equivariant Graph Networks
- URL: http://arxiv.org/abs/2203.13913v1
- Date: Fri, 25 Mar 2022 21:17:09 GMT
- Title: SpeqNets: Sparsity-aware Permutation-equivariant Graph Networks
- Authors: Christopher Morris, Gaurav Rattan, Sandra Kiefer, and Siamak Ravanbakhsh
- Abstract summary: We present a class of universal, permutation-equivariant graph networks.
They offer fine-grained control over the trade-off between expressivity and scalability and adapt to the sparsity of the graph.
These architectures lead to vastly reduced computation times compared to standard higher-order graph networks.
- Score: 16.14454388348814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While (message-passing) graph neural networks have clear limitations in
approximating permutation-equivariant functions over graphs or general
relational data, more expressive, higher-order graph neural networks do not
scale to large graphs. They either operate on $k$-order tensors or consider all
$k$-node subgraphs, implying an exponential dependence on $k$ in memory
requirements, and do not adapt to the sparsity of the graph. By introducing new
heuristics for the graph isomorphism problem, we devise a class of universal,
permutation-equivariant graph networks, which, unlike previous architectures,
offer fine-grained control over the trade-off between expressivity and scalability and adapt to
the sparsity of the graph. These architectures lead to vastly reduced
computation times compared to standard higher-order graph networks in the
supervised node- and graph-level classification and regression regime while
significantly improving over standard graph neural network and graph kernel
architectures in terms of predictive performance.
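The cost argument in the abstract can be made concrete with a toy sketch. The following is an illustration only, not the authors' code (networkx and the function names are assumptions): dense higher-order methods touch all $|V|^k$ node tuples, while a sparsity-aware restriction to tuples whose induced subgraph has at most $s$ connected components prunes most tuples on sparse graphs.

```python
from itertools import product

import networkx as nx  # assumed dependency, used only for this illustration

def dense_k_tuples(G, k):
    """All k-tuples of nodes: |V|^k of them, regardless of sparsity."""
    return list(product(G.nodes, repeat=k))

def sparse_ks_tuples(G, k, s):
    """Keep only k-tuples whose induced subgraph has at most s connected
    components; on sparse graphs this is a small fraction of |V|^k,
    which is where the claimed speed-ups come from."""
    return [t for t in product(G.nodes, repeat=k)
            if nx.number_connected_components(G.subgraph(set(t))) <= s]

G = nx.cycle_graph(30)                   # a very sparse graph
print(len(dense_k_tuples(G, 2)))         # 900 = 30^2 tuples
print(len(sparse_ks_tuples(G, 2, 1)))    # 90: only repeated or adjacent pairs
```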
Related papers
- Geometric instability of graph neural networks on large graphs [0.0]
We analyse the geometric instability of embeddings produced by graph neural networks (GNNs).
We propose a simple, efficient and graph-native Graph Gram Index (GGI) to measure such instability.
This allows us to study the varying instability behaviour of GNN embeddings on large graphs for both node classification and link prediction.
arXiv Detail & Related papers (2023-08-19T20:10:54Z)
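The paper's exact Graph Gram Index is not reproduced here; the following is a hedged sketch of the underlying idea only: comparing Gram matrices of embeddings from two runs is invariant to orthogonal transformations of the embedding space, so it isolates genuinely geometric changes.

```python
import numpy as np

def gram_similarity(X, Y):
    """Cosine similarity of Gram matrices of two embedding matrices.

    X @ X.T is unchanged by any orthogonal transform of the embedding
    space, so a drop in this score reflects geometric instability rather
    than a harmless rotation. Hedged sketch; the paper's GGI may differ.
    """
    Gx, Gy = X @ X.T, Y @ Y.T
    return np.sum(Gx * Gy) / (np.linalg.norm(Gx) * np.linalg.norm(Gy))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))               # embeddings from run 1
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthogonal matrix
print(gram_similarity(X, X @ Q))             # ~1.0: rotation is not instability
print(gram_similarity(X, rng.normal(size=(100, 16))))  # much lower
```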
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
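A hedged sketch of the sampling idea: Gumbel-Softmax noise makes a discrete all-pair edge choice differentiable. NodeFormer additionally kernelizes the softmax with random features so the N x N matrix is never materialized; the naive O(N^2) form below is kept only for clarity and is not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_message_passing(h, tau=0.5):
    """All-pair message passing with Gumbel-Softmax edge weights."""
    scores = h @ h.T / h.shape[-1] ** 0.5               # pairwise logits
    u = torch.rand_like(scores).clamp_min(1e-10)
    g = -torch.log(-torch.log(u))                       # Gumbel noise
    attn = F.softmax((scores + g) / tau, dim=-1)        # differentiable sampling
    return attn @ h                                     # propagate node signals

h = torch.randn(50, 32, requires_grad=True)
out = gumbel_softmax_message_passing(h)
out.sum().backward()    # gradients flow through the sampled structure
```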
- Generalization in Graph Neural Networks: Improved PAC-Bayesian Bounds on Graph Diffusion [17.70434437597516]
We present generalization bounds that scale with the largest singular value of the graph neural network's feature diffusion matrix.
These bounds are numerically much smaller than prior bounds for real-world graphs.
We measure the stability of graph neural networks against noise perturbations using Hessians.
arXiv Detail & Related papers (2023-02-09T05:54:17Z)
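As a concrete handle on the quantity in the bound, here is a hedged sketch (the paper's diffusion matrix may be defined differently) computing the largest singular value of a GCN-style normalized diffusion matrix by power iteration.

```python
import numpy as np

def largest_singular_value(A, iters=100):
    """Largest singular value of a matrix via power iteration on A^T A."""
    v = np.random.default_rng(0).normal(size=A.shape[1])
    for _ in range(iters):
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    return np.linalg.norm(A @ v)

# GCN-style feature diffusion: P = D^{-1/2} (A + I) D^{-1/2}
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
deg = (A + np.eye(3)).sum(1)
P = (A + np.eye(3)) / np.sqrt(np.outer(deg, deg))
print(largest_singular_value(P))   # 1.0 for this normalization
```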
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We posit a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
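A minimal sketch of one unrolled, truncated proximal-gradient step in the spirit of a GDN. The parameterization here is illustrative, not the paper's: fit a latent adjacency A so that a learned two-term polynomial of A matches the observed matrix S, then soft-threshold (the proximal map of an L1 sparsity penalty).

```python
import torch
import torch.nn as nn

class UnrolledDeconvStep(nn.Module):
    """One unrolled proximal-gradient iteration for graph deconvolution.

    Illustrative objective: 0.5 * || a0*A + a1*A@A - S ||_F^2 plus an
    L1 penalty handled by the soft-threshold proximal step.
    """
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))      # learnable step size
        self.thresh = nn.Parameter(torch.tensor(0.01))   # learnable threshold
        self.a = nn.Parameter(torch.tensor([1.0, 0.5]))  # mixture coefficients

    def forward(self, A, S):
        R = self.a[0] * A + self.a[1] * (A @ A) - S             # residual
        grad = self.a[0] * R + self.a[1] * (R @ A.T + A.T @ R)  # gradient in A
        A = A - self.step * grad                                # gradient step
        return torch.relu(A.abs() - self.thresh) * torch.sign(A)  # prox step

# Truncating the iterations gives the network's forward pass.
steps = nn.ModuleList(UnrolledDeconvStep() for _ in range(5))
S = torch.rand(10, 10); S = 0.5 * (S + S.T)   # observed (convolved) graph
A = torch.zeros_like(S)
for step in steps:
    A = step(A, S)
```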
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
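A hedged sketch of the sampling step (the graphon W below is made up for illustration): draw latent positions uniformly, draw each edge Bernoulli from the graphon value, and grow n as training proceeds.

```python
import numpy as np

def sample_graph_from_graphon(W, n, rng):
    """n-node graph: latents u_i ~ U[0,1], edge (i,j) ~ Bernoulli(W(u_i, u_j))."""
    u = rng.uniform(size=n)
    probs = W(u[:, None], u[None, :])
    A = (rng.uniform(size=(n, n)) < probs).astype(float)
    A = np.triu(A, 1)
    return A + A.T                      # symmetric, no self-loops

W = lambda x, y: 0.8 * np.exp(-3 * np.maximum(x, y))  # example graphon
rng = np.random.default_rng(0)
for n in (64, 128, 256):                # successively increase the graph size
    A = sample_graph_from_graphon(W, n, rng)
    # ...train the GNN for a few epochs on A, then continue on the larger graph
```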
- Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning [0.0]
We propose a permutation-invariant variational autoencoder for graph structured data.
Our model indirectly learns to match the node orderings of the input and output graphs, without imposing a particular node ordering.
We demonstrate the effectiveness of our proposed model on various graph reconstruction and generation tasks.
arXiv Detail & Related papers (2021-04-20T09:44:41Z)
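The paper's encoder is more involved, but the property it must provide can be shown in a few lines: a graph-level code should be invariant to node permutations, which any symmetric pooling (here, a sum) guarantees. This is an illustration of the invariance requirement, not the paper's model.

```python
import torch

def permutation_invariant_readout(node_embeddings):
    """Sum pooling: invariant to any permutation of the node dimension."""
    return node_embeddings.sum(dim=0)

h = torch.randn(6, 8)                   # 6 nodes, 8-dim embeddings
perm = torch.randperm(6)
assert torch.allclose(permutation_invariant_readout(h),
                      permutation_invariant_readout(h[perm]))
```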
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low-pass characteristics of balanced graph cuts, we propose a new GNN variant named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
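A hedged sketch of the low-pass intuition (not the paper's Heatts layer): filtering node features with the heat kernel exp(-tL), here via a truncated Taylor series, damps high graph frequencies, which is what makes smooth cluster-membership signals recoverable.

```python
import numpy as np

def heat_kernel_filter(A, X, t=1.0, terms=10):
    """Low-pass filter features X with exp(-t L), L the normalized Laplacian.

    Uses the Taylor series sum_k (-t L)^k X / k!, truncated at `terms`.
    """
    deg = A.sum(1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(len(A)) - Dinv @ A @ Dinv
    out, term = X.copy(), X.copy()
    for k in range(1, terms):
        term = (-t / k) * (L @ term)    # term_k = (-t/k) * L @ term_{k-1}
        out += term
    return out
```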
- Natural Graph Networks [80.77570956520482]
We show that the more general concept of naturality is sufficient for a graph network to be well-defined.
We define global and local natural graph networks, the latter of which are as scalable as conventional message passing graph neural networks.
arXiv Detail & Related papers (2020-07-16T14:19:06Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
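A hedged sketch of proximity-based pooling (the scoring rule below is invented for illustration, not the paper's method): rank nodes by their multi-hop proximity mass and keep the induced subgraph on the top fraction.

```python
import numpy as np

def proximity_pooling(A, X, ratio=0.5, hops=2):
    """Keep the top `ratio` of nodes by multi-hop proximity mass."""
    P = np.linalg.matrix_power(A + np.eye(len(A)), hops)  # multi-hop proximity
    scores = P.sum(1)                                     # per-node proximity mass
    keep = np.argsort(-scores)[: max(1, int(len(A) * ratio))]
    return A[np.ix_(keep, keep)], X[keep]                 # pooled graph + features
```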
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.