Natural Graph Networks
- URL: http://arxiv.org/abs/2007.08349v2
- Date: Mon, 23 Nov 2020 15:38:46 GMT
- Title: Natural Graph Networks
- Authors: Pim de Haan, Taco Cohen, Max Welling
- Abstract summary: We show that the more general concept of naturality is sufficient for a graph network to be well-defined.
We define global and local natural graph networks, the latter of which are as scalable as conventional message passing graph neural networks.
- Score: 80.77570956520482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A key requirement for graph neural networks is that they must process a graph in a way that does not depend on how the graph is described.
Traditionally this has been taken to mean that a graph network must be equivariant to node permutations.
Here we show that instead of equivariance, the more general concept of naturality is sufficient for a graph network to be well-defined, opening up a larger class of graph networks.
We define global and local natural graph networks, the latter of which are as scalable as conventional message passing graph neural networks while being more flexible.
We give one practical instantiation of a natural network on graphs which uses an equivariant message network parameterization, yielding good performance on several benchmarks.
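As context for the abstract, the following is a minimal numpy sketch of the conventional baseline it generalizes: a message passing layer whose output is equivariant to node permutations. The layer, weights, and toy graph are illustrative assumptions, not the paper's natural-network construction.

```python
import numpy as np

def message_passing_layer(adj, feats, w_self, w_nbr):
    """One round of sum-aggregation message passing.

    Relabeling nodes by a permutation P maps the output h to P @ h,
    so the computation does not depend on how the graph is described.
    """
    messages = adj @ feats @ w_nbr              # aggregate neighbour features
    return np.tanh(feats @ w_self + messages)

# Toy check on a 4-node path graph: permuting the nodes permutes the output.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = rng.normal(size=(4, 3))
w_self, w_nbr = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

perm = np.eye(4)[[2, 0, 3, 1]]                  # a node relabeling P
out = message_passing_layer(adj, feats, w_self, w_nbr)
out_perm = message_passing_layer(perm @ adj @ perm.T, perm @ feats, w_self, w_nbr)
assert np.allclose(perm @ out, out_perm)        # equivariance holds
```

The paper's point is that this strict equivariance requirement can be relaxed to the more general notion of naturality, which admits a larger class of message functions while keeping the network well-defined.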
Related papers
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
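A minimal sketch of the histogram-intersection idea the summary describes, assuming scalar node features binned over each node's closed neighborhood; the function names and binning are illustrative, not GNN-LoFI's actual architecture.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Histogram intersection kernel: sum of bin-wise minima."""
    return np.minimum(h1, h2).sum()

def local_feature_histogram(adj, feats, node, bins):
    """Histogram of a scalar feature over a node's closed neighbourhood."""
    nbrs = np.flatnonzero(adj[node])
    local = np.append(feats[nbrs], feats[node])
    hist, _ = np.histogram(local, bins=bins)
    return hist / max(hist.sum(), 1)            # normalize to a distribution

# Compare the local feature distributions of two nodes.
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
feats = np.array([0.1, 0.7, 0.9])
bins = np.linspace(0.0, 1.0, 5)
h0 = local_feature_histogram(adj, feats, 0, bins)
h1 = local_feature_histogram(adj, feats, 1, bins)
print(histogram_intersection(h0, h1))
```

A full model would presumably operate on learned multi-dimensional feature maps rather than raw scalars, but the comparison of local distributions in place of message passing is the same.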
arXiv Detail & Related papers (2024-01-17T13:04:23Z) - Saliency-Aware Regularized Graph Neural Network [39.82009838086267]
We propose the Saliency-Aware Regularized Graph Neural Network (SAR-GNN) for graph classification.
We first estimate the global node saliency by measuring the semantic similarity between the compact graph representation and node features.
Then the learned saliency distribution is leveraged to regularize the neighborhood aggregation of the backbone.
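A toy sketch of the two steps the summary outlines: saliency from similarity between node features and a compact graph representation (here simply the mean node feature, an assumption), then saliency-weighted neighborhood aggregation. This is illustrative, not the SAR-GNN implementation.

```python
import numpy as np

def node_saliency(graph_repr, feats):
    """Saliency as cosine similarity between each node's features and
    a compact graph representation, normalized with a softmax."""
    sims = feats @ graph_repr
    sims /= (np.linalg.norm(feats, axis=1) * np.linalg.norm(graph_repr) + 1e-8)
    return np.exp(sims) / np.exp(sims).sum()

def saliency_weighted_aggregation(adj, feats, saliency):
    """Neighbourhood aggregation where each neighbour's message is
    scaled by its saliency before summation."""
    return adj @ (saliency[:, None] * feats)

rng = np.random.default_rng(1)
adj = (rng.random((5, 5)) < 0.4).astype(float)
np.fill_diagonal(adj, 0)
feats = rng.normal(size=(5, 8))
s = node_saliency(feats.mean(axis=0), feats)    # mean as the compact repr
print(saliency_weighted_aggregation(adj, feats, s).shape)
```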
arXiv Detail & Related papers (2024-01-01T13:44:16Z) - Sampling for network function learning [0.0]
We consider the feasibility of a graph sampling approach to network function learning.
This can be useful either when the edges are unknown to start with or when the graph is too large (or dynamic) to be processed entirely.
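A hedged sketch of one such sampling scheme: a random walk over a neighbor oracle, which never needs the full edge set. The oracle interface and walk length are illustrative choices, not the paper's estimator.

```python
import random

def random_walk_sample(neighbors, start, steps, rng=random):
    """Sample a subgraph by a simple random walk, useful when the
    full edge set is unknown or too large to enumerate."""
    nodes, edges = {start}, set()
    current = start
    for _ in range(steps):
        nbrs = neighbors(current)
        if not nbrs:
            break
        nxt = rng.choice(nbrs)
        nodes.add(nxt)
        edges.add((min(current, nxt), max(current, nxt)))
        current = nxt
    return nodes, edges

# Toy neighbour oracle for a 6-node cycle; in practice this would be
# a query into the (possibly dynamic) network being learned.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(random_walk_sample(cycle.__getitem__, start=0, steps=10))
```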
arXiv Detail & Related papers (2022-09-11T11:22:34Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
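A minimal sketch of unrolling proximal gradient (ISTA) steps for a toy convolutional mixture A_obs ≈ h0·I + h1·A with an l1 prior on the latent adjacency; the mixture model and constants here are illustrative assumptions, and in a GDN the per-layer parameters would be learned.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm (promotes sparse edges)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_deconvolution(a_obs, h0, h1, step, tau, n_layers):
    """Unrolled ISTA for the toy model A_obs ~ h0*I + h1*A_latent.
    Each unrolled layer is one proximal-gradient step."""
    n = a_obs.shape[0]
    a = np.zeros_like(a_obs)
    for _ in range(n_layers):
        residual = h0 * np.eye(n) + h1 * a - a_obs   # model mismatch
        a = soft_threshold(a - step * h1 * residual, tau)
        a = 0.5 * (a + a.T)                          # keep symmetric
        np.fill_diagonal(a, 0.0)                     # no self-loops
    return a

rng = np.random.default_rng(2)
a_true = np.triu((rng.random((6, 6)) < 0.3).astype(float), 1)
a_true += a_true.T
a_obs = 0.2 * np.eye(6) + 0.8 * a_true
print(np.round(unrolled_deconvolution(a_obs, 0.2, 0.8, 0.5, 0.01, 50), 2))
```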
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - SpeqNets: Sparsity-aware Permutation-equivariant Graph Networks [16.14454388348814]
We present a class of universal, permutation-equivariant graph networks.
They offer fine-grained control over the trade-off between expressivity and scalability and adapt to the sparsity of the graph.
These architectures lead to vastly reduced computation times compared to standard higher-order graph networks.
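SpeqNets' actual tuple machinery is not reproduced here; as a loose, assumption-laden illustration of the sparsity-aware idea, the sketch below restricts a 2-order (pair-based) network to node pairs within a small shortest-path distance, which shrinks the tuple set on sparse graphs.

```python
import itertools
import numpy as np

def sparse_pairs(adj, max_dist):
    """Enumerate node pairs at shortest-path distance <= max_dist.
    Dense 2-order networks operate on ALL n^2 pairs; restricting to
    nearby pairs is one way to let graph sparsity cut the cost."""
    n = adj.shape[0]
    dist = np.full((n, n), np.inf)
    np.fill_diagonal(dist, 0)
    dist[adj > 0] = 1
    for k in range(n):                              # Floyd-Warshall
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return [(i, j) for i, j in itertools.product(range(n), repeat=2)
            if dist[i, j] <= max_dist]

adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]], dtype=float)
print(len(sparse_pairs(adj, 1)), "of", adj.shape[0] ** 2, "pairs kept")
```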
arXiv Detail & Related papers (2022-03-25T21:17:09Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
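A hedged sketch of what a graph-matching pre-training signal can look like: encode two views with a shared one-layer GNN and score soft node-to-node matches. The encoder, augmentation, and scoring rule are illustrative stand-ins, not GMPT's actual objective.

```python
import numpy as np

def encode(adj, feats, w):
    """One-layer GNN encoder shared across both graphs."""
    return np.tanh((adj + np.eye(adj.shape[0])) @ feats @ w)

def matching_score(adj_a, x_a, adj_b, x_b, w):
    """Soft node-to-node matching between two graphs: the sum of
    best-match similarities, usable as a self-supervised signal
    (two views of the same graph should match well)."""
    h_a, h_b = encode(adj_a, x_a, w), encode(adj_b, x_b, w)
    sim = h_a @ h_b.T                       # node-pair similarities
    return np.exp(sim).max(axis=1).sum()    # greedy soft matching

rng = np.random.default_rng(3)
adj = (rng.random((5, 5)) < 0.4).astype(float)
np.fill_diagonal(adj, 0)
x = rng.normal(size=(5, 4))
w = rng.normal(size=(4, 4))
x_aug = x + 0.05 * rng.normal(size=x.shape)  # a perturbed "view"
print(matching_score(adj, x, adj, x_aug, w))
```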
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - BiGCN: A Bi-directional Low-Pass Filtering Graph Neural Network [35.97496022085212]
Many graph convolutional networks can be regarded as low-pass filters for graph signals.
We propose a new model, BiGCN, which represents a graph neural network as a bi-directional low-pass filter.
Our model outperforms previous graph neural networks in the tasks of node classification and link prediction on most of the benchmark datasets.
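A minimal numpy sketch of the low-pass view of graph convolutions: H = (I - alpha * L_norm) X attenuates high-frequency (rapidly varying) node signals. This illustrates only the filtering perspective the summary mentions, not BiGCN's bi-directional design.

```python
import numpy as np

def low_pass_filter(adj, feats, alpha):
    """Low-pass graph filter H = (I - alpha * L_norm) X, where L_norm
    is the symmetric normalized Laplacian: smooths the node signal."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1)))
    lap = np.eye(n) - d_inv_sqrt @ adj @ d_inv_sqrt
    return (np.eye(n) - alpha * lap) @ feats

adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
feats = np.array([[1.0], [0.0], [-1.0]])    # a "high-frequency" signal
print(low_pass_filter(adj, feats, alpha=0.5))
```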
arXiv Detail & Related papers (2021-01-14T09:41:00Z) - Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph
Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
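A short sketch of the polynomial graph filters such analyses build on, y = sum_k h_k S^k x, together with a numerical check of their permutation equivariance; the shift operator and filter taps here are arbitrary illustrative choices.

```python
import numpy as np

def graph_filter(shift, x, taps):
    """Polynomial graph filter y = sum_k h_k * S^k x built from a graph
    shift operator S (adjacency or Laplacian).  Such filters commute
    with node permutations, the source of permutation equivariance."""
    y, s_k_x = np.zeros_like(x), x.copy()
    for h_k in taps:
        y += h_k * s_k_x
        s_k_x = shift @ s_k_x
    return y

rng = np.random.default_rng(4)
shift = (rng.random((5, 5)) < 0.4).astype(float)
np.fill_diagonal(shift, 0)
shift = np.maximum(shift, shift.T)          # symmetric adjacency
x = rng.normal(size=(5, 2))
taps = [1.0, 0.5, 0.25]
perm = np.eye(5)[rng.permutation(5)]
lhs = perm @ graph_filter(shift, x, taps)
rhs = graph_filter(perm @ shift @ perm.T, perm @ x, taps)
assert np.allclose(lhs, rhs)                # permutation equivariance
```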
arXiv Detail & Related papers (2020-03-08T13:02:15Z) - Analyzing Neural Networks Based on Random Graphs [77.34726150561087]
We perform a massive evaluation of neural networks with architectures corresponding to random graphs of various types.
We find that no classical numerical graph invariant by itself suffices to single out the best networks.
We also find that networks with primarily short-range connections perform better than networks which allow for many long-range connections.
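A much smaller version of such a sweep, assuming networkx is available: classical invariants (clustering, average path length) of a few standard random-graph families. The paper's evaluation additionally maps such graphs to neural network architectures, which this sketch omits.

```python
import networkx as nx

generators = {
    "erdos_renyi": lambda: nx.erdos_renyi_graph(64, 0.1, seed=0),
    "watts_strogatz": lambda: nx.watts_strogatz_graph(64, 6, 0.1, seed=0),
    "barabasi_albert": lambda: nx.barabasi_albert_graph(64, 3, seed=0),
}
for name, gen in generators.items():
    g = gen()
    if not nx.is_connected(g):              # guard for path-length metric
        g = g.subgraph(max(nx.connected_components(g), key=len))
    print(f"{name}: clustering={nx.average_clustering(g):.3f}, "
          f"avg_path={nx.average_shortest_path_length(g):.2f}")
```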
arXiv Detail & Related papers (2020-02-19T11:04:49Z) - Theoretically Expressive and Edge-aware Graph Learning [24.954342094176013]
We propose a new Graph Neural Network that combines recent advancements in the field.
We prove that the model is strictly more general than the Graph Isomorphism Network and the Gated Graph Neural Network.
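For reference, a numpy sketch of the Graph Isomorphism Network (GIN) update that the generality claim is measured against, h_v <- MLP((1 + eps) * h_v + sum of neighbour features); the proposed edge-aware model itself is not reproduced here.

```python
import numpy as np

def gin_layer(adj, feats, eps, w1, w2):
    """GIN update: injective sum aggregation over neighbours plus a
    (1 + eps)-scaled self term, followed by a small 2-layer MLP."""
    agg = (1.0 + eps) * feats + adj @ feats
    return np.maximum(agg @ w1, 0.0) @ w2   # ReLU MLP

rng = np.random.default_rng(5)
adj = (rng.random((6, 6)) < 0.3).astype(float)
np.fill_diagonal(adj, 0)
adj = np.maximum(adj, adj.T)
feats = rng.normal(size=(6, 4))
w1, w2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 4))
print(gin_layer(adj, feats, 0.1, w1, w2).shape)
```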
arXiv Detail & Related papers (2020-01-24T13:43:39Z)