Learnable Commutative Monoids for Graph Neural Networks
- URL: http://arxiv.org/abs/2212.08541v1
- Date: Fri, 16 Dec 2022 15:43:41 GMT
- Title: Learnable Commutative Monoids for Graph Neural Networks
- Authors: Euan Ong and Petar Veličković
- Abstract summary: Graph neural networks (GNNs) are highly sensitive to the choice of aggregation function.
We show that GNNs equipped with recurrent aggregators are competitive with state-of-the-art permutation-invariant aggregators.
We propose a framework for constructing learnable, commutative, associative binary operators.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have been shown to be highly sensitive to the
choice of aggregation function. While summing over a node's neighbours can
approximate any permutation-invariant function over discrete inputs,
Cohen-Karlik et al. [2020] proved there are set-aggregation problems for which
summing cannot generalise to unbounded inputs, proposing recurrent neural
networks regularised towards permutation-invariance as a more expressive
aggregator. We show that these results carry over to the graph domain: GNNs
equipped with recurrent aggregators are competitive with state-of-the-art
permutation-invariant aggregators, on both synthetic benchmarks and real-world
problems. However, despite the benefits of recurrent aggregators, their $O(V)$
depth makes them both difficult to parallelise and harder to train on large
graphs. Inspired by the observation that a well-behaved aggregator for a GNN is
a commutative monoid over its latent space, we propose a framework for
constructing learnable, commutative, associative binary operators. And with
this, we construct an aggregator of $O(\log V)$ depth, yielding exponential
improvements for both parallelism and dependency length while achieving
performance competitive with recurrent aggregators. Based on our empirical
observations, our proposed learnable commutative monoid (LCM) aggregator
represents a favourable tradeoff between efficient and expressive aggregators.
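
To make the $O(\log V)$ construction concrete, below is a minimal, hypothetical PyTorch sketch (the class names, operator architecture, and symmetrisation trick are illustrative assumptions, not the authors' implementation): a learnable binary operator is made commutative by feeding it only symmetric functions of its two arguments, and a node's neighbour embeddings are reduced pairwise in a balanced tree. The sketch only enforces commutativity; the paper's framework additionally targets associative operators, which this toy construction does not guarantee.

```python
import torch
import torch.nn as nn


class CommutativeBinaryOp(nn.Module):
    """A learnable binary operator f: R^d x R^d -> R^d, made commutative by
    feeding it only symmetric functions of its two arguments."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # (x + y, x * y) is invariant to swapping x and y, so f(x, y) == f(y, x).
        return self.net(torch.cat([x + y, x * y], dim=-1))


def tree_aggregate(op: nn.Module, h: torch.Tensor) -> torch.Tensor:
    """Reduce neighbour embeddings h of shape (n, d) to a single vector of
    shape (d,) by combining disjoint pairs; the reduction depth is O(log n)."""
    while h.size(0) > 1:
        if h.size(0) % 2 == 1:
            h, carry = h[:-1], h[-1:]  # odd count: carry one element to the next round
        else:
            carry = h[:0]
        h = torch.cat([op(h[0::2], h[1::2]), carry], dim=0)
    return h[0]


# Example: aggregate 7 neighbour embeddings of dimension 16.
op = CommutativeBinaryOp(dim=16)
neighbours = torch.randn(7, 16)
print(tree_aggregate(op, neighbours).shape)  # torch.Size([16])
```

Because each round combines disjoint pairs, the reduction parallelises across pairs and has a dependency length logarithmic in the number of neighbours, in contrast to the $O(V)$-deep sequential dependency of a recurrent aggregator.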
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Going Deeper into Permutation-Sensitive Graph Neural Networks [6.685139672294716]
We devise an efficient permutation-sensitive aggregation mechanism via permutation groups.
We prove that our approach is strictly more powerful than the 2-dimensional Weisfeiler-Lehman (2-WL) graph isomorphism test.
arXiv Detail & Related papers (2022-05-28T08:22:10Z)
- Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups [14.029933823101084]
Group convolutional neural networks (G-CNNs) have been shown to increase parameter efficiency and model accuracy.
In this work, we investigate the properties of representations learned by regular G-CNNs, and show considerable parameter redundancy in group convolution kernels.
We introduce convolution kernels that are separable over the subgroup and channel dimensions.
arXiv Detail & Related papers (2021-10-25T15:56:53Z)
- Meta-Aggregator: Learning to Aggregate for 1-bit Graph Neural Networks [127.32203532517953]
We develop a vanilla 1-bit framework that binarizes both the GNN parameters and the graph features.
Despite the lightweight architecture, we observed that this vanilla framework suffered from insufficient discriminative power in distinguishing graph topologies.
This discovery motivates us to devise meta aggregators to improve the expressive power of vanilla binarized GNNs.
arXiv Detail & Related papers (2021-09-27T08:50:37Z)
- Breaking the Expressive Bottlenecks of Graph Neural Networks [26.000304641965947]
Recently, the Weisfeiler-Lehman (WL) graph isomorphism test has been used to measure the expressiveness of graph neural networks (GNNs).
In this paper, we improve the expressiveness by exploring powerful aggregators.
arXiv Detail & Related papers (2020-12-14T02:36:46Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- Permutation-equivariant and Proximity-aware Graph Neural Networks with Stochastic Message Passing [88.30867628592112]
Graph neural networks (GNNs) are emerging machine learning models on graphs.
Permutation-equivariance and proximity-awareness are two important properties highly desirable for GNNs.
We show that existing GNNs, mostly based on the message-passing mechanism, cannot simultaneously preserve the two properties.
In order to preserve node proximities, we augment the existing GNNs with node representations.
arXiv Detail & Related papers (2020-09-05T16:46:56Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
- Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.