Generalised $f$-Mean Aggregation for Graph Neural Networks
- URL: http://arxiv.org/abs/2306.13826v1
- Date: Sat, 24 Jun 2023 00:39:12 GMT
- Title: Generalised $f$-Mean Aggregation for Graph Neural Networks
- Authors: Ryan Kortvelesy, Steven Morad, Amanda Prorok
- Abstract summary: We present GenAgg, a generalised aggregation operator, which parametrises a function space that includes all standard aggregators.
We show that GenAgg is able to represent the standard aggregators with much higher accuracy than baseline methods.
- Score: 7.22614468437919
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Network (GNN) architectures are defined by their implementations
of update and aggregation modules. While many works focus on new ways to
parametrise the update modules, the aggregation modules receive comparatively
little attention. Because it is difficult to parametrise aggregation functions,
currently most methods select a "standard aggregator" such as $\mathrm{mean}$,
$\mathrm{sum}$, or $\mathrm{max}$. While this selection is often made without
any reasoning, it has been shown that the choice of aggregator has a
significant impact on performance, and the best choice of aggregator is
problem-dependent. Since aggregation is a lossy operation, it is crucial to
select the most appropriate aggregator in order to minimise information loss.
In this paper, we present GenAgg, a generalised aggregation operator, which
parametrises a function space that includes all standard aggregators. In our
experiments, we show that GenAgg is able to represent the standard aggregators
with much higher accuracy than baseline methods. We also show that using GenAgg
as a drop-in replacement for an existing aggregator in a GNN often leads to a
significant boost in performance across various tasks.
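To make the parametrised function space concrete: the classical generalised $f$-mean (Kolmogorov mean) that GenAgg builds on is $M_f(X) = f^{-1}\big(\frac{1}{n}\sum_{i=1}^{n} f(x_i)\big)$, and different choices of $f$ recover different standard aggregators. The sketch below demonstrates the family with fixed, hand-picked $f$; it is not the paper's learnable parametrisation, which parametrises $f$ (plus additional terms) so that the space covers all standard aggregators.

```python
import numpy as np

def f_mean(x, f, f_inv):
    """Classical generalised f-mean: f_inv(mean(f(x)))."""
    x = np.asarray(x, dtype=float)
    return f_inv(np.mean(f(x)))

x = [1.0, 2.0, 4.0]
print(f_mean(x, lambda v: v,       lambda v: v))          # arithmetic mean ~ 2.333
print(f_mean(x, np.log,            np.exp))               # geometric mean  = 2.0
print(f_mean(x, lambda v: 1.0 / v, lambda v: 1.0 / v))    # harmonic mean   ~ 1.714
print(f_mean(x, np.square,         np.sqrt))              # root mean square ~ 2.646
print(f_mean(x, lambda v: v**64,   lambda v: v**(1/64)))  # power mean; p -> inf tends to max = 4
```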
Related papers
- Learnable Commutative Monoids for Graph Neural Networks [0.0]
Graph neural networks (GNNs) are highly sensitive to the choice of aggregation function.
We show that GNNs equipped with recurrent aggregators are competitive with state-of-the-art permutation-invariant aggregators.
We propose a framework for constructing learnable, commutative, associative binary operators.
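As a minimal sketch of the binary-operator view of aggregation (assuming a generic learned operator; the paper's framework additionally constructs operators that are commutative and associative by design, which a plain MLP is not), a set can be reduced by applying the operator in a balanced tree:

```python
import torch
import torch.nn as nn

class BinaryOpAggregator(nn.Module):
    """Reduce a set of feature vectors with a learned binary operator
    applied in a balanced tree (O(log n) sequential depth)."""
    def __init__(self, dim):
        super().__init__()
        # Hypothetical operator: an MLP on concatenated pairs of elements.
        self.op = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Learned identity element used to pad odd-sized levels of the tree.
        self.identity = nn.Parameter(torch.zeros(dim))

    def forward(self, x):  # x: (n, dim)
        while x.shape[0] > 1:
            if x.shape[0] % 2 == 1:
                x = torch.cat([x, self.identity.unsqueeze(0)], dim=0)
            x = self.op(torch.cat([x[0::2], x[1::2]], dim=-1))
        return x[0]

agg = BinaryOpAggregator(dim=8)
out = agg(torch.randn(5, 8))  # aggregate a neighbourhood of 5 feature vectors
```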
arXiv Detail & Related papers (2022-12-16T15:43:41Z)
- Multi-scale Feature Aggregation for Crowd Counting [84.45773306711747]
We propose a multi-scale feature aggregation network (MSFANet).
MSFANet consists of two feature aggregation modules: the short aggregation (ShortAgg) and the skip aggregation (SkipAgg).
arXiv Detail & Related papers (2022-08-10T10:23:12Z)
- IV-GNN : Interval Valued Data Handling Using Graph Neural Network [12.651341660194534]
Graph Neural Network (GNN) is a powerful tool to perform standard machine learning on graphs.
This article proposes an Interval-Valued Graph Neural Network, a novel GNN model in which, for the first time, we relax the restriction that the feature space be countable.
Our model is much more general than existing models, as any countable set is always a subset of the universal set $\mathbb{R}^n$, which is uncountable.
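One simple way to realise interval-valued features (an assumption for illustration, not necessarily IV-GNN's actual interval algebra) is to carry lower and upper endpoints as separate tensors and aggregate them endpoint-wise:

```python
import torch

def interval_mean_aggregate(lower, upper):
    """Endpoint-wise mean of neighbour intervals; each row of (lower, upper)
    is one neighbour's interval-valued feature vector."""
    return lower.mean(dim=0), upper.mean(dim=0)

lo = torch.tensor([[0.0, 1.0], [0.5, 2.0]])
hi = torch.tensor([[1.0, 3.0], [2.5, 4.0]])
print(interval_mean_aggregate(lo, hi))  # ([0.25, 1.50], [1.75, 3.50])
```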
arXiv Detail & Related papers (2021-11-17T15:37:09Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
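FA symmetrises a backbone $\Phi$ by averaging it over a frame $F(x)$ of group elements, $\langle\Phi\rangle(x) = \frac{1}{|F(x)|}\sum_{g\in F(x)} g\cdot\Phi(g^{-1}x)$; the point of the framework is that a small, input-dependent frame suffices even for infinite groups. The toy sketch below takes the degenerate case where the frame is an entire two-element group of sign flips, so the average is exactly invariant:

```python
import torch
import torch.nn as nn

class SignInvariant(nn.Module):
    """Frame averaging over the two-element group {+I, -I}: averaging the
    backbone over all transformed inputs yields exact invariance."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone

    def forward(self, x):
        return 0.5 * (self.backbone(x) + self.backbone(-x))

backbone = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
model = SignInvariant(backbone)
x = torch.randn(5, 3)
assert torch.allclose(model(x), model(-x), atol=1e-6)  # invariance holds
```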
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- Meta-Aggregator: Learning to Aggregate for 1-bit Graph Neural Networks [127.32203532517953]
We develop a vanilla 1-bit framework that binarizes both the GNN parameters and the graph features.
Despite the lightweight architecture, we observe that this vanilla framework suffers from insufficient discriminative power in distinguishing graph topologies.
This discovery motivates us to devise meta aggregators to improve the expressive power of vanilla binarized GNNs.
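A common building block for such 1-bit models (only an assumption about this paper's implementation) is sign binarization with a straight-through estimator, so that gradients can pass through the non-differentiable quantiser:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass; straight-through gradient in the backward,
    clipped to the region |x| <= 1 as in standard binarized networks."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)  # values in {-1, 0, +1}

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()

x = torch.randn(4, 8, requires_grad=True)
y = BinarizeSTE.apply(x)  # 1-bit features
y.sum().backward()        # gradients reach x through the estimator
```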
arXiv Detail & Related papers (2021-09-27T08:50:37Z)
- Breaking the Expressive Bottlenecks of Graph Neural Networks [26.000304641965947]
Recently, the Weisfeiler-Lehman (WL) graph isomorphism test was used to measure the expressiveness of graph neural networks (GNNs).
In this paper, we improve the expressiveness by exploring powerful aggregators.
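A standard illustration (not specific to this paper's aggregators) of why the aggregator bounds expressiveness: mean collapses neighbourhood multisets that differ only in multiplicities, while sum keeps them apart:

```python
import torch

a = torch.tensor([1.0, 1.0, 2.0, 2.0])  # neighbourhood multiset {1, 1, 2, 2}
b = torch.tensor([1.0, 2.0])            # neighbourhood multiset {1, 2}

print(a.mean().item(), b.mean().item())  # 1.5 vs 1.5 -- mean cannot separate them
print(a.sum().item(),  b.sum().item())   # 6.0 vs 3.0 -- sum can
```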
arXiv Detail & Related papers (2020-12-14T02:36:46Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
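A toy rendering of a per-node meta-policy over aggregation depth (purely an illustrative sketch: Policy-GNN learns this choice with reinforcement learning, not the greedy argmax used here):

```python
import torch
import torch.nn as nn

class DepthPolicy(nn.Module):
    """Pick, per node, how many rounds of neighbour aggregation to apply."""
    def __init__(self, dim, max_hops=3):
        super().__init__()
        self.scorer = nn.Linear(dim, max_hops)
        self.max_hops = max_hops

    def forward(self, x, adj):  # x: (n, dim); adj: (n, n), row-normalised
        hops = self.scorer(x).argmax(dim=-1) + 1  # chosen depth per node, 1..max_hops
        h, out = x, torch.zeros_like(x)
        for k in range(1, self.max_hops + 1):
            h = adj @ h                    # one round of mean-style aggregation
            out[hops == k] = h[hops == k]  # nodes with depth k keep this snapshot
        return out

n = 6
adj = torch.rand(n, n)
adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalise the adjacency
out = DepthPolicy(dim=4)(torch.randn(n, 4), adj)
```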
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
- Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN).
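A minimal sketch of the DGN idea, under assumptions about the exact parametrisation: softly assign nodes to groups, batch-normalise each group's share of the features separately, and add the normalised signal back:

```python
import torch
import torch.nn as nn

class DiffGroupNorm(nn.Module):
    """Soft cluster assignment followed by per-group normalization
    (a sketch of the DGN idea; details are assumptions)."""
    def __init__(self, dim, groups, lam=0.01):
        super().__init__()
        self.assign = nn.Linear(dim, groups)
        self.norms = nn.ModuleList(nn.BatchNorm1d(dim) for _ in range(groups))
        self.lam = lam

    def forward(self, h):  # h: (n, dim) node embeddings
        s = torch.softmax(self.assign(h), dim=-1)  # (n, groups) soft assignments
        out = h
        for i, bn in enumerate(self.norms):
            out = out + self.lam * bn(s[:, i:i + 1] * h)  # normalise group i's share
        return out

h = torch.randn(10, 16)
print(DiffGroupNorm(dim=16, groups=4)(h).shape)  # torch.Size([10, 16])
```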
arXiv Detail & Related papers (2020-06-12T07:18:02Z)
- Particle-Gibbs Sampling For Bayesian Feature Allocation Models [77.57285768500225]
Most widely used MCMC strategies rely on an element-wise Gibbs update of the feature allocation matrix.
We develop a Gibbs sampler that can update an entire row of the feature allocation matrix in a single move.
A naive row-wise sampler is impractical for models with a large number of features, as its computational complexity scales exponentially in the number of features.
We therefore develop a Particle Gibbs sampler that targets the same distribution as the row-wise Gibbs updates, but whose computational complexity grows only linearly in the number of features.
arXiv Detail & Related papers (2020-01-25T22:11:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.