Generalizing Aggregation Functions in GNNs: High-Capacity GNNs via
Nonlinear Neighborhood Aggregators
- URL: http://arxiv.org/abs/2202.09145v1
- Date: Fri, 18 Feb 2022 11:49:59 GMT
- Title: Generalizing Aggregation Functions in GNNs: High-Capacity GNNs via
Nonlinear Neighborhood Aggregators
- Authors: Beibei Wang and Bo Jiang
- Abstract summary: Graph neural networks (GNNs) have achieved great success in many graph learning tasks.
Existing GNNs mainly adopt either linear neighborhood aggregation (mean, sum) or a max aggregator in their message propagation.
We re-think the message propagation mechanism in GNNs and aim to develop general nonlinear aggregators for neighborhood information aggregation in GNNs.
- Score: 14.573383849211773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have achieved great success in many graph
learning tasks. The main aspect powering existing GNNs is the multi-layer
network architecture, which learns nonlinear graph representations for
specific learning tasks. The core operation in GNNs is message propagation, in
which each node updates its representation by aggregating its neighbors'
representations. Existing GNNs mainly adopt either linear neighborhood
aggregation (mean, sum) or a max aggregator in their message propagation. (1)
With linear aggregators, the overall nonlinearity and thus the capacity of GNNs
are generally limited, because deeper GNNs usually suffer from the
over-smoothing issue. (2) The max aggregator usually fails to capture the
detailed information of node representations within the neighborhood. To
overcome these issues, we re-think the message propagation mechanism in GNNs
and aim to develop general nonlinear aggregators for neighborhood information
aggregation in GNNs. One main aspect of our proposed nonlinear aggregators is
that they provide an optimal balance between max and mean/sum aggregation.
Thus, our aggregators inherit both (i) high nonlinearity, which increases the
network's capacity, and (ii) detail sensitivity, which preserves the detailed
information of node representations, in GNNs' message propagation.
Experiments on several datasets demonstrate the effectiveness of the
proposed nonlinear aggregators.
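The abstract does not spell out the exact functional form of the proposed aggregators, but one standard way to build a nonlinear aggregator that trades off between mean/sum and max aggregation is a temperature-controlled softmax weighting over neighbors. The sketch below illustrates that general idea under assumed names (`soft_aggregate`, `beta`) and toy data; it is not the authors' exact formulation.

```python
import torch

def soft_aggregate(neighbor_feats: torch.Tensor, beta: float) -> torch.Tensor:
    """Softmax-weighted neighborhood aggregation (illustrative sketch).

    neighbor_feats: (num_neighbors, dim) representations of one node's neighbors.
    beta: temperature controlling the mean-vs-max trade-off;
          beta = 0 recovers the mean aggregator,
          beta -> +inf approaches the elementwise max aggregator.
    """
    # Per-dimension softmax weights over the neighbors, sharpened by beta.
    weights = torch.softmax(beta * neighbor_feats, dim=0)
    # Weighted sum: nonlinear in the inputs, yet still sensitive to every neighbor.
    return (weights * neighbor_feats).sum(dim=0)

# Toy usage: a node with three neighbors and 4-dimensional features.
h = torch.tensor([[1.0, 0.0, 2.0, -1.0],
                  [0.5, 1.0, 0.0,  0.0],
                  [2.0, 0.5, 1.0,  1.0]])
print(soft_aggregate(h, beta=0.0))   # equals the column-wise mean
print(soft_aggregate(h, beta=50.0))  # close to the column-wise max
```

Similar temperature-based aggregators appear elsewhere in the GNN literature (e.g., softmax and power-mean aggregation); whether this paper uses this exact family or another nonlinear form would need to be checked against the full text.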
Related papers
- GGNNs : Generalizing GNNs using Residual Connections and Weighted
Message Passing [0.0]
GNNs excel at capturing relationships and patterns within graphs, enabling effective learning and prediction tasks.
The generalizing power of GNNs is commonly attributed to the message-passing mechanism between layers.
Our technique builds on these results and modifies the message-passing mechanism in two ways: by weighting the messages before they are accumulated at each node, and by adding residual connections.
arXiv Detail & Related papers (2023-11-26T22:22:38Z) - GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push ensemble learning of GNNs one step forward, improving accuracy and robustness to adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z) - Graph Neural Networks are Inherently Good Generalizers: Insights by
Bridging GNNs and MLPs [71.93227401463199]
This paper attributes the major source of GNNs' performance gain to their intrinsic capability by introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training.
arXiv Detail & Related papers (2022-12-18T08:17:32Z) - AdaGNN: A multi-modal latent representation meta-learner for GNNs based
on AdaBoosting [0.38073142980733]
Graph Neural Networks (GNNs) focus on extracting intrinsic network features.
We propose a boosting-based meta-learner for GNNs.
AdaGNN performs exceptionally well for applications with rich and diverse node neighborhood information.
arXiv Detail & Related papers (2021-08-14T03:07:26Z) - A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z) - Enhance Information Propagation for Graph Neural Network by
Heterogeneous Aggregations [7.3136594018091134]
Graph neural networks are emerging as a continuation of deep learning's success on graph data.
We propose to enhance information propagation among GNN layers by combining heterogeneous aggregations.
We empirically validate the effectiveness of HAG-Net on a number of graph classification benchmarks.
arXiv Detail & Related papers (2021-02-08T08:57:56Z) - Identity-aware Graph Neural Networks [63.6952975763946]
We develop Identity-aware Graph Neural Networks (ID-GNNs), a class of message passing GNNs with greater expressive power than the 1-WL test.
ID-GNN extends existing GNN architectures by inductively considering nodes' identities during message passing.
We show that transforming existing GNNs to ID-GNNs yields on average 40% accuracy improvement on challenging node, edge, and graph property prediction tasks.
arXiv Detail & Related papers (2021-01-25T18:59:01Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem (a common form of this objective is sketched after this list).
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z) - The Surprising Power of Graph Neural Networks with Random Node
Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
arXiv Detail & Related papers (2020-10-02T19:53:05Z) - Generalization and Representational Limits of Graph Neural Networks [46.20253808402385]
We prove that several important graph properties cannot be computed by graph neural networks (GNNs) that rely entirely on local information.
We provide the first data dependent generalization bounds for message passing GNNs.
Our bounds are much tighter than existing VC-dimension based guarantees for GNNs, and are comparable to Rademacher bounds for recurrent neural networks.
arXiv Detail & Related papers (2020-02-14T18:10:14Z) - Bilinear Graph Neural Network with Neighbor Interactions [106.80781016591577]
The Graph Neural Network (GNN) is a powerful model for learning representations and making predictions on graph data.
We propose a new graph convolution operator, which augments the weighted sum with pairwise interactions of the representations of neighbor nodes.
We term this framework as Bilinear Graph Neural Network (BGNN), which improves GNN representation ability with bilinear interactions between neighbor nodes.
arXiv Detail & Related papers (2020-02-10T06:43:38Z)
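To make the "Bilinear Graph Neural Network with Neighbor Interactions" entry above more concrete, the following is a plausible form of an aggregation that augments the usual weighted sum with pairwise neighbor interactions. The notation (mixing weight alpha, weight matrix W, elementwise product, edge weights a_{vu}) and details such as normalization and self-loops are assumptions of this sketch, not necessarily the paper's exact definition.

```latex
% h_u: neighbor representations of node v, W: learned weight matrix,
% a_{vu}: normalized edge weight, \odot: elementwise product,
% \alpha: mixing coefficient (all assumed notation).
h_v' = (1-\alpha) \sum_{u \in \mathcal{N}(v)} a_{vu} \, W h_u
       + \alpha \sum_{\substack{u, u' \in \mathcal{N}(v) \\ u < u'}}
         (W h_u) \odot (W h_{u'})
```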
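For the "A Unified View on Graph Neural Networks as Graph Signal Denoising" entry, the graph denoising problem referred to is typically of the following form; the symbols below are assumed for this sketch (S: input feature/signal matrix, F: recovered signal, L: graph Laplacian, c > 0: smoothness weight). Gradient steps on this objective recover familiar neighborhood-averaging propagation rules, which is the sense in which aggregation "denoises" the graph signal.

```latex
% Standard graph signal denoising objective (assumed notation):
% fidelity to the input signal S plus Laplacian smoothness over the graph.
F^{*} = \arg\min_{F} \; \lVert F - S \rVert_F^2
        + c \, \mathrm{tr}\!\left(F^{\top} L F\right)
```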
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.