Learning to Pool in Graph Neural Networks for Extrapolation
- URL: http://arxiv.org/abs/2106.06210v1
- Date: Fri, 11 Jun 2021 07:30:26 GMT
- Title: Learning to Pool in Graph Neural Networks for Extrapolation
- Authors: Jihoon Ko, Taehyung Kwon, Kijung Shin, Juho Lee
- Abstract summary: We present GNP, an $L^p$ norm-like pooling function that is trainable end-to-end for any given task.
We verify experimentally that simply replacing all pooling functions with GNP enables GNNs to extrapolate well on many node-level, graph-level, and set-related tasks.
- Score: 27.879099777205205
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) are one of the most popular approaches to using
deep learning on graph-structured data, and they have shown state-of-the-art
performances on a variety of tasks. However, according to a recent study, a
careful choice of pooling functions, which are used for the aggregation or
readout operation in GNNs, is crucial for enabling GNNs to extrapolate. Without
the ideal combination of pooling functions, which varies across tasks, GNNs
completely fail to generalize to out-of-distribution data, while the number of
possible combinations grows exponentially with the number of layers. In this
paper, we present GNP, an $L^p$ norm-like pooling function that is trainable
end-to-end for any given task. Notably, GNP generalizes most of the widely-used
pooling functions. We verify experimentally that simply replacing all pooling
functions with GNP enables GNNs to extrapolate well on many node-level,
graph-level, and set-related tasks; and GNP sometimes performs even better than
optimal combinations of existing pooling functions.
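Since GNP is described here only at a high level, the following is a minimal PyTorch sketch of what a trainable $L^p$ norm-like pooling can look like. The module name `LpPool`, the softplus constraint on the exponent, and the mean-based normalization are illustrative assumptions, not the authors' exact parameterization.

```python
import torch


class LpPool(torch.nn.Module):
    """Trainable L^p-norm-like pooling over node features.

    Illustrative sketch only: a single trainable exponent p lets the
    pooling interpolate between mean-like behavior (small p, on
    non-negative inputs) and max-like behavior (p -> infinity).
    Not the exact GNP parameterization from the paper.
    """

    def __init__(self, p_init: float = 1.0, eps: float = 1e-6):
        super().__init__()
        # Unconstrained raw parameter; the effective exponent is
        # 1 + softplus(p_raw), which keeps p >= 1.
        self.p_raw = torch.nn.Parameter(torch.tensor(p_init))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, feature_dim); pool over the node dimension.
        p = 1.0 + torch.nn.functional.softplus(self.p_raw)
        # Use magnitudes so fractional powers stay well-defined.
        powered = (x.abs() + self.eps) ** p
        return powered.mean(dim=0) ** (1.0 / p)


pool = LpPool(p_init=1.0)
out = pool(torch.rand(5, 8))  # 5 nodes, 8 features -> shape (8,)
```

With a single trainable exponent, gradient descent can move the pooling between mean-like and max-like behavior, which is the property that lets one module stand in for the usual hand-picked pooling functions.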
Related papers
- Efficient Mixed Precision Quantization in Graph Neural Networks [7.161966906570077]
Graph Neural Networks (GNNs) have become essential for handling large-scale graph applications. Mixed precision quantization emerges as a promising solution to enhance the efficiency of GNN architectures.
arXiv Detail & Related papers (2025-05-14T13:11:39Z) - Diffusing to the Top: Boost Graph Neural Networks with Minimal Hyperparameter Tuning [33.948899558876604]
This work introduces a graph-conditioned latent diffusion framework (GNN-Diff) to generate high-performing GNNs.
We validate our method through 166 experiments across four graph tasks: node classification on small, large, and long-range graphs, as well as link prediction.
arXiv Detail & Related papers (2024-10-08T05:27:34Z) - MAG-GNN: Reinforcement Learning Boosted Graph Neural Network [68.60884768323739]
A particular line of work proposed subgraph GNNs that use subgraph information to improve GNNs' expressivity and achieved great success.
This effectiveness, however, sacrifices the efficiency of GNNs, since it requires enumerating all possible subgraphs.
We propose Magnetic Graph Neural Network (MAG-GNN), a reinforcement learning (RL) boosted GNN, to solve the problem.
arXiv Detail & Related papers (2023-10-29T20:32:21Z) - Separable Gaussian Neural Networks: Structure, Analysis, and Function Approximations [2.17301816060102]
We propose a new feedforward network, the Separable Gaussian Neural Network (SGNN).
SGNN takes advantage of the separable property of Gaussian functions, which splits data into multiple columns and sequentially feeds them into parallel layers.
Experiments demonstrate that SGNN achieves a 100-fold speedup over GRBFNN with a similar level of accuracy.
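For context, the separable property mentioned above is the standard factorization of a multivariate Gaussian into univariate factors (a general identity, not a result from the paper):

$$
\exp\left(-\sum_{i=1}^{d}\frac{(x_i-\mu_i)^2}{\sigma_i^2}\right) = \prod_{i=1}^{d}\exp\left(-\frac{(x_i-\mu_i)^2}{\sigma_i^2}\right)
$$

Each factor depends on a single input dimension, which is what allows the network to split the input into columns and process them in parallel layers.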
arXiv Detail & Related papers (2023-08-13T03:54:30Z) - RF-GNN: Random Forest Boosted Graph Neural Network for Social Bot Detection [10.690802468726078]
The presence of a large number of bots on social media leads to adverse effects.
This paper proposes a Random Forest boosted Graph Neural Network for social bot detection, called RF-GNN.
arXiv Detail & Related papers (2023-04-14T00:57:44Z) - Higher-order Sparse Convolutions in Graph Neural Networks [17.647346486710514]
We introduce a new higher-order sparse convolution based on the Sobolev norm of graph signals.
S-SobGNN shows competitive performance across all applications compared to several state-of-the-art methods.
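For reference, a common definition of the Sobolev norm of a graph signal $x$ in graph signal processing, with graph Laplacian $L$ (the paper's exact parameterization may differ), is

$$
\|x\|_{s,\epsilon} = \left\|(L+\epsilon I)^{s/2}\, x\right\|_2, \qquad \epsilon \ge 0,\ s \in \mathbb{R},
$$

where larger $s$ weights the high-frequency components of $x$ more heavily, so signals with small norm are smooth over the graph.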
arXiv Detail & Related papers (2023-02-21T08:08:18Z) - Superiority of GNN over NN in generalizing bandlimited functions [6.3151583550712065]
Graph Neural Networks (GNNs) have emerged as formidable resources for processing graph-based information across diverse applications.
In this study, we investigate the proficiency of GNNs for such classification tasks, which can also be cast as function-approximation problems.
Our findings highlight the pronounced efficiency of GNNs in generalizing a bandlimited function within an $\varepsilon$-error margin.
arXiv Detail & Related papers (2022-06-13T05:15:12Z) - Graph-adaptive Rectified Linear Unit for Graph Neural Networks [64.92221119723048]
Graph Neural Networks (GNNs) have achieved remarkable success by extending traditional convolution to learning on non-Euclidean data.
We propose Graph-adaptive Rectified Linear Unit (GReLU) which is a new parametric activation function incorporating the neighborhood information in a novel and efficient way.
We conduct comprehensive experiments to show that our plug-and-play GReLU method is efficient and effective given different GNN backbones and various downstream tasks.
arXiv Detail & Related papers (2022-02-13T10:54:59Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
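The denoising problem referenced above typically takes the following form in this line of work (the notation here is a common convention and may differ from the paper): given noisy node features $X$ on a graph with Laplacian $L$, find clean representations $F$ by solving

$$
\min_{F}\ \|F - X\|_F^2 + c\,\mathrm{tr}\left(F^{\top} L F\right),
$$

where the trace term penalizes differences between connected nodes and $c > 0$ controls the assumed smoothness; a single gradient-descent step on this objective recovers a GCN-style propagation rule, which is the sense in which aggregation can be viewed as denoising.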
arXiv Detail & Related papers (2020-10-05T04:57:18Z) - Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z) - Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z) - Non-Local Graph Neural Networks [60.28057802327858]
We propose a simple yet effective non-local aggregation framework with an efficient attention-guided sorting for GNNs.
We perform thorough experiments to analyze disassortative graph datasets and evaluate our non-local GNNs.
arXiv Detail & Related papers (2020-05-29T14:50:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.