GPNet: Simplifying Graph Neural Networks via Multi-channel Geometric
Polynomials
- URL: http://arxiv.org/abs/2209.15454v1
- Date: Fri, 30 Sep 2022 13:03:57 GMT
- Title: GPNet: Simplifying Graph Neural Networks via Multi-channel Geometric
Polynomials
- Authors: Xun Liu, Alex Hay-Man Ng, Fangyuan Lei, Yikuan Zhang, Zhengmin Li
- Abstract summary: Graph Neural Networks (GNNs) are promising approaches for addressing real-world problems on graph-structured data.
These models usually have at least one of four fundamental limitations: over-smoothing, over-fitting, difficulty of training, and a strong homophily assumption.
We identify a set of key designs including (D1) dilated convolution, (D2) multi-channel learning, (D3) self-attention score, and (D4) sign factor to boost learning from different types (i.e. homophily and heterophily) and scales of networks.
We theoretically analyze the model and show that it can approximate various graph filters by adjusting the self-attention score and sign factor.
- Score: 2.521781613847069
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) are a promising deep learning approach for addressing many real-world problems on graph-structured data. However, these
models usually have at least one of four fundamental limitations:
over-smoothing, over-fitting, difficulty of training, and a strong homophily
assumption. For example, Simple Graph Convolution (SGC) is known to suffer from
the first and fourth limitations. To tackle these limitations, we identify a
set of key designs including (D1) dilated convolution, (D2) multi-channel
learning, (D3) self-attention score, and (D4) sign factor to boost learning
from different types (i.e. homophily and heterophily) and scales (i.e. small,
medium, and large) of networks, and combine them into a graph neural network,
GPNet, a simple and efficient one-layer model. We theoretically analyze the
model and show that it can approximate various graph filters by adjusting the
self-attention score and sign factor. Experiments show that GPNet consistently
outperforms baselines in terms of average rank, average accuracy, complexity,
and parameters on semi-supervised and full-supervised tasks, and achieves
competitive performance compared with state-of-the-art models on the inductive learning task.
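The listing carries no implementation, so the following is only a minimal numpy sketch of how designs D1-D4 might compose into a one-layer multi-channel polynomial filter. The function names, the placement of the dilation loop, and the exact way the scores and signs enter are assumptions, not the authors' code.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gpnet_like_filter(A, X, channels=3, dilation=2, scores=None, signs=None):
    """Hypothetical reading of designs D1-D4 (not the authors' code):
    D1 dilated convolution: consecutive channels sit `dilation` hops apart,
    D2 multi-channel learning: each channel holds a different-depth view,
    D3 self-attention score: softmax weights decide each channel's share,
    D4 sign factor: +/-1 per channel flips low-pass vs. high-pass behavior."""
    S = normalized_adjacency(A)
    scores = np.ones(channels) if scores is None else np.asarray(scores, float)
    signs = np.ones(channels) if signs is None else np.asarray(signs, float)
    w = np.exp(scores - scores.max())
    w /= w.sum()                                   # softmax over channels (D3)
    out = np.zeros_like(X, dtype=float)
    H = X.astype(float)
    for k in range(channels):
        for _ in range(dilation):                  # advance `dilation` hops (D1)
            H = S @ H
        out += signs[k] * w[k] * H                 # weighted, signed mix (D2, D4)
    return out                                     # a single linear layer would follow

# toy usage: 4-node path graph, 2-dimensional features
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
X = np.arange(8.0).reshape(4, 2)
print(gpnet_like_filter(A, X))
```

Pushing all learning into the scores, the signs, and a final linear layer, while the propagation itself stays parameter-free, is what would keep such a model one-layer and cheap, consistent with the abstract's emphasis on simplicity.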
Related papers
- Tensor-view Topological Graph Neural Network [16.433092191206534]
Graph neural networks (GNNs) have recently gained growing attention in graph learning.
Existing GNNs only use local information from a very limited neighborhood around each node.
We propose a novel Tensor-view Topological Graph Neural Network (TTG-NN), a class of simple yet effective topological deep learning models.
Real data experiments show that the proposed TTG-NN outperforms 20 state-of-the-art methods on various graph benchmarks.
arXiv Detail & Related papers (2024-01-22T14:55:01Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator (sketched below).
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
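NodeFormer's operator is kernelized so that attention over all node pairs stays tractable; that kernel trick is out of scope for a short sketch, but the plain Gumbel-Softmax relaxation it builds on is easy to show (a generic numpy illustration, not the paper's code):

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0)):
    """Differentiable relaxation of categorical sampling. NodeFormer
    kernelizes a variant of this operator so that attention over all node
    pairs avoids the quadratic cost; this sketch shows only the plain
    operator, not that kernel trick."""
    u = rng.uniform(low=1e-9, high=1.0, size=logits.shape)
    g = -np.log(-np.log(u))                    # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y -= y.max(axis=-1, keepdims=True)         # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

# toy: soft edge choices for two query nodes over four candidate neighbors
logits = np.array([[2.0, 0.5, 0.1, -1.0],
                   [0.0, 0.0, 0.0,  0.0]])
print(gumbel_softmax(logits))                  # each row sums to 1
```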
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
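As a loose illustration of the dual-channel idea only (the paper's construction of its global semantic graph and its propagation scheme will differ in detail), the global channel could be a cosine-similarity kNN graph, with personalized-PageRank-style propagation standing in for long-range information flow:

```python
import numpy as np

def knn_graph(X, k=2):
    """A global graph from feature similarity: connect each node to its k
    nearest neighbors under cosine similarity. One plausible construction;
    the paper's exact recipe for its global graph may differ."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = Xn @ Xn.T
    np.fill_diagonal(sim, -np.inf)             # no self-edges
    A = np.zeros(sim.shape)
    for i, nbrs in enumerate(np.argsort(-sim, axis=1)[:, :k]):
        A[i, nbrs] = 1.0
    return np.maximum(A, A.T)                  # symmetrize

def propagate(A, X, steps=10, alpha=0.2):
    """Personalized-PageRank-style propagation, a common way to realize
    long-range information flow on a graph."""
    S = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    H = X.astype(float)
    for _ in range(steps):
        H = alpha * X + (1 - alpha) * (S @ H)
    return H

# dual channels: the (possibly incomplete) input graph and the global graph;
# a real model would fuse the two outputs with learned weights.
X = np.random.default_rng(0).normal(size=(5, 3))
A_input = np.eye(5, k=1) + np.eye(5, k=-1)     # path graph as incomplete structure
H = 0.5 * propagate(A_input, X) + 0.5 * propagate(knn_graph(X), X)
print(H.shape)
```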
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- Improving Graph Neural Networks with Simple Architecture Design [7.057970273958933]
We introduce several key design strategies for graph neural networks.
We present a simple and shallow model, Feature Selection Graph Neural Network (FSGNN).
We show that the proposed model outperforms other state-of-the-art GNN models and achieves up to 64% improvement in accuracy on node classification tasks.
arXiv Detail & Related papers (2021-05-17T06:46:01Z)
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop AdaGNN, a graph neural network framework with a smooth, adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
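One simple way to realize a per-feature adaptive frequency response, in the spirit of this framework though not necessarily its exact layer, is to subtract a learnable fraction of each feature's Laplacian-filtered component:

```python
import numpy as np

def adaptive_filter_layer(L, H, phi):
    """Per-feature adaptive frequency response: H' = H - (L @ H) * phi.
    phi holds one learnable coefficient per feature channel (values made up
    here). A small phi passes a channel through almost untouched; a larger
    phi suppresses its high-frequency (neighbor-disagreeing) component."""
    return H - (L @ H) * phi       # phi broadcasts over feature columns

# toy: normalized Laplacian of a 3-node path graph
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
d = A.sum(axis=1)
L = np.eye(3) - A / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]
H = np.array([[1., 5.],
              [0., 5.],
              [1., 5.]])           # toy 2-dimensional node features
print(adaptive_filter_layer(L, H, phi=np.array([0.5, 0.1])))
```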
arXiv Detail & Related papers (2021-04-26T19:31:21Z)
- Learning Graph Neural Networks with Positive and Unlabeled Nodes [34.903471348798725]
Graph neural networks (GNNs) are important tools for transductive learning tasks, such as node classification in graphs.
Most GNN models aggregate information from short distances in each round and fail to capture long-distance relationships in graphs.
In this paper, we propose a novel graph neural network framework, long-short distance aggregation networks (LSDAN), to overcome these limitations.
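A rough numpy sketch of the long-short distance idea, with fixed mixing weights standing in for the attention that LSDAN learns over distances:

```python
import numpy as np

def long_short_aggregation(A, X, hops=4, attn=None):
    """Mix views of the graph at several propagation distances: low powers of
    the row-normalized adjacency give short-distance views, higher powers give
    long-distance ones. A loose sketch of the long-short idea with fixed
    weights; the actual model learns attention over the distances."""
    S = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    attn = np.ones(hops) / hops if attn is None else attn
    H, out = X.astype(float), np.zeros(X.shape)
    for w in attn:
        H = S @ H                  # one more hop of propagation
        out += w * H
    return out
```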
arXiv Detail & Related papers (2021-03-08T11:43:37Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
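As a generic illustration of one binarization strategy this line of work builds on (XNOR-Net-style sign-plus-scale, not necessarily the paper's exact scheme):

```python
import numpy as np

def binarize(W):
    """Sign-plus-scale binarization: sign(W) scaled by the mean absolute
    value, so each weight is stored in one bit plus a shared scalar.
    Training would pass gradients through sign() with a straight-through
    estimator; this sketch covers only the forward quantization."""
    alpha = np.abs(W).mean()
    return alpha * np.sign(W)

W = np.random.default_rng(0).normal(size=(4, 3))
print(binarize(W))                 # entries are {-alpha, +alpha}
```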
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
However, GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning methods.
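A minimal sketch of one DE instance, shortest-path-distance features relative to a target node set; DE also covers other distance measures such as random-walk landing probabilities:

```python
import numpy as np
from collections import deque

def spd_encoding(A, targets, max_dist=3):
    """Distance Encoding by shortest-path distance: each node gets a one-hot
    of its (truncated) BFS distance to the target node set, to be
    concatenated to its ordinary features."""
    n = A.shape[0]
    dist = np.full(n, max_dist + 1, dtype=int)   # max_dist + 1 = "far" bucket
    q = deque()
    for t in targets:
        dist[t] = 0
        q.append(t)
    while q:                                     # standard BFS from the target set
        u = q.popleft()
        for v in np.nonzero(A[u])[0]:
            if dist[v] > dist[u] + 1:
                dist[v] = min(dist[u] + 1, max_dist + 1)
                q.append(v)
    return np.eye(max_dist + 2)[dist]            # one-hot over 0..max_dist, "far"

# toy: 4-node path graph, distances measured from node 0
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(spd_encoding(A, targets=[0]))
```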
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- How hard is to distinguish graphs with graph neural networks? [32.09819774228997]
This study derives hardness results for the classification variant of graph isomorphism in the message-passing model (MPNN).
MPNN encompasses the majority of graph neural networks used today and is universal when nodes are given unique features.
An empirical study involving 12 graph classification tasks and 420 networks reveals strong alignment between actual performance and theoretical predictions.
arXiv Detail & Related papers (2020-05-13T22:28:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.