GPNet: Simplifying Graph Neural Networks via Multi-channel Geometric
Polynomials
- URL: http://arxiv.org/abs/2209.15454v1
- Date: Fri, 30 Sep 2022 13:03:57 GMT
- Title: GPNet: Simplifying Graph Neural Networks via Multi-channel Geometric
Polynomials
- Authors: Xun Liu, Alex Hay-Man Ng, Fangyuan Lei, Yikuan Zhang, Zhengmin Li
- Abstract summary: Graph Neural Networks (GNNs) are promising approaches for tackling real-world problems on graph-structured data.
These models usually have at least one of four fundamental limitations: over-smoothing, over-fitting, difficulty in training, and a strong homophily assumption.
We identify a set of key designs, including (D1) dilated convolution, (D2) multi-channel learning, (D3) self-attention score, and (D4) sign factor, to boost learning from different types of networks.
We theoretically analyze the model and show that it can approximate various graph filters by adjusting the self-attention score and sign factor.
- Score: 2.521781613847069
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) are a promising deep learning approach
for tackling many real-world problems on graph-structured data. However, these
models usually have at least one of four fundamental limitations:
over-smoothing, over-fitting, difficulty in training, and a strong homophily
assumption. For example, Simple Graph Convolution (SGC) is known to suffer from
the first and fourth limitations. To address these limitations, we identify a
set of key designs, including (D1) dilated convolution, (D2) multi-channel
learning, (D3) self-attention score, and (D4) sign factor, to boost learning
from different types (i.e., homophily and heterophily) and scales (i.e., small,
medium, and large) of networks, and combine them into a graph neural network,
GPNet: a simple and efficient one-layer model. We theoretically analyze the
model and show that it can approximate various graph filters by adjusting the
self-attention score and sign factor. Experiments show that GPNet consistently
outperforms baselines in terms of average rank, average accuracy, complexity,
and parameter count on semi-supervised and full-supervised tasks, and achieves
competitive performance compared to state-of-the-art models on the inductive
learning task.
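To make the abstract's four designs concrete, below is a minimal NumPy sketch of how a GPNet-like one-layer, multi-channel polynomial filter could be assembled from (D1)-(D4). The dilation pattern, the softmax form of the attention score, and all names are our assumptions for illustration, not the authors' released implementation.

```python
# A minimal NumPy sketch of a GPNet-style one-layer, multi-channel
# polynomial filter, assembled from the four designs (D1-D4) named in the
# abstract. Everything here is an illustrative assumption, not the
# authors' code.
import numpy as np

def sym_norm_adj(A):
    """Symmetrically normalized adjacency with self-loops (as in SGC/GCN)."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gpnet_like_forward(A, X, W, attn_logits, signs, dilations=(1, 2, 4)):
    """One-layer forward pass.
    D1: 'dilated' channels use increasing propagation depths (1, 2, 4, ...).
    D2: each depth is kept as a separate channel.
    D3: a softmax over attn_logits weights the channels (self-attention score).
    D4: signs in {+1, -1} let a channel act as a low- or high-pass component.
    """
    S = sym_norm_adj(A)
    channels = []
    H, step = X, 0
    for k in dilations:                 # propagate up to each dilation depth
        while step < k:
            H = S @ H
            step += 1
        channels.append(H)
    alpha = np.exp(attn_logits) / np.exp(attn_logits).sum()  # channel weights
    Z = sum(a * s * C for a, s, C in zip(alpha, signs, channels))
    return Z @ W                        # single linear layer -> class logits

# Toy usage: 4 nodes, 3 features, 2 classes.
rng = np.random.default_rng(0)
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
X = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
logits = gpnet_like_forward(A, X, W, attn_logits=np.zeros(3), signs=[1, 1, -1])
print(logits.shape)  # (4, 2)
```

With all signs fixed to +1 this reduces to a weighted mix of SGC-style low-pass filters; allowing a channel to carry a -1 sign adds a high-pass component, which is what lets the combination approximate different graph filters on heterophilic graphs.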
Related papers
- DeltaGNN: Graph Neural Network with Information Flow Control [5.563171090433323]
Graph Neural Networks (GNNs) are designed to process graph-structured data through neighborhood aggregations in the message passing process.
Message-passing enables GNNs to understand short-range spatial interactions, but also causes them to suffer from over-smoothing and over-squashing.
We propose a mechanism called "information flow control" to address over-smoothing and over-squashing with linear computational overhead.
We benchmark our model across 10 real-world datasets, including graphs with varying sizes, topologies, densities, and homophilic ratios, showing superior performance.
arXiv Detail & Related papers (2025-01-10T14:34:20Z)
- Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements [54.006506479865344]
We propose a unified evaluation framework for graph-level Graph Neural Networks (GNNs).
This framework provides a standardized setting to evaluate GNNs across diverse datasets.
We also propose a novel GNN model with enhanced expressivity and generalization capabilities.
arXiv Detail & Related papers (2025-01-01T08:48:53Z)
- Scale Invariance of Graph Neural Networks [4.002604752467421]
We address two fundamental challenges in Graph Neural Networks (GNNs).
We propose ScaleNet, a unified network architecture that achieves state-of-the-art performance across four homophilic and two heterophilic benchmark datasets.
For another popular GNN approach to digraphs, we demonstrate the equivalence between Hermitian Laplacian methods and GraphSAGE with incidence normalization.
arXiv Detail & Related papers (2024-11-28T22:06:06Z)
- Tensor-view Topological Graph Neural Network [16.433092191206534]
Graph neural networks (GNNs) have recently gained growing attention in graph learning.
Existing GNNs only use local information from a very limited neighborhood around each node.
We propose a novel Tensor-view Topological Graph Neural Network (TTG-NN), a simple yet effective class of topological deep learning models.
Real data experiments show that the proposed TTG-NN outperforms 20 state-of-the-art methods on various graph benchmarks.
arXiv Detail & Related papers (2024-01-22T14:55:01Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
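As background for the operator named above, here is a minimal NumPy sketch of the plain Gumbel-Softmax trick that NodeFormer builds on: it draws differentiable soft neighbor weights over all nodes from attention logits. The random-feature kernelization that reduces the all-pair computation to linear complexity is NodeFormer's actual contribution and is omitted here.

```python
# Hedged sketch of the (un-kernelized) Gumbel-Softmax trick: differentiable
# sampling of soft neighbor weights for one query node.
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0)):
    """Draw a soft one-hot sample from a categorical given unnormalized logits."""
    u = rng.uniform(low=1e-9, high=1.0, size=logits.shape)
    g = -np.log(-np.log(u))            # Gumbel(0, 1) noise
    y = (logits + g) / tau             # lower tau -> closer to a hard sample
    y = y - y.max()                    # stabilize the softmax
    e = np.exp(y)
    return e / e.sum()

# Attention logits of one query node against all N nodes (all-pair scores).
scores = np.array([2.0, 0.5, -1.0, 0.1])
weights = gumbel_softmax(scores, tau=0.5)
print(weights, weights.sum())          # soft, differentiable, sums to 1
```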
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D²PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
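A rough sketch of the dual-channel idea above, under our own assumptions about how the second channel is built (a kNN graph from feature similarity) and how the channels are merged (a fixed average); D²PT's actual construction may differ:

```python
# Illustrative two-channel propagation: one channel on the given (possibly
# incomplete) graph, one on a kNN graph built from feature similarity.
import numpy as np

def propagate(A, X, steps=2):
    """Row-normalized propagation: A_norm^steps @ X."""
    d = A.sum(axis=1, keepdims=True)
    A_norm = A / np.maximum(d, 1.0)
    H = X
    for _ in range(steps):
        H = A_norm @ H
    return H

def knn_graph(X, k=2):
    """Symmetric kNN graph from cosine similarity of node features."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)           # exclude self-similarity
    A = np.zeros_like(S)
    for i, nbrs in enumerate(np.argsort(-S, axis=1)[:, :k]):
        A[i, nbrs] = 1.0
    return np.maximum(A, A.T)              # symmetrize

rng = np.random.default_rng(1)
A = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,1,0]], dtype=float)  # incomplete
X = rng.normal(size=(4, 3))
H = 0.5 * propagate(A, X) + 0.5 * propagate(knn_graph(X), X)  # merge channels
print(H.shape)  # (4, 3)
```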
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- Graph Neural Networks with Adaptive Frequency Response Filter [55.626174910206046]
We develop a graph neural network framework, AdaGNN, with a well-designed adaptive frequency response filter.
We empirically validate the effectiveness of the proposed framework on various benchmark datasets.
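A minimal sketch of what an adaptive frequency response filter of this kind can look like: each feature channel j is smoothed by a coefficient phi[j] (learned in practice) via H' = H - (L H) diag(phi). The exact parameterization in AdaGNN may differ; this is illustrative only.

```python
# Illustrative per-channel adaptive filtering on the normalized Laplacian.
import numpy as np

def normalized_laplacian(A):
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-9)))
    return np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

def adaptive_filter_layer(A, H, phi):
    """phi[j] ~ 0 keeps channel j as-is (high frequencies preserved);
    phi[j] ~ 1 smooths channel j strongly (low-pass)."""
    L = normalized_laplacian(A)
    return H - (L @ H) * phi               # broadcast per-channel coefficients

rng = np.random.default_rng(2)
A = np.array([[0,1,1],[1,0,1],[1,1,0]], dtype=float)
H = rng.normal(size=(3, 4))
phi = np.array([0.9, 0.5, 0.1, 0.0])       # learned in practice; fixed here
print(adaptive_filter_layer(A, H, phi).shape)  # (3, 4)
```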
arXiv Detail & Related papers (2021-04-26T19:31:21Z)
- Learning Graph Neural Networks with Positive and Unlabeled Nodes [34.903471348798725]
Graph neural networks (GNNs) are important tools for transductive learning tasks, such as node classification in graphs.
Most GNN models aggregate information from short distances in each round and fail to capture long-distance relationships in graphs.
In this paper, we propose a novel graph neural network framework, long-short distance aggregation networks (LSDAN) to overcome these limitations.
arXiv Detail & Related papers (2021-03-08T11:43:37Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
However, GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order Weisfeiler-Lehman tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of techniques for graph representation learning.
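A hedged sketch of the distance-encoding idea: augment node features with structural distances (here, BFS hop distance to a target node set) so that substructures a plain GNN cannot tell apart become separable. Shortest-path distance is only one of the distance measures the paper considers.

```python
# Illustrative distance encoding: BFS hop distance from each node to the
# nearest node in a target set, used as an extra structural feature.
from collections import deque

def bfs_distances(adj, sources):
    """Shortest-path hop distance from each node to the nearest source."""
    dist = {v: float("inf") for v in adj}
    q = deque()
    for s in sources:
        dist[s] = 0
        q.append(s)
    while q:
        v = q.popleft()
        for w in adj[v]:
            if dist[w] == float("inf"):
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path graph
print(bfs_distances(adj, sources={0, 3}))      # {0: 0, 1: 1, 2: 1, 3: 0}
```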
arXiv Detail & Related papers (2020-08-31T23:15:40Z)