Graph Attention Multi-Layer Perceptron
- URL: http://arxiv.org/abs/2206.04355v1
- Date: Thu, 9 Jun 2022 08:56:11 GMT
- Title: Graph Attention Multi-Layer Perceptron
- Authors: Wentao Zhang, Ziqi Yin, Zeang Sheng, Yang Li, Wen Ouyang, Xiaosen Li,
Yangyu Tao, Zhi Yang, Bin Cui
- Abstract summary: We propose a new GNN architecture -- Graph Attention Multi-Layer Perceptron (GAMLP).
GAMLP captures the underlying correlations between different scales of graph knowledge.
It outperforms GAT by 1.3% in predictive accuracy on our large-scale Tencent Video dataset.
- Score: 17.669550943457768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have achieved great success in many graph-based
applications. However, the enormous size and high sparsity of real-world graphs
hinder their application in industrial scenarios. Although several scalable
GNNs have been proposed for large-scale graphs, they adopt a fixed $K$-hop
neighborhood for each node and thus face the over-smoothing issue when large
propagation depths are adopted for nodes in sparse regions. To tackle this
issue, we propose a new GNN architecture -- Graph Attention Multi-Layer
Perceptron (GAMLP), which can capture the underlying correlations between
different scales of graph knowledge. We have deployed GAMLP in Tencent with the
Angel platform, and we further evaluate GAMLP on both real-world datasets and
large-scale industrial datasets. Extensive experiments on these 14 graph
datasets demonstrate that GAMLP achieves state-of-the-art performance while
enjoying high scalability and efficiency. Specifically, it outperforms GAT by
1.3\% in predictive accuracy on our large-scale Tencent Video dataset while
achieving up to $50\times$ training speedup. In addition, it ranks first on the
leaderboards of both the largest homogeneous graph (ogbn-papers100M) and the
largest heterogeneous graph (ogbn-mag) in the Open Graph Benchmark.
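As a rough illustration of the idea in the abstract, the NumPy sketch below combines a node's 0..K-hop smoothed features with per-node softmax attention, so nodes in sparse regions can lean on deeper propagation while dense nodes stay shallow. The simple dot-product scoring and all function names here are illustrative assumptions, not the paper's exact attention formulation.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize A + I (the usual GCN-style propagation matrix)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gamlp_combine(A, X, K, w_att):
    """Attention-weighted combination of 0..K-hop propagated features.

    Hypothetical sketch: each node gets its own softmax weights over the
    K+1 feature scales; the attended feature would then feed a plain MLP.
    """
    P = normalize_adj(A)
    scales = [X]
    for _ in range(K):
        scales.append(P @ scales[-1])            # X, PX, P^2 X, ..., P^K X
    H = np.stack(scales, axis=1)                 # (n, K+1, d)
    logits = H @ w_att                           # (n, K+1) per-node scale scores
    logits -= logits.max(axis=1, keepdims=True)  # numerically stable softmax
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    return (w[:, :, None] * H).sum(axis=1)       # (n, d) attended features
```

Because propagation is decoupled from the MLP, the K feature scales can be precomputed once, which is what makes this family of models scalable.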
Related papers
- LPS-GNN: Deploying Graph Neural Networks on Graphs with 100-Billion Edges [22.66363194587289]
This paper introduces LPS-GNN, a scalable, low-cost, flexible, and efficient GNN framework.
It can perform representation learning on a graph with 100 billion edges using a single GPU in 10 hours and shows a 13.8% improvement in User Acquisition scenarios.
LPS-GNN has been tested on public and real-world datasets, achieving performance lifts of 8.24% to 13.89% over SOTA models in online applications.
arXiv Detail & Related papers (2025-07-19T10:44:26Z)
- Spectral Greedy Coresets for Graph Neural Networks
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- GLISP: A Scalable GNN Learning System by Exploiting Inherent Structural Properties of Graphs [5.410321469222541]
We propose GLISP, a sampling based GNN learning system for industrial scale graphs.
GLISP consists of three core components: graph partitioner, graph sampling service and graph inference engine.
Experiments show that GLISP achieves up to $6.53\times$ and $70.77\times$ speedups over existing GNN systems for training and inference tasks.
arXiv Detail & Related papers (2024-01-06T02:59:24Z)
- Graph Transformers for Large Graphs [57.19338459218758]
This work advances representation learning on single large-scale graphs with a focus on identifying model characteristics and critical design constraints.
A key innovation of this work lies in the creation of a fast neighborhood sampling technique coupled with a local attention mechanism.
We report a 3x speedup and 16.8% performance gain on ogbn-products and snap-patents, while we also scale LargeGT on ogbn-100M with a 5.9% performance improvement.
arXiv Detail & Related papers (2023-12-18T11:19:23Z)
- Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling [60.0185734837814]
Graph neural networks (GNNs) have found extensive applications in learning from graph data.
To bolster the generalization capacity of GNNs, it has become customary to enlarge the set of training graph structures with techniques such as graph augmentation.
This study introduces the concept of Mixture-of-Experts (MoE) to GNNs, with the aim of augmenting their capacity to adapt to a diverse range of training graph structures.
arXiv Detail & Related papers (2023-04-06T01:09:36Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
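The joint pruning described above can be sketched in a few lines. UGS learns differentiable masks jointly with training; in this hypothetical sketch, simple magnitude pruning stands in for those learned masks, and all names are illustrative.

```python
import numpy as np

def magnitude_prune(M, ratio):
    """Zero out roughly the `ratio` fraction of smallest-magnitude entries."""
    flat = np.abs(M).ravel()
    k = int(len(flat) * ratio)
    if k == 0:
        return M.copy()
    thresh = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(M) > thresh, M, 0.0)

def ugs_round(A, W, graph_ratio, weight_ratio):
    """One sparsification round: prune both the adjacency and the weights.

    The surviving (sub-graph, sub-network) pair is a candidate graph
    lottery ticket that would then be retrained and checked for accuracy.
    """
    return magnitude_prune(A, graph_ratio), magnitude_prune(W, weight_ratio)
```

Iterating such rounds, with retraining in between, is what yields progressively sparser tickets in lottery-ticket-style pipelines.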
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Scaling Graph Neural Networks with Approximate PageRank [64.92311737049054]
We present the PPRGo model which utilizes an efficient approximation of information diffusion in GNNs.
In addition to being faster, PPRGo is inherently scalable, and can be trivially parallelized for large datasets like those found in industry settings.
We show that training PPRGo and predicting labels for all nodes in this graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph.
arXiv Detail & Related papers (2020-07-03T09:30:07Z)
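The approximate diffusion such methods build on can be sketched with the classic forward-push approximation of personalized PageRank: mass is pushed from a residual vector into the estimate only where the residual is still large, so the cost depends on the tolerance rather than on the graph size. This is a generic sketch of the push idea, not PPRGo's exact algorithm, and the parameter names are illustrative.

```python
def approx_ppr(neighbors, source, alpha=0.15, eps=1e-5):
    """Forward-push approximation of a personalized PageRank vector.

    `neighbors` maps node -> list of neighbors; `alpha` is the restart
    probability; `eps` bounds the residual-per-degree left unpushed.
    """
    p, r = {}, {source: 1.0}          # estimate and residual mass
    while True:
        # Pick any node whose residual-per-degree still exceeds eps.
        u = next((v for v, rv in r.items()
                  if rv / max(len(neighbors[v]), 1) >= eps), None)
        if u is None:
            return p
        ru = r.pop(u)
        p[u] = p.get(u, 0.0) + alpha * ru          # keep alpha of the mass
        deg = len(neighbors[u])
        if deg:
            push = (1.0 - alpha) * ru / deg
            for v in neighbors[u]:                 # spread the rest locally
                r[v] = r.get(v, 0.0) + push
        else:
            p[u] += (1.0 - alpha) * ru             # dangling node keeps its mass
```

Because each push only touches one node's neighborhood, the per-node computation parallelizes trivially, which is the property the summary above highlights.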
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.