Algorithm and System Co-design for Efficient Subgraph-based Graph Representation Learning
- URL: http://arxiv.org/abs/2202.13538v1
- Date: Mon, 28 Feb 2022 04:29:22 GMT
- Title: Algorithm and System Co-design for Efficient Subgraph-based Graph Representation Learning
- Authors: Haoteng Yin, Muhan Zhang, Yanbang Wang, Jianguo Wang, Pan Li
- Abstract summary: Subgraph-based graph representation learning (SGRL) has been recently proposed to deal with some fundamental challenges encountered by canonical graph neural networks (GNNs).
We propose a novel framework SUREL for scalable SGRL by co-designing the learning algorithm and its system support.
- Score: 16.170895692951
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Subgraph-based graph representation learning (SGRL) has been recently
proposed to deal with some fundamental challenges encountered by canonical
graph neural networks (GNNs), and has demonstrated advantages in many important
data science applications such as link, relation and motif prediction. However,
current SGRL approaches suffer from a scalability issue since they require
extracting subgraphs for each training and testing query. Recent solutions that
scale up canonical GNNs may not apply to SGRL. Here, we propose a novel
framework SUREL for scalable SGRL by co-designing the learning algorithm and
its system support. SUREL adopts walk-based decomposition of subgraphs and
reuses the walks to form subgraphs, which substantially reduces the redundancy
of subgraph extraction and supports parallel computation. Experiments over
seven homogeneous, heterogeneous and higher-order graphs with millions of nodes
and edges demonstrate the effectiveness and scalability of SUREL. In
particular, compared to SGRL baselines, SUREL achieves a 10$\times$ speed-up with
comparable or even better prediction performance; compared to canonical GNNs,
it improves prediction accuracy by 50%. Applied to the brain vessel prediction
task, SUREL also significantly outperforms the state-of-the-art baseline in
both prediction accuracy and efficiency.
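To make the walk-reuse idea concrete, here is a minimal Python sketch (not the authors' implementation; the toy graph, walk counts, and join step are illustrative assumptions). The key observation is that walks rooted at a node can be sampled once and then shared by every query touching that node, so per-query subgraph extraction disappears and queries parallelize trivially:

```python
import random

# Toy adjacency list; node IDs and sizes are illustrative only.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}

walk_cache = {}  # node -> sampled walks, reused across all queries

def walks_for(node, num_walks=4, walk_len=3):
    """Sample random walks rooted at `node`, or reuse cached ones."""
    if node not in walk_cache:
        walks = []
        for _ in range(num_walks):
            walk, cur = [node], node
            for _ in range(walk_len):
                cur = random.choice(adj[cur])
                walk.append(cur)
            walks.append(walk)
        walk_cache[node] = walks
    return walk_cache[node]

def link_query_subgraph(u, v):
    """A link query's 'subgraph' is the union of the endpoints' walk
    sets; no per-query extraction is performed. (SUREL additionally
    attaches relative positional encodings, omitted here.)"""
    return walks_for(u) + walks_for(v)

print(link_query_subgraph(0, 3))  # any later query touching node 0 reuses its walks
```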
Related papers
- Faster Inference Time for GNNs using coarsening [1.323700980948722]
Coarsening-based methods reduce the graph to a smaller one, resulting in faster computation.
No previous research has tackled this cost during inference.
This paper presents a novel approach to improving the scalability of GNNs through subgraph-based techniques.
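As a rough illustration of the coarsening idea (not this paper's specific method), merging nodes into supernodes via an assignment matrix P shrinks the graph that message passing must traverse; the cluster assignment below is an assumed toy input:

```python
import numpy as np

# Toy 6-node graph and an assumed 2-cluster assignment; real coarsening
# methods choose clusters via matchings or spectral criteria.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
assign = np.array([0, 0, 0, 1, 1, 1])    # node -> supernode
P = np.eye(assign.max() + 1)[assign]     # 6x2 one-hot assignment matrix
A_coarse = P.T @ A @ P                   # 2x2 coarse adjacency
print(A_coarse)  # diagonal aggregates intra-cluster edges, off-diagonal the cut
```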
arXiv Detail & Related papers (2024-10-19T06:27:24Z)
- Haste Makes Waste: A Simple Approach for Scaling Graph Neural Networks [37.41604955004456]
Graph neural networks (GNNs) have demonstrated remarkable success in graph representation learning.
Various sampling approaches have been proposed to scale GNNs to applications with large-scale graphs.
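A minimal sketch of the neighbor-sampling idea such approaches build on (GraphSAGE-style fan-out capping; the graph and fan-out value are assumptions):

```python
import random

# Toy adjacency list with one high-degree hub node.
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0, 4], 4: [0, 3], 5: [0]}

def sample_neighbors(node, fanout=2):
    """Cap the neighborhood at `fanout` sampled neighbors so the
    per-layer cost stays bounded even for hub nodes."""
    nbrs = adj[node]
    return list(nbrs) if len(nbrs) <= fanout else random.sample(nbrs, fanout)

print(sample_neighbors(0))  # e.g. [3, 5]: two neighbors instead of five
```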
arXiv Detail & Related papers (2024-10-07T18:29:02Z)
- Enhanced Expressivity in Graph Neural Networks with Lanczos-Based Linear Constraints [7.605749412696919]
Graph Neural Networks (GNNs) excel in handling graph-structured data but often underperform in link prediction tasks.
We present a novel method to enhance the expressivity of GNNs by embedding induced subgraphs into the graph Laplacian matrix's eigenbasis.
Our method achieves 20x and 10x speedups while requiring only 5% and 10% of the data from the PubMed and OGBL-Vessel datasets, respectively.
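A hedged sketch of the underlying mechanics: SciPy's `eigsh` wraps an implicitly restarted Lanczos solver, so only a few Laplacian eigenpairs are ever computed, and an induced subgraph can be read off in that eigenbasis (the toy graph and the choice k=2 are assumptions, not the paper's setup):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Toy graph and its combinatorial Laplacian L = D - A.
A = sp.csr_matrix(np.array([[0, 1, 1, 0],
                            [1, 0, 1, 0],
                            [1, 1, 0, 1],
                            [0, 0, 1, 0]], dtype=float))
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

# eigsh uses a Lanczos-type iteration: k eigenpairs, never the full spectrum.
vals, vecs = eigsh(L, k=2, which='SM')

subgraph_nodes = [0, 1, 2]         # an induced subgraph of interest
embedding = vecs[subgraph_nodes]   # its rows in the Laplacian eigenbasis
print(vals, embedding.shape)       # 2 eigenvalues, (3, 2) embedding
```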
arXiv Detail & Related papers (2024-08-22T12:22:00Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
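For intuition only, here is a linear, gradient-based sketch of decorrelation by sample reweighting (the paper's method is nonlinear and graph-aware; the toy data, objective, and step size are all assumptions): learn weights w that drive the off-diagonal entries of the weighted feature covariance toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 1] += 0.8 * X[:, 0]                 # inject a spurious correlation

w = np.ones(len(X)) / len(X)             # sample weights (a distribution)
for _ in range(500):
    Xc = X - w @ X                       # weighted-mean centering
    cov = (Xc * w[:, None]).T @ Xc       # weighted covariance
    off = cov - np.diag(np.diag(cov))    # off-diagonals = cross-correlations
    grad = 2 * ((Xc @ off) * Xc).sum(axis=1)  # d/dw of sum(off**2), mean term ignored
    w = w * np.exp(-0.01 * grad)         # exponentiated-gradient step keeps w > 0
    w /= w.sum()                         # renormalize to a distribution

Xc = X - w @ X
print(np.round((Xc * w[:, None]).T @ Xc, 3))  # off-diagonals shrink toward 0
```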
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
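A minimal sketch of combining one-time (pre-computed) message passing with random projection; the single relation, dimensions, and Gaussian projection are illustrative assumptions rather than RpHGNN's exact operators:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d, d_proj = 4, 5, 16, 8
X_items = rng.normal(size=(n_items, d))           # item features
R = rng.integers(0, 2, size=(n_users, n_items))   # one user-item relation

# One-time message passing: each user aggregates its items' features,
# turning the heterogeneous graph into a regular-shaped tensor.
H = R @ X_items                                   # shape (4, 16)

# A random projection compresses the result while roughly preserving
# distances (Johnson-Lindenstrauss style), cutting downstream cost.
P = rng.normal(size=(d, d_proj)) / np.sqrt(d_proj)
H_small = H @ P                                   # shape (4, 8), precomputed once
print(H_small.shape)
```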
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias through a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- SUREL+: Moving from Walks to Sets for Scalable Subgraph-based Graph Representation Learning [30.77279782495266]
Subgraph-based graph representation learning (SGRL) has emerged as a powerful tool in many prediction tasks on graphs.
We propose a novel framework SUREL+ that upgrades SUREL by using node sets instead of walks to represent subgraphs.
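The walk-to-set move can be sketched in a couple of lines (toy walks assumed; SUREL+ also builds sparse set-indexing structures that are omitted here):

```python
# Toy walks rooted at one node, as a walk-based SGRL method would sample them.
walks = [[0, 1, 3], [0, 2, 1], [0, 1, 4]]

# SUREL+ stores the node *set* the walks cover rather than the walks,
# eliminating repeated node entries and enabling cheap set joins per query.
node_set = sorted({v for walk in walks for v in walk})
print(node_set)   # [0, 1, 2, 3, 4]: 5 entries instead of 9 walk slots
```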
arXiv Detail & Related papers (2023-03-06T18:58:13Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training scheme, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- SCARA: Scalable Graph Neural Networks with Feature-Oriented Optimization [23.609017952951454]
We propose SCARA, a scalable Graph Neural Network (GNN) with feature-oriented optimization for graph computation.
SCARA efficiently computes graph embeddings from node features, and further selects and reuses feature computation results to reduce overhead.
It completes precomputation on the largest available billion-scale GNN dataset, Papers100M (111M nodes, 1.6B edges), in 100 seconds.
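A loose sketch of the feature-oriented view (the propagation rule, decay, and caching policy below are assumptions for illustration): embeddings are computed by pushing feature columns through the graph once and reusing the cached results thereafter.

```python
import numpy as np

# Toy row-normalized adjacency and one-hot features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A /= A.sum(axis=1, keepdims=True)
X = np.eye(3)

cache = {}  # feature column -> propagated result, reused thereafter

def propagate(col, hops=2, alpha=0.5):
    """PPR-style propagation of a single feature column, computed once."""
    if col not in cache:
        h, out, coef = X[:, col].copy(), np.zeros(len(X)), alpha
        for _ in range(hops + 1):
            out += coef * h      # geometric weighting over hop distances
            h = A @ h
            coef *= 1 - alpha
        cache[col] = out
    return cache[col]

emb = np.stack([propagate(c) for c in range(X.shape[1])], axis=1)
print(emb)   # any later call for a column is a pure cache lookup
```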
arXiv Detail & Related papers (2022-07-19T10:32:11Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
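The low-rank idea admits a compact sketch (truncated SVD as a stand-in for whatever factorization the paper uses; sizes and rank are assumptions): propagation through a dense matrix is replaced by three thin products.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 100, 16, 5
A = rng.random((n, n)); A = (A + A.T) / 2   # toy dense propagation matrix
X = rng.normal(size=(n, d))                 # node features

U, s, Vt = np.linalg.svd(A)   # rank-r truncation is optimal in Frobenius
                              # norm by the Eckart-Young theorem
H = U[:, :r] @ (np.diag(s[:r]) @ (Vt[:r] @ X))  # A_r @ X via thin products
print(H.shape)                # cost O(n*r*d) instead of O(n^2*d)
```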
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
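The analyzed model class is small enough to write out directly; a sketch of a one-hidden-layer GNN forward pass (the aggregation matrix, widths, and ReLU choice are assumptions consistent with such analyses, not this paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 5, 4, 8                  # nodes, feature dim, hidden width
A = np.ones((n, n)) / n            # toy normalized aggregation matrix
X = rng.normal(size=(n, d))        # node features
W1 = rng.normal(size=(d, k))       # hidden-layer weights (learned)
w2 = rng.normal(size=(k,))         # output weights (learned)

H = np.maximum(A @ X @ W1, 0.0)    # aggregate neighbors, then ReLU
y_hat = H @ w2                     # one scalar per node (regression);
print(y_hat)                       # apply a sigmoid for binary labels
```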
arXiv Detail & Related papers (2020-06-25T00:45:52Z)