AdaptGear: Accelerating GNN Training via Adaptive Subgraph-Level Kernels on GPUs
- URL: http://arxiv.org/abs/2305.17408v1
- Date: Sat, 27 May 2023 08:22:12 GMT
- Title: AdaptGear: Accelerating GNN Training via Adaptive Subgraph-Level Kernels on GPUs
- Authors: Yangjie Zhou, Yaoxu Song, Jingwen Leng, Zihan Liu, Weihao Cui, Zhendong Zhang, Cong Guo, Quan Chen, Li Li, Minyi Guo
- Abstract summary: Graph neural networks (GNNs) are powerful tools for exploring and learning from graph structures and features.
Prior works have proposed to exploit the sparsity of the input graph to accelerate GNNs, using a full-graph-level or block-level sparsity format.
We show that these formats fail to balance the sparsity benefit against kernel execution efficiency.
We propose a novel system, referred to as AdaptGear, that addresses the challenge of optimizing GNN performance.
- Score: 26.607519045805745
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) are powerful tools for exploring and learning
from graph structures and features. As such, achieving high-performance
execution for GNNs is crucially important. Prior works have proposed to
exploit the sparsity (i.e., low density) of the input graph to accelerate GNNs,
using a full-graph-level or block-level sparsity format. We show that
these approaches fail to balance the sparsity benefit against kernel execution
efficiency. In this paper, we propose a novel system, referred to as AdaptGear,
that addresses the challenge of optimizing GNN performance by leveraging
kernels tailored to the density characteristics at the subgraph level. We also
propose a method that dynamically chooses the optimal set of kernels for a
given input graph. Our evaluation shows that AdaptGear achieves a significant
performance improvement, up to $6.49 \times$ ($1.87 \times$ on average), over
the state-of-the-art works on two mainstream NVIDIA GPUs across various
datasets.
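To make the core idea concrete, below is a minimal PyTorch sketch of density-adaptive kernel dispatch. The row-block partitioning and the fixed `block_rows` / `dense_threshold` values are illustrative assumptions only; the paper's system chooses its kernel set dynamically per input graph rather than with a static threshold.

```python
# Minimal sketch of density-adaptive subgraph-level kernel dispatch, in the
# spirit of the AdaptGear abstract. The row-block "subgraphs" and the 0.3
# density threshold are illustrative assumptions, not the paper's policy.
import torch

def aggregate_adaptive(adj: torch.Tensor, feats: torch.Tensor,
                       block_rows: int = 1024,
                       dense_threshold: float = 0.3) -> torch.Tensor:
    """Compute adj @ feats, picking a kernel per row block ("subgraph")."""
    n, d = adj.shape[0], feats.shape[1]
    out = torch.empty(n, d, dtype=feats.dtype, device=feats.device)
    for start in range(0, n, block_rows):
        block = adj[start:start + block_rows]
        density = (block != 0).float().mean().item()
        if density >= dense_threshold:
            # Dense GEMM: better hardware utilization on near-dense blocks.
            out[start:start + block.shape[0]] = block @ feats
        else:
            # Sparse SpMM: skips the zeros on near-empty blocks.
            out[start:start + block.shape[0]] = torch.sparse.mm(
                block.to_sparse(), feats)
    return out
```

The dispatch captures the trade-off the abstract describes: a dense GEMM wastes work on near-empty blocks, while a sparse kernel underutilizes the GPU on near-dense ones, so choosing per subgraph covers both regimes.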
Related papers
- Diffusing to the Top: Boost Graph Neural Networks with Minimal Hyperparameter Tuning [33.948899558876604]
This work introduces a graph-conditioned latent diffusion framework (GNN-Diff) to generate high-performing GNNs.
We validate our method through 166 experiments across four graph tasks: node classification on small, large, and long-range graphs, as well as link prediction.
arXiv Detail & Related papers (2024-10-08T05:27:34Z)
- SiHGNN: Leveraging Properties of Semantic Graphs for Efficient HGNN Acceleration [9.85638913900595]
Heterogeneous Graph Neural Networks (HGNNs) have expanded graph representation learning to heterogeneous graph fields.
Recent studies have demonstrated their superior performance across various applications, including medical analysis and recommendation systems.
We propose a lightweight hardware accelerator for HGNNs, called SiHGNN. This accelerator incorporates a tree-based Semantic Graph Builder for efficient semantic graph generation and features a novel Graph Restructurer for optimizing semantic graph layouts.
arXiv Detail & Related papers (2024-08-27T14:20:21Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
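As a hedged illustration of the SGGC entry above, the sketch below greedily picks ego-graph centers that are spread out in spectral-embedding space. The farthest-point rule is a simplified stand-in for the paper's actual greedy objective, and `emb` is assumed to hold per-node spectral embeddings (e.g., leading Laplacian eigenvectors).

```python
# Hedged sketch of coreset selection over spectral embeddings (SGGC-style).
# The farthest-point greedy rule is an illustrative stand-in, not the
# paper's exact criterion.
import torch

def greedy_coreset(emb: torch.Tensor, k: int) -> list[int]:
    """Pick k nodes (ego-graph centers) spread out in embedding space."""
    chosen = [0]                                  # arbitrary first center
    dist = torch.cdist(emb, emb[chosen]).squeeze(-1)
    for _ in range(k - 1):
        nxt = int(torch.argmax(dist))             # farthest node so far
        chosen.append(nxt)
        dist = torch.minimum(
            dist, torch.cdist(emb, emb[nxt:nxt + 1]).squeeze(-1))
    return chosen
```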
- MAG-GNN: Reinforcement Learning Boosted Graph Neural Network [68.60884768323739]
A particular line of work proposed subgraph GNNs that use subgraph information to improve GNNs' expressivity and achieved great success.
This effectiveness, however, comes at the cost of efficiency, since all possible subgraphs must be enumerated.
We propose Magnetic Graph Neural Network (MAG-GNN), a reinforcement learning (RL) boosted GNN, to solve the problem.
arXiv Detail & Related papers (2023-10-29T20:32:21Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
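To illustrate the RpHGNN entry above, here is a hedged sketch of random-projection pre-computation: neighbor aggregation is done once up front, and a fixed random projection bounds the width of the pre-computed features. The single-relation aggregation and plain Gaussian projection are illustrative simplifications of the paper's heterogeneous scheme.

```python
# Hedged sketch of random-projection pre-computation (RpHGNN-style).
# One-time message passing produces multi-hop features; random projection
# keeps each hop's output at a constant width for a downstream MLP.
import torch

def precompute_projected(adj: torch.Tensor, x: torch.Tensor,
                         out_dim: int = 128, hops: int = 2) -> torch.Tensor:
    torch.manual_seed(0)                     # fixed projection matrices
    feats, h = [x], x
    for _ in range(hops):
        h = adj @ h                          # one-time message passing
        proj = torch.randn(h.shape[1], out_dim) / out_dim ** 0.5
        feats.append(h @ proj)               # random projection to out_dim
    return torch.cat(feats, dim=-1)          # input to a plain MLP
```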
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages the transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- GPN: A Joint Structural Learning Framework for Graph Neural Networks [36.38529113603987]
We propose a GNN-based joint learning framework that simultaneously learns the graph structure and the downstream task.
Our method is the first GNN-based bilevel optimization framework for resolving this task.
arXiv Detail & Related papers (2022-05-12T09:06:04Z)
- Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, namely the Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN makes the first attempt to learn to adapt to the optimal graph kernel in a unified manner.
Experiments on widely acknowledged benchmark datasets demonstrate the outstanding performance of the proposed AKGNN.
arXiv Detail & Related papers (2021-12-08T20:23:58Z)
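As a hedged sketch of the adaptive-kernel idea in the AKGNN entry above, the layer below uses a single learnable parameter to interpolate between an all-pass (identity) and a low-pass (neighbor-averaging) filter. AKGNN's actual parameterization (a learnable spectral bound) differs in detail; this is a simplified stand-in.

```python
# Hedged sketch of a learnable, data-adaptive graph kernel (AKGNN-style).
# A single parameter mixes identity and neighbor aggregation, learned
# end-to-end with the task; the paper's exact filter differs.
import torch
import torch.nn as nn

class AdaptiveKernelConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.mix = nn.Parameter(torch.zeros(1))   # learnable kernel knob

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        alpha = torch.sigmoid(self.mix)           # in (0, 1)
        # Adaptive kernel: alpha * I + (1 - alpha) * A_norm.
        h = alpha * x + (1 - alpha) * (adj_norm @ x)
        return self.lin(h)
```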
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can significantly improve the performance of GNNs, with larger gains on noisier datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
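To ground the PTDNet entry above, here is a minimal sketch of learned edge pruning: a small network scores each edge from its endpoint features, and a sparsity penalty on the soft edge mask discourages task-irrelevant edges. The two-layer scorer and plain sigmoid relaxation are illustrative stand-ins for the paper's parameterized networks and its relaxation.

```python
# Hedged sketch of PTDNet-style learned edge pruning. The scorer
# architecture and sigmoid relaxation are illustrative assumptions.
import torch
import torch.nn as nn

class EdgeDenoiser(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        # Scores an edge from the concatenated endpoint features.
        self.scorer = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor):
        src, dst = edge_index               # shape: (2, num_edges)
        logits = self.scorer(torch.cat([x[src], x[dst]], dim=-1)).squeeze(-1)
        keep_prob = torch.sigmoid(logits)   # soft edge mask in (0, 1)
        penalty = keep_prob.sum()           # expected number of kept edges
        return keep_prob, penalty
```

The mask weights messages along each edge during GNN aggregation, and `penalty` is added to the task loss with a coefficient so that task-irrelevant edges are driven toward zero.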
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data that works universally across node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
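Since the FLAG entry above states its mechanism directly, here is a minimal sketch of one FLAG-style training step for node classification, assuming a PyTorch GNN with signature `model(x, edge_index)`. The `step_size` and `ascent_steps` defaults are illustrative, not the paper's tuned settings.

```python
# Hedged sketch of one FLAG-style training step: gradient ascent on a
# feature perturbation, with model gradients accumulated across steps.
import torch
import torch.nn.functional as F

def flag_step(model, x, edge_index, y, train_mask, optimizer,
              step_size: float = 1e-2, ascent_steps: int = 3) -> float:
    model.train()
    optimizer.zero_grad()
    # Random initial perturbation of the node features.
    perturb = torch.empty_like(x).uniform_(-step_size, step_size)
    perturb.requires_grad_()
    loss = F.cross_entropy(
        model(x + perturb, edge_index)[train_mask], y[train_mask]
    ) / ascent_steps
    for _ in range(ascent_steps - 1):
        loss.backward()  # accumulates grads into model params and perturb
        # Gradient *ascent* on the perturbation (sign update).
        perturb.data.add_(step_size * torch.sign(perturb.grad))
        perturb.grad.zero_()
        loss = F.cross_entropy(
            model(x + perturb, edge_index)[train_mask], y[train_mask]
        ) / ascent_steps
    loss.backward()
    optimizer.step()     # gradient averaged over the ascent steps
    return loss.item()
```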