gSuite: A Flexible and Framework Independent Benchmark Suite for Graph
Neural Network Inference on GPUs
- URL: http://arxiv.org/abs/2210.11601v1
- Date: Thu, 20 Oct 2022 21:18:51 GMT
- Title: gSuite: A Flexible and Framework Independent Benchmark Suite for Graph
Neural Network Inference on GPUs
- Authors: Taha Tekdoğan, Serkan Göktaş, Ayse Yilmazer-Metin
- Abstract summary: We develop a benchmark suite that is framework independent, supports versatile computational models, is easily configurable, and can be used with architectural simulators without additional effort.
gSuite enables performing detailed performance characterization studies on GNN Inference using both contemporary GPU profilers and architectural GPU simulators.
We use several evaluation metrics to rigorously measure the performance of GNN computation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As interest in Graph Neural Networks (GNNs) grows, the importance of
benchmarking and performance characterization studies of GNNs is increasing. So
far, many studies have investigated the performance and computational
efficiency of GNNs. However, the work done so far has been carried out using a
few high-level GNN frameworks. Although these frameworks provide ease of use,
they carry many dependencies on other existing libraries. The layers of
implementation details and the dependencies complicate the performance analysis
of GNN models built on top of these frameworks, especially when using
architectural simulators. Furthermore, prior characterization studies generally
overlook the different approaches to GNN computation and evaluate only one of
the common computational models. Motivated by these shortcomings, we developed
a benchmark suite that is framework independent, supports versatile
computational models, is easily configurable, and can be used with
architectural simulators without additional effort.
Our benchmark suite, which we call gSuite, uses only the hardware vendor's
libraries and is therefore independent of any other framework. gSuite enables
detailed performance characterization studies of GNN inference using both
contemporary GPU profilers and architectural GPU simulators. To illustrate the
benefits of our new benchmark suite, we perform a detailed characterization
study with a set of well-known GNN models on various datasets, running gSuite
both on a real GPU card and on a timing-detailed GPU simulator. We also examine
the effect of computational models on performance. We use several evaluation
metrics to rigorously measure the performance of GNN computation.
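To make the abstract's point concrete, GNN inference in the common computational model reduces to two kernels: a dense GEMM for the feature transformation and a sparse-dense matmul (SpMM) for neighbor aggregation, which on a GPU map directly to vendor-library calls (e.g., cuBLAS GEMM and cuSPARSE SpMM). The sketch below is purely illustrative and is not gSuite's actual code; the graph, shapes, and function names are invented for the example.

```python
import numpy as np
from scipy.sparse import csr_matrix

def gcn_layer(adj: csr_matrix, features: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One GCN-style layer: H' = ReLU(A_hat @ H @ W).

    On a GPU, `features @ weight` would be a dense GEMM (cuBLAS) and
    `adj @ transformed` a sparse-dense matmul, SpMM (cuSPARSE).
    """
    transformed = features @ weight        # dense GEMM: feature transformation
    aggregated = adj @ transformed         # SpMM: neighbor aggregation
    return np.maximum(aggregated, 0.0)     # ReLU activation

# Tiny 3-node example graph, stored as a row-normalized CSR adjacency
# matrix (each row sums to 1, mimicking GCN's normalized A_hat).
rows = [0, 0, 1, 1, 2, 2]
cols = [0, 1, 0, 1, 1, 2]
vals = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
A_hat = csr_matrix((vals, (rows, cols)), shape=(3, 3))

H = np.ones((3, 4))          # node features: 3 nodes, 4 input channels
W = np.full((4, 2), 0.25)    # layer weights: 4 -> 2 channels

out = gcn_layer(A_hat, H, W)
print(out.shape)  # (3, 2)
```

Because the SpMM and the GEMM dominate inference time, a framework-independent suite that issues these kernels directly (rather than through a high-level framework's layers) is what makes profiling and simulator-based analysis tractable.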
Related papers
- UGSL: A Unified Framework for Benchmarking Graph Structure Learning [19.936173198345053]
We propose a benchmarking strategy for graph structure learning using a unified framework.
Our framework, called Unified Graph Structure Learning (UGSL), reformulates existing models into a single model.
Our results provide a clear and concise understanding of the different methods in this area as well as their strengths and weaknesses.
arXiv Detail & Related papers (2023-08-21T14:05:21Z) - Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present a code that successfully replicates results from six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z) - Characterizing the Efficiency of Graph Neural Network Frameworks with a
Magnifying Glass [10.839902229218577]
Graph neural networks (GNNs) have received great attention due to their success in various graph-related learning tasks.
Recent GNNs have been developed with different graph sampling techniques for mini-batch training of GNNs on large graphs.
It remains unknown how 'eco-friendly' these frameworks are from a green computing perspective.
arXiv Detail & Related papers (2022-11-06T04:22:19Z) - A Comprehensive Study on Large-Scale Graph Training: Benchmarking and
Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z) - NAS-Bench-Graph: Benchmarking Graph Neural Architecture Search [55.75621026447599]
We propose NAS-Bench-Graph, a tailored benchmark that supports unified, reproducible, and efficient evaluations for GraphNAS.
Specifically, we construct a unified, expressive yet compact search space, covering 26,206 unique graph neural network (GNN) architectures.
Based on our proposed benchmark, the performance of GNN architectures can be directly obtained by a look-up table without any further computation.
arXiv Detail & Related papers (2022-06-18T10:17:15Z) - Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with
Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
arXiv Detail & Related papers (2022-03-19T14:26:43Z) - TC-GNN: Bridging Sparse GNN Computation and Dense Tensor Cores on GPUs [21.63854538768414]
We propose TC-GNN, the first GNN framework based on GPU Tensor Core Units (TCUs).
The core idea is to reconcile the "Sparse" GNN computation with the high-performance "Dense" TCUs.
Rigorous experiments show an average 1.70x speedup over the state-of-the-art DGL framework.
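The reconciliation TC-GNN describes can be pictured as condensing the nonzero columns of a window of sparse adjacency rows into a compact dense tile that a tensor-core GEMM can consume. The sketch below is a hedged illustration of that idea only; the function name, window size, and tile layout are invented for the example and are not the paper's exact scheme.

```python
import numpy as np

def condense_window(adj_rows: np.ndarray):
    """Map a sparse row window onto a small dense tile over its nonzero columns.

    Only the columns that actually carry nonzeros are kept, so the tile
    is dense enough for a tensor-core-style GEMM to process efficiently.
    """
    nz_cols = np.unique(np.nonzero(adj_rows)[1])   # columns actually used
    tile = adj_rows[:, nz_cols]                    # compact dense tile
    return tile, nz_cols

# A 4-row window over 16 columns with only 3 distinct nonzero columns.
window = np.zeros((4, 16))
window[0, 3] = window[1, 3] = window[2, 9] = window[3, 14] = 1.0

tile, cols = condense_window(window)
print(tile.shape, cols.tolist())  # (4, 3) [3, 9, 14]
```

The 4x16 window shrinks to a 4x3 dense tile plus a column-index map, which is the shape of trade-off that lets dense hardware units serve sparse graph workloads.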
arXiv Detail & Related papers (2021-12-03T18:06:23Z) - Node Feature Extraction by Self-Supervised Multi-scale Neighborhood
Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z) - Node Masking: Making Graph Neural Networks Generalize and Scale Better [71.51292866945471]
Graph Neural Networks (GNNs) have received a lot of interest in recent times.
In this paper, we utilize theoretical tools to better visualize the operations performed by state-of-the-art spatial GNNs.
We introduce a simple concept, Node Masking, that allows them to generalize and scale better.
arXiv Detail & Related papers (2020-01-17T06:26:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.