Quiver: Supporting GPUs for Low-Latency, High-Throughput GNN Serving with Workload Awareness
- URL: http://arxiv.org/abs/2305.10863v1
- Date: Thu, 18 May 2023 10:34:23 GMT
- Title: Quiver: Supporting GPUs for Low-Latency, High-Throughput GNN Serving with Workload Awareness
- Authors: Zeyuan Tan, Xiulong Yuan, Congjie He, Man-Kit Sit, Guo Li, Xiaoze Liu,
Baole Ai, Kai Zeng, Peter Pietzuch, Luo Mai
- Abstract summary: Quiver is a distributed GPU-based GNN serving system with low latency and high throughput.
We show that Quiver achieves up to 35 times lower latency with an 8 times higher throughput compared to state-of-the-art GNN approaches.
- Score: 4.8412870364335925
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Systems for serving inference requests on graph neural networks (GNN) must
combine low latency with high throughput, but they face irregular computation
due to skew in the number of sampled graph nodes and aggregated GNN features.
This makes it challenging to exploit GPUs effectively: using GPUs to sample
only a few graph nodes yields lower performance than CPU-based sampling; and
aggregating many features exhibits high data movement costs between GPUs and
CPUs. Therefore, current GNN serving systems use CPUs for graph sampling and
feature aggregation, limiting throughput.
We describe Quiver, a distributed GPU-based GNN serving system with
low latency and high throughput. Quiver's key idea is to exploit workload
metrics for predicting the irregular computation of GNN requests, and governing
the use of GPUs for graph sampling and feature aggregation: (1) for graph
sampling, Quiver calculates the probabilistic sampled graph size, a metric that
predicts the degree of parallelism in graph sampling. Quiver uses this metric
to assign sampling tasks to GPUs only when the performance gains surpass
CPU-based sampling; and (2) for feature aggregation, Quiver relies on the
feature access probability to decide which features to partition and replicate
across a distributed GPU NUMA topology. We show that Quiver achieves up to 35
times lower latency with an 8 times higher throughput compared to
state-of-the-art GNN approaches (DGL and PyG).
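To make the two workload metrics concrete, the following Python sketch shows how a serving layer could (i) estimate a probabilistic sampled graph size from the batch size, fanouts, and average degree, routing sampling to the GPU only when the predicted parallelism is large enough, and (ii) use per-node feature access probabilities to decide which features to replicate on GPUs and which to partition. This is a minimal illustration of the ideas stated in the abstract, not Quiver's actual API; the function names, degree model, and threshold value are assumptions.

```python
import numpy as np

def probabilistic_sampled_graph_size(num_seeds, avg_degree, fanouts):
    """Expected number of nodes touched by multi-hop sampling: at every hop a
    node contributes at most `fanout` neighbours, but no more than its
    (average) degree. Hypothetical model of the metric named in the abstract."""
    frontier = float(num_seeds)
    total = frontier
    for fanout in fanouts:
        frontier *= min(fanout, avg_degree)
        total += frontier
    return total

def choose_sampling_device(num_seeds, avg_degree, fanouts, gpu_threshold=50_000):
    """Send the sampling task to the GPU only when the predicted amount of
    parallel work is large enough to beat CPU sampling (the threshold is an
    assumed tuning knob, not a value from the paper)."""
    predicted = probabilistic_sampled_graph_size(num_seeds, avg_degree, fanouts)
    return "gpu" if predicted >= gpu_threshold else "cpu"

def plan_feature_placement(access_prob, gpu_budget):
    """Replicate the hottest node features on every GPU and partition the
    rest across the NUMA topology, guided by feature access probability."""
    order = np.argsort(-access_prob)              # most frequently accessed first
    return order[:gpu_budget], order[gpu_budget:] # (replicate, partition)

# Toy usage with made-up numbers.
print(choose_sampling_device(num_seeds=1024, avg_degree=15, fanouts=[15, 10, 5]))
hot, cold = plan_feature_placement(np.random.rand(100_000), gpu_budget=20_000)
print(len(hot), len(cold))
```

The point of the routing check is the one made in the abstract: a small predicted sample cannot saturate GPU threads, so launch and transfer overheads dominate and CPU-based sampling wins.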
Related papers
- Distributed Matrix-Based Sampling for Graph Neural Network Training [0.0]
We propose a matrix-based bulk sampling approach that expresses sampling as a sparse matrix multiplication (SpGEMM) and samples multiple minibatches at once.
When the input graph topology does not fit on a single device, our method distributes the graph and uses communication-avoiding SpGEMM algorithms to scale GNN minibatch sampling.
In addition to new methods for sampling, we introduce a pipeline that uses our matrix-based bulk sampling approach to provide end-to-end training results.
arXiv Detail & Related papers (2023-11-06T06:40:43Z)
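For the matrix-based bulk sampling approach in the entry above, here is a minimal sketch of how one hop of neighbourhood expansion can be written as a sparse selection matrix multiplied by the adjacency matrix (an SpGEMM). The paper additionally restricts each row to a fixed number of sampled neighbours, batches many minibatches at once, and distributes the matrices; none of that is shown, and the helper name is an assumption.

```python
import numpy as np
import scipy.sparse as sp

def one_hop_frontier(seed_ids, adj):
    """One hop of neighbourhood expansion as an SpGEMM: a sparse seed-selection
    matrix (batch x num_nodes) multiplied by the adjacency matrix."""
    num_nodes = adj.shape[0]
    sel = sp.csr_matrix(
        (np.ones(len(seed_ids)), (np.arange(len(seed_ids)), seed_ids)),
        shape=(len(seed_ids), num_nodes),
    )
    product = sel @ adj                 # row i holds the neighbours of seed i
    return np.unique(product.indices)   # union of neighbours across the batch

# Toy graph: a 5-node ring, stored as a CSR adjacency matrix.
n = 5
rows = list(range(n)) + [(i + 1) % n for i in range(n)]
cols = [(i + 1) % n for i in range(n)] + list(range(n))
adj = sp.csr_matrix((np.ones(2 * n), (rows, cols)), shape=(n, n))

print(one_hop_frontier(np.array([0, 2]), adj))   # neighbours of seeds 0 and 2
```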
- BatchGNN: Efficient CPU-Based Distributed GNN Training on Very Large Graphs [2.984386665258243]
BatchGNN is a distributed CPU system that showcases techniques to efficiently train GNNs on terabyte-sized graphs.
BatchGNN achieves an average $3\times$ speedup over DistDGL on three GNN models trained on OGBN graphs.
arXiv Detail & Related papers (2023-06-23T23:25:34Z)
- Communication-Efficient Graph Neural Networks with Probabilistic Neighborhood Expansion Analysis and Caching [59.8522166385372]
Training and inference with graph neural networks (GNNs) on massive graphs has been actively studied since the inception of GNNs.
This paper is concerned with minibatch training and inference with GNNs that employ node-wise sampling in distributed settings.
We present SALIENT++, which extends the prior state-of-the-art SALIENT system to work with partitioned feature data.
arXiv Detail & Related papers (2023-05-04T21:04:01Z)
- BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing [0.0]
Graph neural networks (GNNs) have extended the success of deep neural networks (DNNs) to non-Euclidean graph data.
Existing systems are inefficient at training large graphs with billions of nodes and edges using GPUs.
This paper proposes BGL, a distributed GNN training system designed to address the bottlenecks with a few key ideas.
arXiv Detail & Related papers (2021-12-16T00:37:37Z)
- Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining [58.10436813430554]
Mini-batch training of graph neural networks (GNNs) requires a lot of computation and data movement.
We argue in favor of performing mini-batch training with neighborhood sampling in a distributed multi-GPU environment.
We present a sequence of improvements to mitigate these bottlenecks, including a performance-engineered neighborhood sampler.
We also conduct an empirical analysis that supports the use of sampling for inference, showing that test accuracies are not materially compromised.
arXiv Detail & Related papers (2021-10-16T02:41:35Z)
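As a reference point for the neighbourhood-sampling approach discussed in the entry above, here is a minimal uniform node-wise sampler. The performance-engineered sampler in that paper is far more elaborate; the adjacency-list format and fanout values below are illustrative assumptions only.

```python
import random

def sample_neighborhood(adj, seeds, fanouts, rng=random):
    """Uniform node-wise neighbourhood sampling: at every hop keep at most
    `fanout` random neighbours of each node in the current frontier, and
    return the union of sampled nodes (the minibatch's computation graph)."""
    sampled = set(seeds)
    frontier = list(seeds)
    for fanout in fanouts:
        next_frontier = []
        for v in frontier:
            neighbours = adj.get(v, [])
            if len(neighbours) > fanout:
                neighbours = rng.sample(neighbours, fanout)
            next_frontier.extend(neighbours)
        frontier = next_frontier
        sampled.update(frontier)
    return sampled

# Toy adjacency list; the same routine serves training minibatches and, as the
# paper above argues, sampled inference.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(sample_neighborhood(adj, seeds=[0], fanouts=[2, 2]))
```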
- Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs [26.074384252289384]
Graph neural networks (GNNs) are powerful tools for learning from graph data and are widely used in various applications.
Although a number of sampling-based methods have been proposed to enable mini-batch training on large graphs, these methods have not been proven to work on truly industry-scale graphs.
We propose Global Neighborhood Sampling, which aims at training GNNs on giant graphs specifically for mixed CPU-GPU training.
arXiv Detail & Related papers (2021-06-11T03:30:25Z)
- VersaGNN: a Versatile accelerator for Graph neural networks [81.1667080640009]
We propose VersaGNN, an ultra-efficient, systolic-array-based versatile hardware accelerator.
VersaGNN achieves on average 3712$\times$ speedup with 1301.25$\times$ energy reduction on CPU, and 35.4$\times$ speedup with 17.66$\times$ energy reduction on GPU.
arXiv Detail & Related papers (2021-05-04T04:10:48Z)
- DistGNN: Scalable Distributed Training for Large-Scale Graph Neural Networks [58.48833325238537]
Full-batch training on Graph Neural Networks (GNN) to learn the structure of large graphs is a critical problem that needs to scale to hundreds of compute nodes to be feasible.
In this paper, we present DistGNN, which optimizes the well-known Deep Graph Library (DGL) for full-batch training on CPU clusters.
Our results on four common GNN benchmark datasets show up to 3.7x speed-up using a single CPU socket and up to 97x speed-up using 128 CPU sockets.
arXiv Detail & Related papers (2021-04-14T08:46:35Z)
- Accelerating Graph Sampling for Graph Machine Learning using GPUs [2.9383911860380127]
NextDoor is a system designed to perform graph sampling on GPU resources.
NextDoor employs a new approach to graph sampling that we call transit-parallelism.
We implement several graph sampling applications, and show that NextDoor runs them orders of magnitude faster than existing systems.
arXiv Detail & Related papers (2020-09-14T19:03:33Z)
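A rough Python sketch of the transit-parallel idea described in the NextDoor entry above: work is grouped by the vertex whose adjacency list is being read (the transit) rather than by sample, which is what lets the GPU coalesce memory accesses. The data structures and function name here are hypothetical; on a GPU this grouping maps to thread blocks, not Python dictionaries.

```python
import random
from collections import defaultdict

def transit_parallel_step(adj, frontiers, fanout, rng=random):
    """One sampling step organised the transit-parallel way: every sample that
    expands the same transit vertex is served from a single adjacency-list
    lookup, instead of each sample walking the graph independently."""
    by_transit = defaultdict(list)              # transit vertex -> sample ids
    for sample_id, frontier in enumerate(frontiers):
        for v in frontier:
            by_transit[v].append(sample_id)

    next_frontiers = [[] for _ in frontiers]
    for transit, sample_ids in by_transit.items():
        neighbours = adj.get(transit, [])       # read the adjacency list once
        for sample_id in sample_ids:
            k = min(fanout, len(neighbours))
            next_frontiers[sample_id].extend(rng.sample(neighbours, k))
    return next_frontiers

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
print(transit_parallel_step(adj, frontiers=[[0], [1, 2]], fanout=2))
```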
- Scaling Graph Neural Networks with Approximate PageRank [64.92311737049054]
We present the PPRGo model which utilizes an efficient approximation of information diffusion in GNNs.
In addition to being faster, PPRGo is inherently scalable, and can be trivially parallelized for large datasets like those found in industry settings.
We show that training PPRGo and predicting labels for all nodes in this graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph.
arXiv Detail & Related papers (2020-07-03T09:30:07Z)
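PPRGo's efficient approximation of information diffusion builds on sparse personalized PageRank vectors; a minimal forward-push approximation in the Andersen-Chung-Lang style is sketched below. The teleport probability, tolerance, and adjacency format are illustrative assumptions, not PPRGo's implementation.

```python
from collections import defaultdict, deque

def approx_ppr(adj, source, alpha=0.15, eps=1e-4):
    """Forward-push approximation of a personalized PageRank vector: push
    residual mass from a node to its neighbours until every residual falls
    below eps * degree. Returns a sparse dict node -> PPR mass."""
    p = defaultdict(float)          # approximate PPR values
    r = defaultdict(float)          # residual mass still to be pushed
    r[source] = 1.0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        deg_u = max(len(adj.get(u, [])), 1)
        if r[u] < eps * deg_u:
            continue                # residual shrank below threshold; skip
        mass = r[u]
        p[u] += alpha * mass
        r[u] = 0.0
        share = (1.0 - alpha) * mass / deg_u
        for v in adj.get(u, []):
            threshold = eps * max(len(adj.get(v, [])), 1)
            was_small = r[v] < threshold
            r[v] += share
            if was_small and r[v] >= threshold:
                queue.append(v)     # v now has enough residual to push
    return dict(p)

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(approx_ppr(adj, source=0))
```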
- Fast Graph Attention Networks Using Effective Resistance Based Graph Sparsification [70.50751397870972]
FastGAT is a method to make attention-based GNNs lightweight by using spectral sparsification to generate an optimal pruning of the input graph.
We experimentally evaluate FastGAT on several large real world graph datasets for node classification tasks.
arXiv Detail & Related papers (2020-06-15T22:07:54Z)
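To illustrate the effective-resistance-based sparsification that FastGAT relies on, the sketch below computes exact effective resistances from the Laplacian pseudoinverse of a toy graph and keeps a subsample of edges with probability proportional to resistance. Real spectral sparsifiers approximate these quantities and reweight the retained edges; the keep_fraction parameter and helper names are assumptions for illustration.

```python
import numpy as np

def effective_resistances(num_nodes, edges):
    """Effective resistance of every edge, computed exactly from the Laplacian
    pseudoinverse (fine for a toy graph; large-scale sparsifiers approximate
    these values instead)."""
    L = np.zeros((num_nodes, num_nodes))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    Lp = np.linalg.pinv(L)
    return np.array([Lp[u, u] + Lp[v, v] - 2.0 * Lp[u, v] for u, v in edges])

def sparsify(num_nodes, edges, keep_fraction=0.5, seed=0):
    """Keep a subsample of edges drawn with probability proportional to
    effective resistance; high-resistance edges (e.g. bridges) are the
    structurally important ones and are kept preferentially."""
    rng = np.random.default_rng(seed)
    resistances = effective_resistances(num_nodes, edges)
    probs = resistances / resistances.sum()
    k = max(1, int(keep_fraction * len(edges)))
    kept = rng.choice(len(edges), size=k, replace=False, p=probs)
    return [edges[i] for i in kept]

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # a triangle plus a pendant edge
print(sparsify(4, edges, keep_fraction=0.75))
```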