Communication-Efficient Graph Neural Networks with Probabilistic
Neighborhood Expansion Analysis and Caching
- URL: http://arxiv.org/abs/2305.03152v1
- Date: Thu, 4 May 2023 21:04:01 GMT
- Title: Communication-Efficient Graph Neural Networks with Probabilistic
Neighborhood Expansion Analysis and Caching
- Authors: Tim Kaler, Alexandros-Stavros Iliopoulos, Philip Murzynowski, Tao B.
Schardl, Charles E. Leiserson, Jie Chen
- Abstract summary: Training and inference with graph neural networks (GNNs) on massive graphs has been actively studied since the inception of GNNs.
This paper is concerned with minibatch training and inference with GNNs that employ node-wise sampling in distributed settings.
We present SALIENT++, which extends the prior state-of-the-art SALIENT system to work with partitioned feature data.
- Score: 59.8522166385372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training and inference with graph neural networks (GNNs) on massive graphs
has been actively studied since the inception of GNNs, owing to the widespread
use and success of GNNs in applications such as recommendation systems and
financial forensics. This paper is concerned with minibatch training and
inference with GNNs that employ node-wise sampling in distributed settings,
where the necessary partitioning of vertex features across distributed storage
causes feature communication to become a major bottleneck that hampers
scalability. To significantly reduce the communication volume without
compromising prediction accuracy, we propose a policy for caching data
associated with frequently accessed vertices in remote partitions. The proposed
policy is based on an analysis of vertex-wise inclusion probabilities (VIP)
during multi-hop neighborhood sampling, which may expand the neighborhood far
beyond the partition boundaries of the graph. VIP analysis not only enables the
elimination of the communication bottleneck, but it also offers a means to
organize in-memory data by prioritizing GPU storage for the most frequently
accessed vertex features. We present SALIENT++, which extends the prior
state-of-the-art SALIENT system to work with partitioned feature data and
leverages the VIP-driven caching policy. SALIENT++ retains the local training
efficiency and scalability of SALIENT by using a deep pipeline and drastically
reducing communication volume while consuming only a fraction of the storage
required by SALIENT. We provide experimental results with the Open Graph
Benchmark data sets and demonstrate that training a 3-layer GraphSAGE model
with SALIENT++ on 8 single-GPU machines is 7.1x faster than with SALIENT on one
single-GPU machine, and 12.7x faster than with DistDGL on 8 single-GPU machines.
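To make the cache-selection idea concrete, here is a minimal, hypothetical Python sketch (not the SALIENT++ implementation): it estimates per-vertex access frequencies by simulating multi-hop node-wise neighborhood sampling from local training seeds, then selects the most frequently accessed remote-partition vertices for caching. The function names, the dict-based graph representation, and the simulation-based estimate are illustrative assumptions; the paper itself derives vertex-wise inclusion probabilities through its probabilistic neighborhood-expansion analysis rather than by simulation.

```python
# Hypothetical sketch of VIP-style cache selection; names and data layout are
# illustrative assumptions, not the SALIENT++ API.
import random
from collections import Counter

def estimate_vip(adj, seeds, fanouts, num_trials=10):
    """Empirically estimate how often each vertex is reached during
    multi-hop node-wise neighborhood sampling from the given seeds."""
    counts = Counter()
    for _ in range(num_trials):
        frontier = list(seeds)
        for fanout in fanouts:                      # one hop per GNN layer
            next_frontier = []
            for v in frontier:
                nbrs = adj.get(v, [])
                sampled = random.sample(nbrs, min(fanout, len(nbrs)))
                counts.update(sampled)
                next_frontier.extend(sampled)
            frontier = next_frontier
    norm = num_trials * max(len(seeds), 1)
    return {v: c / norm for v, c in counts.items()}  # access frequency per vertex

def select_cache(vip, partition_of, local_part, capacity):
    """Pick the remote-partition vertices with the highest access frequency."""
    remote = [(freq, v) for v, freq in vip.items() if partition_of[v] != local_part]
    remote.sort(reverse=True)
    return {v for _, v in remote[:capacity]}
```

As a usage illustration, a 3-layer GraphSAGE model might use per-layer fanouts such as [15, 10, 5]; `select_cache(estimate_vip(adj, local_seeds, [15, 10, 5]), partition_of, rank, capacity)` would then return the set of remote vertex IDs whose features are worth replicating locally.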
Related papers
- MassiveGNN: Efficient Training via Prefetching for Massively Connected Distributed Graphs [11.026326555186333]
This paper develops a parameterized continuous prefetch and eviction scheme on top of the state-of-the-art Amazon DistDGL distributed GNN framework.
It demonstrates about 15-40% improvement in end-to-end training performance on the National Energy Research Scientific Computing Center's (NERSC) Perlmutter supercomputer.
arXiv Detail & Related papers (2024-10-30T05:10:38Z)
- LSM-GNN: Large-scale Storage-based Multi-GPU GNN Training by Optimizing Data Transfer Scheme [12.64360444043247]
Graph Neural Networks (GNNs) are widely used today in recommendation systems, fraud detection, and node/link classification tasks.
To address limited memory capacities, traditional GNN training approaches use graph partitioning and sharding techniques.
We propose LSM-GNN, a Large-scale Storage-based Multi-GPU GNN framework.
LSM-GNN incorporates a hybrid eviction policy that manages cache space using both static and dynamic node information (a sketch of such a policy appears after this list).
arXiv Detail & Related papers (2024-07-21T20:41:39Z)
- NeuraChip: Accelerating GNN Computations with a Hash-based Decoupled Spatial Accelerator [3.926150707772004]
We introduce NeuraChip, a novel GNN spatial accelerator based on Gustavson's algorithm.
NeuraChip decouples the multiplication and addition computations in sparse matrix multiplication (a sketch of this decoupling appears after this list).
We also present NeuraSim, an open-source, cycle-accurate, multi-threaded, modular simulator for comprehensive performance analysis.
arXiv Detail & Related papers (2024-04-23T20:51:09Z)
- Graph Transformers for Large Graphs [57.19338459218758]
This work advances representation learning on single large-scale graphs with a focus on identifying model characteristics and critical design constraints.
A key innovation of this work lies in the creation of a fast neighborhood sampling technique coupled with a local attention mechanism.
We report a 3x speedup and 16.8% performance gain on ogbn-products and snap-patents, while we also scale LargeGT to ogbn-papers100M with a 5.9% performance improvement.
arXiv Detail & Related papers (2023-12-18T11:19:23Z)
- EGRC-Net: Embedding-induced Graph Refinement Clustering Network [66.44293190793294]
We propose a novel graph clustering network called Embedding-Induced Graph Refinement Clustering Network (EGRC-Net).
EGRC-Net effectively utilizes the learned embedding to adaptively refine the initial graph and enhance the clustering performance.
Our proposed methods consistently outperform several state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-19T09:08:43Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing [0.0]
Graph neural networks (GNNs) have extended the success of deep neural networks (DNNs) to non-Euclidean graph data.
Existing systems are inefficient at training large graphs with billions of nodes and edges on GPUs.
This paper proposes BGL, a distributed GNN training system designed to address the bottlenecks with a few key ideas.
arXiv Detail & Related papers (2021-12-16T00:37:37Z)
- Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining [58.10436813430554]
Mini-batch training of graph neural networks (GNNs) requires a lot of computation and data movement.
We argue in favor of performing mini-batch training with neighborhood sampling in a distributed multi-GPU environment.
We present a sequence of improvements to mitigate these bottlenecks, including a performance-engineered neighborhood sampler.
We also conduct an empirical analysis that supports the use of sampling for inference, showing that test accuracies are not materially compromised.
arXiv Detail & Related papers (2021-10-16T02:41:35Z)
- GNNIE: GNN Inference Engine with Load-balancing and Graph-Specific Caching [2.654276707313136]
GNNIE is an accelerator designed to run a broad range of Graph Neural Networks (GNNs).
It tackles workload imbalance by (i) splitting node feature operands into blocks, (ii) reordering and redistributing computations, and (iii) using a flexible MAC architecture with low communication overheads among the processing elements.
GNNIE achieves average speedups of over 8890x over a CPU and 295x over a GPU across multiple datasets on graph attention networks (GATs), graph convolutional networks (GCNs), GraphSAGE, GINConv, and DiffPool.
arXiv Detail & Related papers (2021-05-21T20:07:14Z)
- DistGNN: Scalable Distributed Training for Large-Scale Graph Neural Networks [58.48833325238537]
Full-batch training of graph neural networks (GNNs) to learn the structure of large graphs is a critical problem that needs to scale to hundreds of compute nodes to be feasible.
In this paper, we present DistGNN, which optimizes the well-known Deep Graph Library (DGL) for full-batch training on CPU clusters.
Our results on four common GNN benchmark datasets show up to 3.7x speed-up using a single CPU socket and up to 97x speed-up using 128 CPU sockets.
arXiv Detail & Related papers (2021-04-14T08:46:35Z)
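As referenced in the LSM-GNN entry above, a hybrid eviction policy can rank cached nodes by combining static and dynamic information. Below is a minimal, hypothetical Python sketch of one such scoring rule; the feature choice (degree and recent access counts), the weighting alpha, and all names are assumptions for illustration, not LSM-GNN's actual policy.

```python
# Hypothetical hybrid cache-eviction scoring: combine static node information
# (here, degree) with dynamic node information (here, recent access counts).
# Weights, features, and names are illustrative assumptions.
def hybrid_score(degree, recent_accesses, alpha=0.5):
    """Higher score -> more valuable to keep cached."""
    return alpha * degree + (1.0 - alpha) * recent_accesses

def evict_one(cache, degrees, access_counts):
    """Evict the cached node with the lowest hybrid score and return it."""
    victim = min(cache, key=lambda v: hybrid_score(degrees[v], access_counts[v]))
    cache.remove(victim)
    return victim
```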
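As referenced in the NeuraChip entry above, Gustavson's algorithm computes sparse matrix products row by row, which makes it natural to separate the multiplication of nonzeros from their accumulation. The sketch below illustrates that decoupling in plain Python under an assumed dict-of-dicts matrix layout; it is a conceptual illustration, not NeuraChip's hardware dataflow.

```python
# Hypothetical sketch of Gustavson-style SpGEMM with multiply and accumulate
# kept as separate phases. Sparse matrices are dicts of dicts: A[i][k] = value.
def spgemm_gustavson(A, B):
    # Multiply phase: emit (row, col, value) partial products.
    partials = []
    for i, row_a in A.items():
        for k, a_ik in row_a.items():
            for j, b_kj in B.get(k, {}).items():
                partials.append((i, j, a_ik * b_kj))
    # Accumulate phase: hash-based reduction of partial products into C.
    C = {}
    for i, j, val in partials:
        row = C.setdefault(i, {})
        row[j] = row.get(j, 0.0) + val
    return C
```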