DGC: Training Dynamic Graphs with Spatio-Temporal Non-Uniformity using
Graph Partitioning by Chunks
- URL: http://arxiv.org/abs/2309.03523v1
- Date: Thu, 7 Sep 2023 07:12:59 GMT
- Title: DGC: Training Dynamic Graphs with Spatio-Temporal Non-Uniformity using
Graph Partitioning by Chunks
- Authors: Fahao Chen, Peng Li, Celimuge Wu
- Abstract summary: Dynamic Graph Neural Network (DGNN) has shown a strong capability of learning dynamic graphs by exploiting both spatial and temporal features.
We propose DGC, a distributed DGNN training system that achieves a 1.25x - 7.52x speedup over the state-of-the-art in our testbed.
- Score: 13.279145021338534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic Graph Neural Network (DGNN) has shown a strong capability of learning
dynamic graphs by exploiting both spatial and temporal features. Although DGNN
has recently received considerable attention from the AI community and various DGNN
models have been proposed, building a distributed system for efficient DGNN
training is still challenging. It has been well recognized that how to
partition the dynamic graph and assign workloads to multiple GPUs plays a
critical role in training acceleration. Existing works partition a dynamic
graph into snapshots or temporal sequences, which works well only when the graph
has uniform spatio-temporal structures. However, dynamic graphs in practice are
not uniformly structured, with some snapshots being very dense while others are
sparse. To address this issue, we propose DGC, a distributed DGNN training
system that achieves a 1.25x - 7.52x speedup over the state-of-the-art in our
testbed. DGC's success stems from a new graph partitioning method that
partitions dynamic graphs into chunks, which are essentially subgraphs with
modest training workloads and few interconnections. This partitioning
algorithm is based on graph coarsening, which can run very fast on large
graphs. In addition, DGC has a highly efficient runtime, powered by the
proposed chunk fusion and adaptive stale aggregation techniques. Extensive
experimental results on 3 typical DGNN models and 4 popular dynamic graph
datasets are presented to show the effectiveness of DGC.
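The abstract names DGC's key ingredients (coarsening-based chunk partitioning, chunk fusion, and adaptive stale aggregation) without spelling them out. As a rough illustration of the partitioning idea only, and not DGC's actual algorithm, the following Python sketch coarsens a graph by heavy-edge matching and then greedily packs the resulting super-nodes into chunks with bounded training workload; `coarsen`, `partition_into_chunks`, and the `capacity` parameter are all invented for illustration.

```python
# Hypothetical sketch of coarsening-based chunk partitioning (NOT DGC's
# actual algorithm): greedily match the endpoints of heavy edges to form
# super-nodes, then pack super-nodes into chunks with bounded workload.
from collections import defaultdict

def coarsen(edges):
    """One round of heavy-edge matching; returns node -> super-node id."""
    weight = defaultdict(int)
    for u, v in edges:
        weight[(min(u, v), max(u, v))] += 1      # parallel edges add weight
    merged = {}
    for (u, v), _ in sorted(weight.items(), key=lambda kv: -kv[1]):
        if u not in merged and v not in merged:  # both still unmatched
            merged[u] = merged[v] = u            # collapse v into u
    return merged

def partition_into_chunks(nodes, workload, merged, capacity):
    """Greedily pack super-nodes into chunks, capping per-chunk workload."""
    groups = defaultdict(list)
    for n in nodes:
        groups[merged.get(n, n)].append(n)
    chunks, current, load = [], [], 0
    for group in groups.values():
        cost = sum(workload[n] for n in group)
        if current and load + cost > capacity:   # chunk is full, start anew
            chunks.append(current)
            current, load = [], 0
        current += group
        load += cost
    if current:
        chunks.append(current)
    return chunks

edges = [(0, 1), (0, 1), (1, 2), (3, 4), (4, 5), (0, 2)]
workload = {n: 1 for n in range(6)}              # uniform toy training cost
chunks = partition_into_chunks(range(6), workload, coarsen(edges), capacity=3)
print(chunks)                                    # e.g. [[0, 1, 2], [3, 4, 5]]
```

Because each matching round operates on an ever-smaller graph, coarsening-style partitioners of this kind run quickly even on large inputs, which is consistent with the speed claim in the abstract.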
Related papers
- RobGC: Towards Robust Graph Condensation [61.259453496191696]
Graph neural networks (GNNs) have attracted widespread attention for their impressive capability of graph representation learning.
However, the increasing prevalence of large-scale graphs presents a significant challenge for GNN training due to their computational demands.
We propose graph condensation (GC) to generate an informative compact graph that enables efficient training of GNNs while retaining performance.
arXiv Detail & Related papers (2024-06-19T04:14:57Z) - Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z) - GNNFlow: A Distributed Framework for Continuous Temporal GNN Learning on
- GNNFlow: A Distributed Framework for Continuous Temporal GNN Learning on Dynamic Graphs [11.302970701867844]
We introduce GNNFlow, a distributed framework for efficient continuous temporal graph representation learning.
GNNFlow supports distributed training across multiple machines with static scheduling to ensure load balance.
Our experimental results show that GNNFlow provides up to 21.1x faster continuous learning than existing systems.
arXiv Detail & Related papers (2023-11-29T07:30:32Z) - Communication-Free Distributed GNN Training with Vertex Cut [63.22674903170953]
- Communication-Free Distributed GNN Training with Vertex Cut [63.22674903170953]
CoFree-GNN is a novel distributed GNN training framework that significantly speeds up the training process by implementing communication-free training.
We demonstrate that CoFree-GNN speeds up the GNN training process by up to 10 times over the existing state-of-the-art GNN training approaches.
arXiv Detail & Related papers (2023-08-06T21:04:58Z) - Decoupled Graph Neural Networks for Large Dynamic Graphs [14.635923016087503]
- Decoupled Graph Neural Networks for Large Dynamic Graphs [14.635923016087503]
We propose a decoupled graph neural network for large dynamic graphs.
We show that our algorithm achieves state-of-the-art performance in both kinds of dynamic graphs.
arXiv Detail & Related papers (2023-05-14T23:00:10Z) - Dynamic Graph Node Classification via Time Augmentation [15.580277876084873]
We propose the Time Augmented Graph Dynamic Neural Network (TADGNN) framework for node classification on dynamic graphs.
TADGNN consists of two modules: 1) a time augmentation module that structurally captures the temporal evolution of nodes across time, creating a time-augmented temporal graph, and 2) an information propagation module that learns the dynamic representations for each node across time using the constructed time-augmented graph.
Experimental results demonstrate that TADGNN framework outperforms several static and dynamic state-of-the-art (SOTA) GNN models while demonstrating superior scalability.
arXiv Detail & Related papers (2022-12-07T04:13:23Z) - Instant Graph Neural Networks for Dynamic Graphs [18.916632816065935]
- Instant Graph Neural Networks for Dynamic Graphs [18.916632816065935]
We propose Instant Graph Neural Network (InstantGNN), an incremental approach for computing the graph representation matrix of dynamic graphs.
Our method avoids time-consuming, repetitive computations and allows instant updates to the representations and instant predictions.
Our model achieves state-of-the-art accuracy while having orders-of-magnitude higher efficiency than existing methods.
arXiv Detail & Related papers (2022-06-03T03:27:42Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Efficient Dynamic Graph Representation Learning at Scale [66.62859857734104]
We propose Efficient Dynamic Graph lEarning (EDGE), which selectively expresses certain temporal dependencies via the training loss to improve parallelism in computation.
We show that EDGE can scale to dynamic graphs with millions of nodes and hundreds of millions of temporal events and achieve new state-of-the-art (SOTA) performance.
arXiv Detail & Related papers (2021-12-14T22:24:53Z) - Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low-pass characteristics of balanced graph cut, we propose a new GNN variant named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z) - K-Core based Temporal Graph Convolutional Network for Dynamic Graphs [19.237377882738063]
We propose a novel k-core based temporal graph convolutional network, the CTGCN, to learn node representations for dynamic graphs.
In contrast to previous dynamic graph embedding methods, CTGCN can preserve both local connective proximity and global structural similarity.
Experimental results on 7 real-world graphs demonstrate that the CTGCN outperforms existing state-of-the-art graph embedding methods in several tasks.
arXiv Detail & Related papers (2020-03-22T14:15:27Z)