NeutronStream: A Dynamic GNN Training Framework with Sliding Window for Graph Streams
- URL: http://arxiv.org/abs/2312.02473v1
- Date: Tue, 5 Dec 2023 03:58:05 GMT
- Title: NeutronStream: A Dynamic GNN Training Framework with Sliding Window for Graph Streams
- Authors: Chaoyi Chen, Dechao Gao, Yanfeng Zhang, Qiange Wang, Zhenbo Fu,
Xuecang Zhang, Junhua Zhu, Yu Gu, Ge Yu
- Abstract summary: NeutronStream is a framework for training dynamic Graph Neural Network (GNN) models.
It captures both the spatial and temporal dependencies of graph updates.
NeutronStream achieves speedups ranging from 1.48X to 5.87X and an average accuracy improvement of 3.97%.
- Score: 12.365456024506901
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing Graph Neural Network (GNN) training frameworks have been designed to
help developers easily create performant GNN implementations. However, most
existing GNN frameworks assume that the input graphs are static, ignoring
that most real-world graphs are constantly evolving. Though many dynamic GNN
models have emerged to learn from evolving graphs, the training process of
these dynamic GNNs is dramatically different from traditional GNNs in that it
captures both the spatial and temporal dependencies of graph updates. This
poses new challenges for designing dynamic GNN training frameworks. First, the
traditional batched training method fails to capture real-time structural
evolution information. Second, the time-dependent nature of graph updates makes
parallel training hard to design. Third, existing frameworks lack the system
support users need to efficiently implement dynamic GNNs. In this paper, we
present NeutronStream, a
framework for training dynamic GNN models. NeutronStream abstracts the input
dynamic graph into a chronologically updated stream of events and processes the
stream with an optimized sliding window to incrementally capture the
spatial-temporal dependencies of events. Furthermore, NeutronStream provides a
parallel execution engine that tackles the sequential event-processing
challenge to achieve high performance. NeutronStream also integrates a built-in graph
storage structure that supports dynamic updates and provides a set of
easy-to-use APIs that allow users to express their dynamic GNNs. Our
experimental results demonstrate that, compared to state-of-the-art dynamic GNN
implementations, NeutronStream achieves speedups ranging from 1.48X to 5.87X
and an average accuracy improvement of 3.97%.
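To make the abstraction concrete, below is a minimal Python sketch of the sliding-window training loop the abstract describes. All names here (`Event`, `train_on_stream`, `train_step`) are illustrative assumptions, not NeutronStream's actual API, and the fixed-size window stands in for the paper's optimized (adaptively sized) one.

```python
from collections import deque
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Event:
    """One timestamped graph update, e.g. an edge insertion."""
    src: int
    dst: int
    timestamp: float

def train_on_stream(events: Iterable[Event],
                    window_size: int,
                    slide: int,
                    train_step: Callable[[List[Event]], None]) -> None:
    """Feed a chronological event stream to a model via a sliding window.

    Each full window of `window_size` events is handed to `train_step`,
    which can learn the spatial-temporal dependencies among the events it
    contains; the window then slides forward by `slide` events, so
    consecutive windows overlap and dependencies that straddle a window
    boundary are still seen together.
    """
    window: deque = deque(maxlen=window_size)
    arrived_since_step = 0
    for event in events:  # events are assumed to arrive in timestamp order
        window.append(event)
        arrived_since_step += 1
        if len(window) == window_size and arrived_since_step >= slide:
            train_step(list(window))  # one incremental training update
            arrived_since_step = 0
```

In the paper's design, the parallel execution engine additionally detects events within a window that touch disjoint sets of nodes and processes them concurrently; that dependency analysis, like the adaptive window sizing, is omitted from this sketch.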
Related papers
- D3-GNN: Dynamic Distributed Dataflow for Streaming Graph Neural Networks [2.3283463706065763]
Graph Neural Network (GNN) models on streaming graphs entail algorithmic challenges in continuously capturing their dynamic state.
We present D3-GNN, the first distributed, hybrid-parallel, streaming GNN system designed to handle real-time graph updates under an online query setting.
arXiv Detail & Related papers (2024-09-10T11:00:43Z)
- Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of DGNNs, it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z)
- GNNFlow: A Distributed Framework for Continuous Temporal GNN Learning on Dynamic Graphs [11.302970701867844]
We introduce GNNFlow, a distributed framework for efficient continuous temporal graph representation learning.
GNNFlow supports distributed training across multiple machines with static scheduling to ensure load balance.
Our experimental results show that GNNFlow provides up to 21.1x faster continuous learning than existing systems.
arXiv Detail & Related papers (2023-11-29T07:30:32Z)
- How Graph Neural Networks Learn: Lessons from Training Dynamics [80.41778059014393]
We study the training dynamics of graph neural networks (GNNs) in function space.
We find that the gradient descent optimization of GNNs implicitly leverages the graph structure to update the learned function.
This finding offers new interpretable insights into when and why the learned GNN functions generalize.
arXiv Detail & Related papers (2023-10-08T10:19:56Z)
- DGC: Training Dynamic Graphs with Spatio-Temporal Non-Uniformity using Graph Partitioning by Chunks [13.279145021338534]
Dynamic Graph Neural Network (DGNN) has shown a strong capability of learning dynamic graphs by exploiting both spatial and temporal features.
We propose DGC, a distributed DGNN training system that achieves a 1.25x - 7.52x speedup over the state-of-the-art in our testbed.
arXiv Detail & Related papers (2023-09-07T07:12:59Z)
- Communication-Free Distributed GNN Training with Vertex Cut [63.22674903170953]
CoFree-GNN is a novel distributed GNN training framework that significantly speeds up the training process by implementing communication-free training.
We demonstrate that CoFree-GNN speeds up the GNN training process by up to 10 times over the existing state-of-the-art GNN training approaches.
arXiv Detail & Related papers (2023-08-06T21:04:58Z)
- LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation [51.552170474958736]
We propose to capture long-distance dependencies in graphs with shallower models instead of deeper ones, which leads to a much more efficient model, LazyGNN, for graph representation learning.
LazyGNN is compatible with existing scalable approaches (such as sampling methods) for further acceleration through the development of mini-batch LazyGNN.
Comprehensive experiments demonstrate its superior prediction performance and scalability on large-scale benchmarks.
arXiv Detail & Related papers (2023-02-03T02:33:07Z)
- Characterizing the Efficiency of Graph Neural Network Frameworks with a Magnifying Glass [10.839902229218577]
Graph neural networks (GNNs) have received great attention due to their success in various graph-related learning tasks.
Recent GNN frameworks have been developed with different graph sampling techniques for mini-batch training on large graphs.
It remains unknown, however, how 'eco-friendly' these frameworks are from a green-computing perspective.
arXiv Detail & Related papers (2022-11-06T04:22:19Z)
- ROLAND: Graph Learning Framework for Dynamic Graphs [75.96510058864463]
Graph Neural Networks (GNNs) have been successfully applied to many real-world static graphs.
Existing dynamic GNNs do not incorporate state-of-the-art designs from static GNNs.
We propose ROLAND, an effective graph representation learning framework for real-world dynamic graphs.
arXiv Detail & Related papers (2022-08-15T14:51:47Z)
- Instant Graph Neural Networks for Dynamic Graphs [18.916632816065935]
We propose Instant Graph Neural Network (InstantGNN), an approach that incrementally maintains the representation matrix of a dynamic graph.
Our method avoids time-consuming, repetitive computations and allows instant updates to the representations and instant predictions.
Our model achieves state-of-the-art accuracy while having orders-of-magnitude higher efficiency than existing methods.
arXiv Detail & Related papers (2022-06-03T03:27:42Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models trained without pre-training, by up to 9.1% across various downstream tasks.
arXiv Detail & Related papers (2020-06-27T20:12:33Z)