Bottleneck Analysis of Dynamic Graph Neural Network Inference on CPU and GPU
- URL: http://arxiv.org/abs/2210.03900v2
- Date: Thu, 13 Apr 2023 22:00:47 GMT
- Title: Bottleneck Analysis of Dynamic Graph Neural Network Inference on CPU and GPU
- Authors: Hanqiu Chen, Yahya Alhinai, Yihan Jiang, Eunjee Na, Cong Hao
- Abstract summary: Dynamic graph neural networks (DGNNs) are becoming increasingly popular because of their widespread use in capturing dynamic features in the real world.
Deploying DGNNs on hardware presents additional challenges due to the model complexity, diversity, and the nature of the time dependency.
We select eight prevailing DGNNs with different characteristics and profile them on both CPU and GPU.
- Score: 3.4214598355901638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic graph neural networks (DGNNs) are becoming increasingly popular
because of their widespread use in capturing dynamic features in the real world.
A variety of dynamic graph neural networks designed from algorithmic perspectives
have succeeded in incorporating temporal information into graph processing.
Despite the promising algorithmic performance, deploying DGNNs on hardware
presents additional challenges due to the model complexity, diversity, and the
nature of the time dependency. Meanwhile, the differences between DGNNs and
static graph neural networks make hardware-related optimizations for static
graph neural networks unsuitable for DGNNs. In this paper, we select eight
prevailing DGNNs with different characteristics and profile them on both CPU
and GPU. The profiling results are summarized and analyzed, providing in-depth
insights into the bottlenecks of DGNNs on hardware and identifying potential
optimization opportunities for future DGNN acceleration. Following a
comprehensive survey, we provide a detailed analysis of DGNN performance
bottlenecks on hardware, including temporal data dependency, workload
imbalance, data movement, and GPU warm-up. We suggest several optimizations
from both software and hardware perspectives. This paper is the first to
provide an in-depth analysis of the hardware performance of DGNNs. Code is
available at https://github.com/sharc-lab/DGNN_analysis.
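To make the profiled bottlenecks above concrete, here is a minimal PyTorch timing sketch of the GPU warm-up and data-movement effects; the stand-in model, tensor sizes, and iteration counts are illustrative assumptions and are not taken from the paper or its repository.

```python
# Minimal sketch (not the paper's profiling harness): measuring GPU
# warm-up and host-to-device data movement with CUDA events in PyTorch.
import torch

def time_inference(model, x, iters=50, warmup=10):
    """Return steady-state per-iteration latencies in ms. The first
    `warmup` runs are discarded: early iterations carry one-time costs
    (CUDA context init, kernel autotuning, memory-pool growth), which
    is the GPU warm-up effect the abstract refers to."""
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    times = []
    with torch.no_grad():
        for i in range(iters):
            start.record()
            model(x)
            end.record()
            torch.cuda.synchronize()
            if i >= warmup:
                times.append(start.elapsed_time(end))
    return times

if __name__ == "__main__":
    assert torch.cuda.is_available()
    model = torch.nn.Linear(512, 512).cuda()  # placeholder for a DGNN layer
    x_cpu = torch.randn(4096, 512)

    # Data movement: time the host-to-device copy separately from compute.
    s = torch.cuda.Event(enable_timing=True)
    e = torch.cuda.Event(enable_timing=True)
    s.record()
    x = x_cpu.cuda()
    e.record()
    torch.cuda.synchronize()
    print(f"H2D copy: {s.elapsed_time(e):.3f} ms")

    times = time_inference(model, x)
    print(f"steady-state mean latency: {sum(times) / len(times):.3f} ms")
```

Comparing the discarded warm-up iterations against the steady-state mean gives a rough view of how much of a short inference run is spent on one-time GPU costs.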
Related papers
- Characterizing and Understanding HGNN Training on GPUs [9.579848162902628]
Heterogeneous Graph Neural Networks (HGNNs) have been widely adopted in many real-world domains such as recommendation systems and medical analysis.
To enhance the efficiency of HGNN training, it is essential to characterize and analyze the execution semantics and patterns within the training process to identify performance bottlenecks.
arXiv Detail & Related papers (2024-07-16T14:45:46Z)
- Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of DGNNs, it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z)
- Enabling Accelerators for Graph Computing [0.0]
Graph Neural Networks (GNNs) offer a novel paradigm for learning on graph-structured data.
GNNs present new computational challenges compared to conventional neural networks.
This thesis aims to develop a better understanding of how GNNs interact with the underlying hardware.
arXiv Detail & Related papers (2023-12-16T23:31:20Z)
- Neural Tangent Kernels Motivate Graph Neural Networks with Cross-Covariance Graphs [94.44374472696272]
We investigate neural tangent kernels (NTKs) and alignment in the context of graph neural networks (GNNs).
Our results establish the theoretical guarantees on the optimality of the alignment for a two-layer GNN.
These guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data.
arXiv Detail & Related papers (2023-10-16T19:54:21Z)
- Characterizing the Efficiency of Graph Neural Network Frameworks with a Magnifying Glass [10.839902229218577]
Graph neural networks (GNNs) have received great attention due to their success in various graph-related learning tasks.
Recent GNNs have been developed with different graph sampling techniques for mini-batch training on large graphs.
It remains unknown, however, how 'eco-friendly' these frameworks are from a green computing perspective.
arXiv Detail & Related papers (2022-11-06T04:22:19Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training scheme, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks [51.42338058718487]
Graph Neural Networks (GNNs) have received extensive research attention for their promising performance in graph machine learning.
Existing approaches, such as GCN and GPRGNN, are not robust in the face of homophily changes on test graphs.
We propose EvenNet, a spectral GNN corresponding to an even-polynomial graph filter (a minimal sketch of such a filter appears after this list).
arXiv Detail & Related papers (2022-05-27T10:48:14Z)
- Characterizing and Understanding Distributed GNN Training on GPUs [2.306379679349986]
Graph neural network (GNN) has been demonstrated to be a powerful model in many domains for its effectiveness in learning over graphs.
To scale GNN training to large graphs, a widely adopted approach is distributed training, which accelerates training using multiple computing nodes.
arXiv Detail & Related papers (2022-04-18T03:47:28Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks (a toy sketch of this edge-pruning idea appears after this list).
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
- Computing Graph Neural Networks: A Survey from Algorithms to Accelerators [2.491032752533246]
Graph Neural Networks (GNNs) have exploded onto the machine learning scene in recent years owing to their capability to model and learn from graph-structured data.
This paper aims to make two main contributions: a review of the field of GNNs from the perspective of computing, and an in-depth analysis of current software and hardware acceleration schemes.
arXiv Detail & Related papers (2020-09-30T22:29:27Z)
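For the EvenNet entry above, the following is a minimal sketch of what an even-polynomial graph filter can look like; the propagation rule is a generic even-power polynomial of the graph Laplacian, written for illustration rather than as EvenNet's exact parameterization.

```python
# Sketch of an even-polynomial graph filter: y = sum_k w_k * L^(2k) @ x.
# Generic illustration; not necessarily EvenNet's exact formulation.
import torch

def even_poly_filter(L, x, weights):
    """Apply h(L) @ x with h(t) = sum_k weights[k] * t^(2k).

    Using only even powers of the (normalized) Laplacian L means each
    term aggregates information from even-hop neighborhoods, which is
    the 'ignoring odd-hop neighbors' idea in the title."""
    out = weights[0] * x      # k = 0 term: L^0 = I
    z = x
    for w in weights[1:]:
        z = L @ (L @ z)       # advance by two hops: apply L twice
        out = out + w * z
    return out

# Example: even_poly_filter(L, x, [0.5, 0.3, 0.2]) computes
# 0.5*x + 0.3*(L^2 @ x) + 0.2*(L^4 @ x).
```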
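Likewise, for the PTDNet entry, here is a toy sketch of the general idea of learning a soft edge mask with a sparsity penalty; the mask parameterization and penalty form are assumptions for illustration, not the paper's exact design.

```python
# Toy sketch of sparsity-penalized edge pruning (PTDNet-style idea).
import torch

class EdgeMasker(torch.nn.Module):
    """Scores each edge from its endpoint features and outputs a soft
    keep-probability; task-irrelevant edges are pushed toward 0."""
    def __init__(self, feat_dim):
        super().__init__()
        self.score = torch.nn.Linear(2 * feat_dim, 1)

    def forward(self, x, edge_index):
        src, dst = edge_index  # edge_index: LongTensor of shape [2, E]
        logits = self.score(torch.cat([x[src], x[dst]], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)  # mask in (0, 1), shape [E]

def sparsity_penalty(mask, coeff=1e-3):
    # Penalize the expected number of kept edges; added to the task loss
    # so the model learns to drop edges that do not help the task.
    return coeff * mask.sum()
```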