GNN at the Edge: Cost-Efficient Graph Neural Network Processing over
Distributed Edge Servers
- URL: http://arxiv.org/abs/2210.17281v1
- Date: Mon, 31 Oct 2022 13:03:16 GMT
- Title: GNN at the Edge: Cost-Efficient Graph Neural Network Processing over
Distributed Edge Servers
- Authors: Liekang Zeng, Chongyu Yang, Peng Huang, Zhi Zhou, Shuai Yu, Xu Chen
- Abstract summary: Edge deployment of Graph Neural Networks (GNNs) is still under exploration, in stark disparity to their broad edge adoption.
This paper studies the cost optimization for distributed GNN processing over a multi-tier heterogeneous edge network.
We show that our approach achieves superior performance over de facto baselines, with more than 95.8% cost reduction and fast convergence.
- Score: 24.109721494781592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Edge intelligence has arisen as a promising computing paradigm for supporting
miscellaneous smart applications that rely on machine learning techniques.
While the community has extensively investigated multi-tier edge deployment for
traditional deep learning models (e.g., CNNs, RNNs), the emerging Graph Neural
Networks (GNNs) are still under exploration, in stark disparity to their broad
edge adoption in applications such as traffic flow forecasting and
location-based social recommendation. To bridge this gap, this paper formally studies the cost
optimization for distributed GNN processing over a multi-tier heterogeneous
edge network. We build a comprehensive modeling framework that can capture a
variety of different cost factors, based on which we formulate a cost-efficient
graph layout optimization problem that is proved to be NP-hard. Instead of
trivially applying traditional data placement wisdom, we theoretically reveal
the structural property of quadratic submodularity implicated in GNN's unique
computing pattern, which motivates our design of an efficient iterative
solution exploiting graph cuts. Rigorous analysis shows that it provides a
parameterized constant approximation ratio, guaranteed convergence, and exact
feasibility. To tackle potential graph topological evolution in GNN processing,
we further devise an incremental update strategy and an adaptive scheduling
algorithm for lightweight dynamic layout optimization. Evaluations with
real-world datasets and various GNN benchmarks demonstrate that our approach
achieves superior performance over de facto baselines, with more than 95.8%
cost reduction and fast convergence.
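The abstract does not spell out the iterative graph-cut solution, but its core primitive is standard: placing each vertex on one of two edge servers so as to minimize per-vertex hosting cost plus cross-server traffic reduces exactly to an s-t minimum cut. The sketch below illustrates that primitive with networkx; the names two_server_layout, cost_a, cost_b, and traffic are illustrative assumptions, not the paper's API, and a multi-server layout would iterate such binary cuts over pairs of servers.

```python
import networkx as nx

def two_server_layout(graph, cost_a, cost_b, traffic):
    """Sketch: optimal two-server vertex placement via s-t min-cut.

    Minimizes the sum of per-vertex hosting costs plus the traffic
    weight of every edge whose endpoints land on different servers.
    """
    flow = nx.DiGraph()
    for v in graph.nodes:
        # Cutting s->v puts v on server B (pays cost_b[v]);
        # cutting v->t puts v on server A (pays cost_a[v]).
        flow.add_edge("s", v, capacity=cost_b[v])
        flow.add_edge(v, "t", capacity=cost_a[v])
    for u, v in graph.edges:
        # An edge split across servers pays its traffic weight in
        # either direction, hence symmetric capacities.
        w = traffic.get((u, v), traffic.get((v, u), 0.0))
        flow.add_edge(u, v, capacity=w)
        flow.add_edge(v, u, capacity=w)
    cut_value, (side_a, side_b) = nx.minimum_cut(flow, "s", "t")
    return cut_value, {v: "A" if v in side_a else "B" for v in graph.nodes}

# Toy usage: alternating hosting costs, uniform traffic per edge.
G = nx.karate_club_graph()
value, placement = two_server_layout(
    G,
    cost_a={v: 1.0 if v % 2 == 0 else 2.0 for v in G},
    cost_b={v: 2.0 if v % 2 == 0 else 1.0 for v in G},
    traffic={e: 0.3 for e in G.edges},
)
```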
Related papers
- Pre-Training Identification of Graph Winning Tickets in Adaptive Spatial-Temporal Graph Neural Networks [5.514795777097036]
We introduce the concept of the Graph Winning Ticket (GWT), derived from the Lottery Ticket Hypothesis (LTH).
By adopting a pre-determined star topology as a GWT prior to training, we balance edge reduction with efficient information propagation.
Our approach enables training ASTGNNs on the largest scale spatial-temporal dataset using a single A6000 equipped with 48 GB of memory.
arXiv Detail & Related papers (2024-06-12T14:53:23Z) - Scalable Resource Management for Dynamic MEC: An Unsupervised
Link-Output Graph Neural Network Approach [36.32772317151467]
Deep learning has been successfully adopted in mobile edge computing (MEC) to optimize task offloading and resource allocation.
The dynamics of edge networks raise two challenges in neural network (NN)-based optimization methods: low scalability and high training costs.
In this paper, a novel link-output GNN (LOGNN)-based resource management approach is proposed to flexibly optimize the resource allocation in MEC.
arXiv Detail & Related papers (2023-06-15T08:21:41Z) - Learning Cooperative Beamforming with Edge-Update Empowered Graph Neural
Networks [29.23937571816269]
We propose an edge-graph-neural-network (Edge-GNN) to learn the cooperative beamforming on the graph edges.
The proposed Edge-GNN achieves higher sum rate with much shorter computation time than state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-23T02:05:06Z) - A Comprehensive Study on Large-Scale Graph Training: Benchmarking and
Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z) - Graph Neural Network Based Node Deployment for Throughput Enhancement [20.56966053013759]
We propose a novel graph neural network (GNN) method for the network node deployment problem.
We show that an expressive GNN has the capacity to approximate both the function value and the traffic permutation, as theoretical support for the proposed method.
arXiv Detail & Related papers (2022-08-19T08:06:28Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - Towards Understanding Graph Neural Networks: An Algorithm Unrolling
Perspective [9.426760895586428]
We introduce a class of unrolled networks built on truncated optimization algorithms for graph signal denoising problems.
The training process of a GNN model can be seen as solving a bilevel optimization problem with a GSD problem at the lower level.
An expressive model named UGDGNN (unrolled gradient descent GNN) is proposed, inheriting appealing theoretical properties; a minimal sketch of one such unrolled step appears after this list.
arXiv Detail & Related papers (2022-06-09T12:54:03Z) - Training Robust Graph Neural Networks with Topology Adaptive Edge
Dropping [116.26579152942162]
Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.
Despite their success, GNNs suffer from sub-optimal generalization performance given limited training data.
This paper proposes Topology Adaptive Edge Dropping to improve generalization performance and learn robust GNN models.
arXiv Detail & Related papers (2021-06-05T13:20:36Z) - Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z) - Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z) - Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)