Serving Graph Neural Networks With Distributed Fog Servers For Smart IoT Services
- URL: http://arxiv.org/abs/2307.01684v1
- Date: Tue, 4 Jul 2023 12:30:01 GMT
- Title: Serving Graph Neural Networks With Distributed Fog Servers For Smart IoT Services
- Authors: Liekang Zeng, Xu Chen, Peng Huang, Ke Luo, Xiaoxi Zhang, Zhi Zhou
- Abstract summary: Graph Neural Networks (GNNs) have gained growing interest in miscellaneous applications owing to their outstanding ability to extract latent representations from graph structures.
We present Fograph, a novel distributed real-time GNN inference framework that leverages the diverse and dynamic resources of multiple fog nodes in proximity to IoT data sources.
Prototype-based evaluation and a case study demonstrate that Fograph significantly outperforms state-of-the-art cloud serving and fog deployment, with up to a 5.39x execution speedup and a 6.84x throughput improvement.
- Score: 23.408109000977987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have gained growing interest in
miscellaneous applications owing to their outstanding ability to extract
latent representations from graph structures. To provide GNN-based services
for IoT-driven smart applications, traditional model serving paradigms
usually resort to the cloud by fully uploading geo-distributed input data to
remote datacenters. However, our empirical measurements reveal the
significant communication overhead of such cloud-based serving and highlight
the profound potential of applying the emerging fog computing paradigm. To
maximize the architectural benefits brought by fog computing, in this paper
we present Fograph, a novel distributed real-time GNN inference framework
that leverages the diverse and dynamic resources of multiple fog nodes in
proximity to IoT data sources. By introducing heterogeneity-aware execution
planning and GNN-specific compression techniques, Fograph tailors its design
to accommodate the unique characteristics of GNN serving in fog environments.
Prototype-based evaluation and a case study demonstrate that Fograph
significantly outperforms state-of-the-art cloud serving and fog deployment,
with up to a 5.39x execution speedup and a 6.84x throughput improvement.
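
The abstract names heterogeneity-aware execution planning as a key ingredient but gives no implementation details. Below is a minimal, hypothetical Python sketch of one plausible reading, capacity-proportional partition planning across fog nodes; the names (FogNode, plan_partitions) and the proportional heuristic are illustrative assumptions, not Fograph's actual algorithm.

```python
# Hypothetical sketch of heterogeneity-aware partition planning: split an
# input graph across fog nodes in proportion to each node's measured capacity.
# All names and the heuristic itself are assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    capacity: float  # measured throughput, e.g. vertices processed per second

def plan_partitions(num_vertices: int, nodes: list[FogNode]) -> dict[str, range]:
    """Assign contiguous vertex ranges proportionally to node capacity."""
    total = sum(n.capacity for n in nodes)
    plan, start = {}, 0
    for i, n in enumerate(nodes):
        # The last node takes the remainder so every vertex is covered.
        size = num_vertices - start if i == len(nodes) - 1 else int(num_vertices * n.capacity / total)
        plan[n.name] = range(start, start + size)
        start += size
    return plan

nodes = [FogNode("fog-a", 4.0), FogNode("fog-b", 2.0), FogNode("fog-c", 1.0)]
print(plan_partitions(7000, nodes))  # faster nodes receive larger shares
```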
Related papers
- MassiveGNN: Efficient Training via Prefetching for Massively Connected Distributed Graphs [11.026326555186333]
This paper develops a parameterized continuous prefetch and eviction scheme on top of the state-of-the-art Amazon DistDGL distributed GNN framework.
It demonstrates about 15-40% improvement in end-to-end training performance on the National Energy Research Scientific Computing Center's (NERSC) Perlmutter supercomputer.
arXiv Detail & Related papers (2024-10-30T05:10:38Z)
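
The MassiveGNN entry above describes a continuous prefetch-and-eviction scheme for remote node features. As a rough illustration of that general idea only (not DistDGL's API or the paper's parameterized policy), here is a toy LRU-style feature cache with a prefetch hook; all names are hypothetical.

```python
# Toy sketch of a prefetch-and-evict cache for remote node features; the LRU
# policy and names are illustrative assumptions, not the paper's algorithm.
from collections import OrderedDict

class FeatureCache:
    def __init__(self, capacity: int, fetch_fn):
        self.capacity = capacity
        self.fetch_fn = fetch_fn          # pulls a feature from a remote rank
        self.store = OrderedDict()        # node_id -> feature

    def prefetch(self, node_ids):
        """Warm the cache ahead of the next minibatch."""
        for nid in node_ids:
            self.get(nid)

    def get(self, nid):
        if nid in self.store:
            self.store.move_to_end(nid)     # mark as recently used
            return self.store[nid]
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        self.store[nid] = self.fetch_fn(nid)
        return self.store[nid]

cache = FeatureCache(capacity=2, fetch_fn=lambda nid: f"feat[{nid}]")
cache.prefetch([1, 2, 3])                   # node 1 is evicted once 3 arrives
print(list(cache.store))                    # [2, 3]
```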
- Privacy-preserving design of graph neural networks with applications to vertical federated learning [56.74455367682945]
We present an end-to-end graph representation learning framework called VESPER.
VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
arXiv Detail & Related papers (2023-10-31T15:34:59Z)
- Explainable Spatio-Temporal Graph Neural Networks [16.313146933922752]
We propose STExplainer, an explainable spatio-temporal graph neural network (STGNN) framework that enhances STGNNs with inherent explainability.
Our framework integrates a unified spatio-temporal graph attention network with a positional information fusion layer as the STG encoder and decoder.
We demonstrate that STExplainer outperforms state-of-the-art baselines in terms of predictive accuracy and explainability metrics.
arXiv Detail & Related papers (2023-10-26T04:47:28Z)
- Lumos: Heterogeneity-aware Federated Graph Learning over Decentralized Devices [19.27111697495379]
Graph neural networks (GNNs) have been widely deployed in real-world networked applications and systems.
We propose the first federated GNN framework called Lumos that supports supervised and unsupervised learning.
Based on the tree constructed for each client, a decentralized tree-based GNN trainer is proposed to support versatile training.
arXiv Detail & Related papers (2023-03-01T13:27:06Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training approach, named EnGCN, to address these issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Spiking Graph Convolutional Networks [19.36064180392385]
SpikingGCN is an end-to-end framework that aims to integrate the embedding of GCNs with the biofidelity characteristics of SNNs.
We show that SpikingGCN on a neuromorphic chip can bring a clear advantage of energy efficiency into graph data analysis.
arXiv Detail & Related papers (2022-05-05T16:44:36Z)
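
The SpikingGCN entry above combines GCN embeddings with spiking dynamics. The following is a toy approximation of that combination, assuming simple symmetric-normalized propagation and Bernoulli rate coding rather than the paper's actual model.

```python
# Hedged sketch of the SpikingGCN idea: propagate node features with a
# normalized adjacency (the GCN part), then rate-encode the result as
# Bernoulli spike trains (the SNN part). A toy approximation only.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # toy graph
A_hat = A + np.eye(3)                                         # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
S = D_inv_sqrt @ A_hat @ D_inv_sqrt                           # normalized adjacency

X = rng.random((3, 4))            # node features in [0, 1]
H = S @ S @ X                     # two propagation steps (no nonlinearity)
rates = H / H.max()               # squash to [0, 1] firing rates

T = 100                           # simulation time steps
spikes = rng.random((T, 3, 4)) < rates  # Bernoulli spike trains
print(spikes.mean(axis=0))        # empirical rates approximate `rates`
```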
- Privacy-Preserving Graph Neural Network Training and Inference as a Cloud Service [15.939214141337803]
SecGNN is built from a synergy of insights on lightweight cryptography and machine learning techniques.
We show that SecGNN achieves comparable training and inference accuracy, with practically affordable performance.
arXiv Detail & Related papers (2022-02-16T02:57:10Z)
- Local Augmentation for Graph Neural Networks [78.48812244668017]
We introduce local augmentation, which enhances node features using their local subgraph structures.
Based on local augmentation, we further design a novel framework, LA-GNN, which can be applied to any GNN model in a plug-and-play manner (a toy sketch of the idea follows this entry).
arXiv Detail & Related papers (2021-09-08T18:10:08Z)
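
The paper learns a generative model to produce the augmented features; the sketch below substitutes Gaussian sampling around the neighbor-feature mean as a stand-in assumption, so it illustrates only the shape of the idea, not the actual method.

```python
# Hedged sketch of local augmentation: generate extra feature vectors for a
# node conditioned on its neighborhood, then feed the augmented copies to any
# GNN. Gaussian noise around the neighbor mean stands in for the paper's
# learned generative model; it is an assumption made for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((4, 3))                         # node features
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}

def local_augment(v: int, num_samples: int = 2, sigma: float = 0.1):
    """Sample augmented features around the mean of v's neighbor features."""
    mu = X[neighbors[v]].mean(axis=0)
    return mu + sigma * rng.standard_normal((num_samples, X.shape[1]))

X_aug = local_augment(0)
print(X_aug.shape)  # (2, 3): two extra feature vectors for node 0
```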
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
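
FLAG's recipe in the entry above, accumulating model gradients across several ascent steps on a feature perturbation, is concrete enough to sketch. A minimal PyTorch version, with a linear layer standing in for a GNN and illustrative step sizes:

```python
# Minimal PyTorch sketch of FLAG-style training: accumulate model gradients
# over M ascent steps on a feature perturbation, then take one optimizer step.
# The model and data are placeholders; M and alpha are illustrative.
import torch

def flag_step(model, X, y, loss_fn, optimizer, M=3, alpha=1e-3):
    optimizer.zero_grad()
    delta = torch.zeros_like(X).uniform_(-alpha, alpha).requires_grad_(True)
    for _ in range(M):
        loss = loss_fn(model(X + delta), y) / M
        loss.backward()                      # accumulates model gradients
        # Ascent step on the perturbation, then restart its gradient tape.
        delta = (delta + alpha * delta.grad.sign()).detach().requires_grad_(True)
    optimizer.step()                         # one model update per M ascent steps
    return loss.item()

model = torch.nn.Linear(8, 2)                # stand-in for a GNN
X, y = torch.randn(16, 8), torch.randint(0, 2, (16,))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
print(flag_step(model, X, y, torch.nn.functional.cross_entropy, opt))
```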
- Multi-hop Attention Graph Neural Network [70.21119504298078]
Multi-hop Attention Graph Neural Network (MAGNA) is a principled way to incorporate multi-hop context information into every layer of attention computation.
We show that MAGNA captures large-scale structural information in every layer, and has a low-pass effect that eliminates noisy high-frequency information from graph data.
arXiv Detail & Related papers (2020-09-29T22:41:19Z)
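
MAGNA's multi-hop context in the entry above can be read as attention diffusion: geometrically weighted powers of a one-hop attention matrix. A small sketch under that reading, with illustrative values for alpha and the truncation depth K:

```python
# Hedged sketch of attention diffusion, the core of MAGNA: turn a one-hop
# attention matrix into a multi-hop one via geometrically weighted powers,
# theta_i = alpha * (1 - alpha) ** i. The truncation depth K is illustrative.
import numpy as np

def attention_diffusion(A_att: np.ndarray, alpha: float = 0.15, K: int = 6):
    """Approximate sum_i theta_i * A_att^i so each layer sees multi-hop context."""
    out = np.zeros_like(A_att)
    power = np.eye(A_att.shape[0])
    for i in range(K + 1):
        out += alpha * (1 - alpha) ** i * power
        power = power @ A_att                   # advance one more hop
    return out

rng = np.random.default_rng(0)
A_att = rng.random((4, 4))
A_att /= A_att.sum(axis=1, keepdims=True)       # row-stochastic one-hop attention
print(attention_diffusion(A_att).round(3))      # dense multi-hop attention
```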
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
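
The permutation-equivariance property mentioned in the last entry is easy to verify numerically for a polynomial graph filter H(S)x = sum_k h_k S^k x, where S is the graph shift operator. A quick check with toy sizes:

```python
# Numerical check of permutation equivariance for a polynomial graph filter:
# H(P^T S P)(P^T x) == P^T H(S) x for any permutation matrix P, i.e.,
# relabeling the nodes commutes with filtering. Toy sizes throughout.
import numpy as np

rng = np.random.default_rng(0)
n, h = 5, np.array([0.5, 0.3, 0.2])        # filter taps h_k

def graph_filter(S, x, h):
    return sum(hk * np.linalg.matrix_power(S, k) @ x for k, hk in enumerate(h))

S = rng.random((n, n)); S = (S + S.T) / 2  # symmetric graph shift operator
x = rng.random(n)
P = np.eye(n)[rng.permutation(n)]          # random permutation matrix

lhs = graph_filter(P.T @ S @ P, P.T @ x, h)
rhs = P.T @ graph_filter(S, x, h)
print(np.allclose(lhs, rhs))               # True
```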
This list is automatically generated from the titles and abstracts of the papers on this site.
The quality of the information on this site is not guaranteed, and the site is not responsible for any consequences arising from its use.