Lumos: Heterogeneity-aware Federated Graph Learning over Decentralized Devices
- URL: http://arxiv.org/abs/2303.00492v3
- Date: Sat, 17 Feb 2024 03:35:33 GMT
- Title: Lumos: Heterogeneity-aware Federated Graph Learning over Decentralized Devices
- Authors: Qiying Pan, Yifei Zhu, Lingyang Chu
- Abstract summary: Graph neural networks (GNNs) have been widely deployed in real-world networked applications and systems.
We propose the first federated GNN framework called Lumos that supports supervised and unsupervised learning.
Based on the constructed tree for each client, a decentralized tree-based GNN trainer is proposed to support versatile training.
- Score: 19.27111697495379
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have been widely deployed in real-world networked
applications and systems due to their capability to handle graph-structured
data. However, the growing awareness of data privacy severely challenges the
traditional centralized model training paradigm, where a server holds all the
graph information. Federated learning is an emerging collaborative computing
paradigm that allows model training without data centralization. Existing
federated GNN studies mainly focus on systems where clients hold distinctive
graphs or sub-graphs. The practical node-level federated situation, where each
client is only aware of its direct neighbors, has yet to be studied. In this
paper, we propose the first federated GNN framework called Lumos that supports
supervised and unsupervised learning with feature and degree protection on
node-level federated graphs. We first design a tree constructor to improve the
representation capability given the limited structural information. We further
present a Monte Carlo Markov Chain-based algorithm to mitigate the workload
imbalance caused by degree heterogeneity with theoretically-guaranteed
performance. Based on the constructed tree for each client, a decentralized
tree-based GNN trainer is proposed to support versatile training. Extensive
experiments demonstrate that Lumos outperforms the baseline with significantly
higher accuracy and greatly reduced communication cost and training time.
Related papers
- FedGraph: A Research Library and Benchmark for Federated Graph Learning [40.257355007504074]
We introduce FedGraph, a research library built for practical distributed deployment and benchmarking in federated graph learning.
FedGraph supports a range of state-of-the-art graph learning methods and includes built-in profiling tools to evaluate system performance.
We demonstrate the first privacy-preserving federated learning system to run on graphs with 100 million nodes.
arXiv Detail & Related papers (2024-10-08T20:18:18Z)
- Deep Contrastive Graph Learning with Clustering-Oriented Guidance [61.103996105756394]
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering.
Most existing models estimate an initial graph beforehand in order to apply GCN.
A Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering.
arXiv Detail & Related papers (2024-02-25T07:03:37Z)
- Cooperative Network Learning for Large-Scale and Decentralized Graphs [7.628975821850447]
We introduce a Cooperative Network Learning (CNL) framework to ensure secure graph computing for various graph tasks.
CNL unifies the local and global perspectives of GNN computing for agencies holding distributed graph data.
We hope this framework will address privacy concerns in graph-related research and integrate decentralized graph data structures.
arXiv Detail & Related papers (2023-11-03T02:56:01Z)
- Privacy-preserving design of graph neural networks with applications to vertical federated learning [56.74455367682945]
We present an end-to-end graph representation learning framework called VESPER.
VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
arXiv Detail & Related papers (2023-10-31T15:34:59Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Privatized Graph Federated Learning [57.14673504239551]
We introduce graph federated learning, which consists of multiple units connected by a graph.
We show how graph homomorphic perturbations can be used to ensure the algorithm is differentially private.
arXiv Detail & Related papers (2022-03-14T13:48:23Z)
- Privacy-Preserving Graph Neural Network Training and Inference as a Cloud Service [15.939214141337803]
SecGNN is built from a synergy of insights on lightweight cryptography and machine learning techniques.
We show that SecGNN achieves comparable training and inference accuracy, with practically affordable performance.
arXiv Detail & Related papers (2022-02-16T02:57:10Z)
- ROD: Reception-aware Online Distillation for Sparse Graphs [23.55530524584572]
We propose ROD, a novel reception-aware online knowledge distillation approach for sparse graph learning.
We design three supervision signals for ROD: multi-scale reception-aware graph knowledge, task-based supervision, and rich distilled knowledge.
Our approach has been extensively evaluated on 9 datasets and a variety of graph-based tasks.
arXiv Detail & Related papers (2021-07-25T11:55:47Z)
- SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks [13.965982814292971]
Graph Neural Networks (GNNs) are the first-choice methods for graph machine learning problems.
Centralizing a massive amount of real-world graph data for GNN training is prohibitive due to user-side privacy concerns.
This work proposes SpreadGNN, a novel multi-task federated training framework.
arXiv Detail & Related papers (2021-06-04T22:20:47Z)
- Graph Information Bottleneck [77.21967740646784]
Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features.
Inheriting from the general Information Bottleneck (IB), the Graph Information Bottleneck (GIB) aims to learn the minimal sufficient representation for a given task.
We show that our proposed models are more robust than state-of-the-art graph defense models.
arXiv Detail & Related papers (2020-10-24T07:13:00Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)