HiFGL: A Hierarchical Framework for Cross-silo Cross-device Federated Graph Learning
- URL: http://arxiv.org/abs/2406.10616v1
- Date: Sat, 15 Jun 2024 12:34:40 GMT
- Title: HiFGL: A Hierarchical Framework for Cross-silo Cross-device Federated Graph Learning
- Authors: Zhuoning Guo, Duanyi Yao, Qiang Yang, Hao Liu
- Abstract summary: Federated Graph Learning (FGL) has emerged as a promising way to learn high-quality representations from distributed graph data.
We propose a Hierarchical Federated Graph Learning framework for cross-silo cross-device FGL.
Specifically, we devise a unified hierarchical architecture to safeguard federated GNN training on heterogeneous clients.
- Score: 12.073150043485084
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Graph Learning (FGL) has emerged as a promising way to learn high-quality representations from distributed graph data with privacy preservation. Although considerable efforts have been made for FGL under either the cross-device or the cross-silo paradigm, how to effectively capture graph knowledge in the more complicated cross-silo cross-device environment remains an under-explored problem. This task is challenging because of the inherent hierarchy and heterogeneity of decentralized clients, the diversified privacy constraints across clients, and the cross-client graph integrity requirement. To this end, in this paper, we propose a Hierarchical Federated Graph Learning (HiFGL) framework for cross-silo cross-device FGL. Specifically, we devise a unified hierarchical architecture to safeguard federated GNN training on heterogeneous clients while ensuring graph integrity. Moreover, we propose a Secret Message Passing (SecMP) scheme to shield subgraph-level and node-level sensitive information from unauthorized access simultaneously. Theoretical analysis proves that HiFGL achieves multi-level privacy preservation with complexity guarantees. Extensive experiments on real-world datasets validate the superiority of the proposed framework against several baselines. Furthermore, HiFGL's versatile nature allows it to be applied in purely cross-silo or purely cross-device settings, further broadening its utility in real-world FGL applications.
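The sketch below illustrates the general two-level aggregation pattern implied by the cross-silo cross-device setting: device-level model updates are averaged within each silo, and the resulting silo-level models are then averaged at a top-level server. This is a minimal illustration under assumed names (`average_params`, `hierarchical_round`), not HiFGL's actual algorithm; in particular, the Secret Message Passing (SecMP) protections described in the abstract are omitted here.

```python
# Minimal sketch of two-level (device -> silo -> server) federated averaging.
# Function and variable names are illustrative assumptions, not from HiFGL;
# SecMP's privacy protections are intentionally omitted.
from typing import Dict, List

import numpy as np


def average_params(param_sets: List[Dict[str, np.ndarray]],
                   weights: List[float]) -> Dict[str, np.ndarray]:
    """Weighted average of parameter dictionaries (e.g., GNN layer weights)."""
    total = sum(weights)
    return {
        key: sum(w * params[key] for w, params in zip(weights, param_sets)) / total
        for key in param_sets[0]
    }


def hierarchical_round(silo_device_updates: List[List[Dict[str, np.ndarray]]],
                       silo_device_sizes: List[List[int]]) -> Dict[str, np.ndarray]:
    """One round: average device updates inside each silo, then average silo models."""
    silo_models, silo_weights = [], []
    for updates, sizes in zip(silo_device_updates, silo_device_sizes):
        silo_models.append(average_params(updates, [float(s) for s in sizes]))
        silo_weights.append(float(sum(sizes)))
    return average_params(silo_models, silo_weights)
```

In a real deployment, each device would first train on its local subgraph before contributing an update, and cross-client edges would be handled by the message-passing scheme rather than ignored as in this simplified sketch.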
Related papers
- Against Multifaceted Graph Heterogeneity via Asymmetric Federated Prompt Learning [5.813912301780917]
We propose a Federated Graph Prompt Learning (FedGPL) framework to efficiently enable prompt-based asymmetric graph knowledge transfer.
We conduct theoretical analyses and extensive experiments to demonstrate the significant accuracy and efficiency gains of FedGPL.
arXiv Detail & Related papers (2024-11-04T11:42:25Z) - FedSSP: Federated Graph Learning with Spectral Knowledge and Personalized Preference [31.796411806840087]
Personalized Federated Graph Learning (pFGL) facilitates the decentralized training of Graph Neural Networks (GNNs) without compromising privacy.
Previous pFGL methods incorrectly share non-generic knowledge globally and fail to tailor personalized solutions locally.
We propose our pFGL framework FedSSP which Shares generic Spectral knowledge while satisfying graph Preferences.
arXiv Detail & Related papers (2024-10-26T07:09:27Z) - Federated Graph Learning with Structure Proxy Alignment [43.13100155569234]
Federated Graph Learning (FGL) aims to learn graph learning models over graph data distributed across multiple data owners.
We propose FedSpray, a novel FGL framework that learns local class-wise structure proxies in the latent space.
Our goal is to obtain the aligned structure proxies that can serve as reliable, unbiased neighboring information for node classification.
arXiv Detail & Related papers (2024-08-18T07:32:54Z) - SpreadFGL: Edge-Client Collaborative Federated Graph Learning with Adaptive Neighbor Generation [16.599474223790843]
Federated Graph Learning (FGL) has garnered widespread attention by enabling collaborative training on multiple clients for classification tasks.
We propose a novel FGL framework, named SpreadFGL, to promote the information flow in edge-client collaboration.
We show that SpreadFGL achieves higher accuracy and faster convergence against state-of-the-art algorithms.
arXiv Detail & Related papers (2024-07-14T09:34:19Z) - A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks (a minimal random-walk sampling sketch appears after this list).
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z) - AdaFGL: A New Paradigm for Federated Node Classification with Topology Heterogeneity [44.11777886421429]
Federated Graph Learning (FGL) has attracted significant attention as a distributed framework based on graph neural networks.
We introduce the concept of the structure Non-iid split and then present a new paradigm called Adaptive Federated Graph Learning (AdaFGL).
Our proposed AdaFGL outperforms baselines by significant margins of 3.24% and 5.57% on community split and structure Non-iid split, respectively.
arXiv Detail & Related papers (2024-01-22T08:23:31Z) - Privacy-preserving design of graph neural networks with applications to vertical federated learning [56.74455367682945]
We present an end-to-end graph representation learning framework called VESPER.
VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
arXiv Detail & Related papers (2023-10-31T15:34:59Z) - Efficient Multi-View Graph Clustering with Local and Global Structure Preservation [59.49018175496533]
We propose a novel anchor-based multi-view graph clustering framework termed Efficient Multi-View Graph Clustering with Local and Global Structure Preservation (EMVGC-LG).
Specifically, EMVGC-LG jointly optimizes anchor construction and graph learning to enhance the clustering quality.
In addition, EMVGC-LG inherits the linear complexity of existing AMVGC methods with respect to the sample number.
arXiv Detail & Related papers (2023-08-31T12:12:30Z) - FederatedScope-GNN: Towards a Unified, Comprehensive and Efficient
Package for Federated Graph Learning [65.48760613529033]
Federated graph learning (FGL) has not been well supported due to its unique characteristics and requirements.
We first discuss the challenges in creating an easy-to-use FGL package and accordingly present our implemented package FederatedScope-GNN (FS-G).
We validate the effectiveness of FS-G by conducting extensive experiments, which also yield many valuable insights about FGL for the community.
arXiv Detail & Related papers (2022-04-12T06:48:06Z) - Multi-Level Graph Convolutional Network with Automatic Graph Learning for Hyperspectral Image Classification [63.56018768401328]
We propose a Multi-level Graph Convolutional Network (GCN) with Automatic Graph Learning method (MGCN-AGL) for hyperspectral image (HSI) classification.
By employing an attention mechanism to characterize the importance among spatially neighboring regions, the most relevant information can be adaptively incorporated into decisions.
Our MGCN-AGL encodes the long-range dependencies among image regions based on the expressive representations that have been produced at the local level.
arXiv Detail & Related papers (2020-09-19T09:26:20Z) - Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
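As referenced in the GSPT entry above, the following is a minimal, hypothetical sketch of sampling node contexts through random walks. The function name and parameters (`walk_length`, `walks_per_node`) are assumptions for illustration rather than GSPT's actual interface.

```python
# Illustrative random-walk context sampling over an adjacency-list graph.
# Names and parameters are assumptions, not taken from the GSPT paper.
import random
from typing import Dict, List


def sample_contexts(adjacency: Dict[int, List[int]],
                    walk_length: int = 8,
                    walks_per_node: int = 4,
                    seed: int = 0) -> Dict[int, List[List[int]]]:
    """Collect `walks_per_node` random walks of up to `walk_length` steps per node;
    each walk is a node-context sequence that a Transformer could consume."""
    rng = random.Random(seed)
    contexts: Dict[int, List[List[int]]] = {}
    for start in adjacency:
        walks = []
        for _ in range(walks_per_node):
            walk, current = [start], start
            for _ in range(walk_length):
                neighbors = adjacency.get(current, [])
                if not neighbors:
                    break
                current = rng.choice(neighbors)
                walk.append(current)
            walks.append(walk)
        contexts[start] = walks
    return contexts


# Example on a tiny triangle graph.
if __name__ == "__main__":
    print(sample_contexts({0: [1, 2], 1: [0, 2], 2: [0, 1]}, walk_length=4, walks_per_node=2))
```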
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.