AdaFGL: A New Paradigm for Federated Node Classification with Topology
Heterogeneity
- URL: http://arxiv.org/abs/2401.11750v1
- Date: Mon, 22 Jan 2024 08:23:31 GMT
- Title: AdaFGL: A New Paradigm for Federated Node Classification with Topology
Heterogeneity
- Authors: Xunkai Li, Zhengyu Wu, Wentao Zhang, Henan Sun, Rong-Hua Li, Guoren
Wang
- Abstract summary: Federated Graph Learning (FGL) has attracted significant attention as a distributed framework based on graph neural networks.
We introduce the concept of structure Non-iid split and then present a new paradigm called Adaptive Federated Graph Learning (AdaFGL).
Our proposed AdaFGL outperforms baselines by significant margins of 3.24% and 5.57% on community split and structure Non-iid split, respectively.
- Score: 44.11777886421429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Federated Graph Learning (FGL) has attracted significant attention
as a distributed framework based on graph neural networks, primarily due to its
capability to break data silos. Existing FGL studies employ community split on
the homophilous global graph by default to simulate federated semi-supervised
node classification settings. Such a strategy assumes the consistency of
topology between the multi-client subgraphs and the global graph, where
connected nodes are highly likely to possess similar feature distributions and
the same label. However, in real-world implementations, the varying
perspectives of local data engineering result in various subgraph topologies,
posing unique heterogeneity challenges in FGL. Unlike the well-known label
non-independent and identically distributed (Non-iid) problems in federated
learning, FGL heterogeneity essentially reveals the topological divergence
among multiple clients, namely homophily or heterophily. To simulate and handle
this unique challenge, we introduce the concept of structure Non-iid split and
then present a new paradigm called Adaptive Federated Graph Learning
(AdaFGL), a decoupled two-step
personalized approach. To begin with, AdaFGL employs standard multi-client
federated collaborative training to acquire the federated knowledge extractor
by aggregating uploaded models in the final round at the server. Then, each
client conducts personalized training based on the local subgraph and the
federated knowledge extractor. Extensive experiments on 12 graph benchmark
datasets validate the superior performance of AdaFGL over state-of-the-art
baselines. Specifically, in terms of test accuracy, our proposed AdaFGL
outperforms baselines by significant margins of 3.24% and 5.57% on community
split and structure Non-iid split, respectively.
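
The decoupled two-step paradigm described in the abstract maps naturally onto a short training loop. Below is a minimal sketch, assuming a PyTorch-style setup; `make_model`, `client.local_train`, and `client.personalized_train` are hypothetical placeholders, and only the overall flow (federated collaborative training, aggregation into a knowledge extractor, then per-client personalized training) follows the abstract.

```python
# Minimal sketch of the two-step AdaFGL paradigm described above.
# The GNN architecture, losses, and personalization details are assumptions;
# only the "federated training -> final-round aggregation -> local
# personalized training" flow follows the abstract.
import copy
import torch

def fedavg(states):
    """Average a list of state_dicts (standard FedAvg aggregation)."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg

def train_adafgl(make_model, clients, rounds=100, local_epochs=1):
    global_state = make_model().state_dict()

    # Step 1: standard multi-client federated collaborative training.
    for _ in range(rounds):
        uploaded = []
        for client in clients:
            model = make_model()
            model.load_state_dict(global_state)
            client.local_train(model, epochs=local_epochs)  # hypothetical helper
            uploaded.append(model.state_dict())
        # The aggregate of the models uploaded in the final round is the
        # federated knowledge extractor.
        global_state = fedavg(uploaded)
    extractor = make_model()
    extractor.load_state_dict(global_state)

    # Step 2: each client trains a personalized model on its own subgraph,
    # guided by the frozen federated knowledge extractor.
    personalized = []
    for client in clients:
        local_model = make_model()
        local_model.load_state_dict(global_state)
        client.personalized_train(local_model, extractor)  # hypothetical helper
        personalized.append(local_model)
    return extractor, personalized
```

Because the two steps are decoupled, the server participates only in step 1; step 2 runs entirely on each client against its local subgraph, whatever its homophily level.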
Related papers
- Federated Graph Learning with Structure Proxy Alignment [43.13100155569234]
Federated Graph Learning (FGL) aims to learn graph learning models over graph data distributed in multiple data owners.
We propose FedSpray, a novel FGL framework that learns local class-wise structure proxies in the latent space.
Our goal is to obtain aligned structure proxies that can serve as reliable, unbiased neighbor information for node classification.
arXiv Detail & Related papers (2024-08-18T07:32:54Z)
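
One plausible reading of FedSpray's "local class-wise structure proxies in the latent space" is per-class aggregates of node embeddings that the server then aligns across clients. The sketch below illustrates that reading only; the function names and the mean-based alignment are assumptions, not the paper's actual design.

```python
# Hypothetical illustration of class-wise structure proxies: each client
# summarizes its labeled nodes by per-class mean embeddings, and the server
# aligns them by averaging the same class across clients.
import torch

def local_structure_proxies(embeddings, labels, num_classes):
    """Per-class mean of node embeddings on one client (latent-space proxies)."""
    dim = embeddings.size(1)
    proxies = torch.zeros(num_classes, dim)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            proxies[c] = embeddings[mask].mean(dim=0)
    return proxies

def align_proxies(client_proxies):
    """Server-side alignment: average each class's proxy across clients."""
    return torch.stack(client_proxies).mean(dim=0)
```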
- SpreadFGL: Edge-Client Collaborative Federated Graph Learning with Adaptive Neighbor Generation [16.599474223790843]
Federated Graph Learning (FGL) has garnered widespread attention by enabling collaborative training on multiple clients for classification tasks.
We propose a novel FGL framework, named SpreadFGL, to promote the information flow in edge-client collaboration.
We show that SpreadFGL achieves higher accuracy and faster convergence than state-of-the-art algorithms.
arXiv Detail & Related papers (2024-07-14T09:34:19Z)
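
The summary does not spell out the neighbor-generation mechanism, so the following is a speculative illustration of "adaptive neighbor generation": propose extra edges for under-connected nodes when feature similarity clears an adaptively chosen (quantile) threshold. Every detail here is an assumption.

```python
# Speculative sketch of adaptive neighbor generation: add candidate edges
# for poorly connected nodes when feature similarity exceeds a quantile-based
# threshold. O(n^2) similarity matrix; illustration only.
import torch
import torch.nn.functional as F

def generate_neighbors(x, degrees, quantile=0.95, max_new=5):
    sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1)
    sim.fill_diagonal_(-1.0)  # exclude self-loops from candidates
    threshold = torch.quantile(sim[sim > -1.0], quantile)
    new_edges = []
    for i in torch.nonzero(degrees < degrees.float().median()).flatten():
        candidates = torch.nonzero(sim[i] >= threshold).flatten()
        for j in candidates[:max_new]:
            new_edges.append((int(i), int(j)))
    return new_edges
```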
- Federated Graph Semantic and Structural Learning [54.97668931176513]
This paper reveals that local client distortion is caused by both node-level semantics and graph-level structure.
We postulate that a well-structured graph neural network produces similar representations for neighboring nodes due to their inherent adjacency relationships.
We transform the adjacency relationships into the similarity distribution and leverage the global model to distill the relation knowledge into the local model.
arXiv Detail & Related papers (2024-06-27T07:08:28Z)
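
A common way to realize "transform the adjacency relationships into the similarity distribution and distill the relation knowledge" is a KL term between the neighbor-similarity distributions induced by the global and local models. The sketch below follows that assumption; the temperature, the cosine similarity, and the edge-level grouping are illustrative choices.

```python
# Sketch of relation-knowledge distillation: softmax over edge-endpoint
# similarities defines a distribution, and the local model is pushed (via KL)
# toward the global model's distribution. Details beyond the summary above
# are assumptions.
import torch
import torch.nn.functional as F

def neighbor_similarity(h, edge_index):
    """Cosine similarity of each edge's endpoints under representations h."""
    src, dst = edge_index  # shape [2, num_edges]
    return F.cosine_similarity(h[src], h[dst], dim=-1)

def relation_distill_loss(h_local, h_global, edge_index, tau=1.0):
    s_local = neighbor_similarity(h_local, edge_index) / tau
    s_global = neighbor_similarity(h_global, edge_index) / tau
    # One distribution over all edges for simplicity; a per-node grouping
    # is the more faithful variant.
    p_global = F.softmax(s_global, dim=0)
    log_p_local = F.log_softmax(s_local, dim=0)
    return F.kl_div(log_p_local, p_global, reduction="sum")
```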
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
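
The context-sampling step that GSPT's summary describes ("samples node contexts through random walks") is easy to isolate from the transformer itself. A minimal, self-contained version might look like this; walk length and adjacency handling are assumptions.

```python
# Minimal random-walk context sampler in the spirit of the GSPT description:
# each node's context is the sequence of nodes visited by a fixed-length
# random walk, usable as a token sequence for a transformer.
import random
from collections import defaultdict

def build_adj(edges):
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

def sample_context(adj, start, walk_length=8, seed=None):
    rng = random.Random(seed)
    walk = [start]
    while len(walk) < walk_length and adj[walk[-1]]:
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
adj = build_adj(edges)
print(sample_context(adj, start=0, walk_length=6, seed=42))
```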
- HiFGL: A Hierarchical Framework for Cross-silo Cross-device Federated Graph Learning [12.073150043485084]
Federated Graph Learning (FGL) has emerged as a promising way to learn high-quality representations from distributed graph data.
We propose a Hierarchical Federated Graph Learning framework for cross-silo cross-device FGL.
Specifically, we devise a unified hierarchical architecture to safeguard federated GNN training on heterogeneous clients.
arXiv Detail & Related papers (2024-06-15T12:34:40Z)
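
"Cross-silo cross-device" suggests a two-level aggregation: devices are aggregated within each silo, then silo models are aggregated at the server. The sketch below shows that hierarchy with plain state-dict averaging; the weighting scheme and any privacy machinery are assumptions.

```python
# Sketch of the two-level aggregation implied by "cross-silo cross-device":
# average device models within each silo, then average across silos.
import copy
import torch

def average_states(states, weights=None):
    n = len(states)
    weights = weights or [1.0 / n] * n
    out = copy.deepcopy(states[0])
    for key in out:
        out[key] = sum(w * s[key].float() for w, s in zip(weights, states))
    return out

def hierarchical_aggregate(silos):
    """silos: list of lists of device state_dicts."""
    silo_states = [average_states(devices) for devices in silos]
    return average_states(silo_states)  # global model across silos
```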
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning [12.834423184614849]
Subgraph federated learning (subgraph-FL) facilitates the collaborative training of graph neural networks (GNNs) across multi-client subgraphs.
However, node and topology variation leads to significant differences in the class-wise knowledge reliability of the local GNNs.
We propose FedTAD, a topology-aware data-free knowledge distillation technique, to enhance reliable knowledge transfer from the local models to the global model.
arXiv Detail & Related papers (2024-04-22T10:19:02Z)
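
Combining the two points in this summary, class-wise reliability and data-free distillation from local to global, one plausible sketch weights each local teacher's logits by a per-class reliability score while distilling on generated pseudo inputs. The generator, the reliability scores, and the loss form are assumptions.

```python
# Sketch of reliability-weighted, data-free distillation: local models teach
# the global model on generated pseudo inputs, with each teacher's
# contribution weighted per class. Illustration only.
import torch
import torch.nn.functional as F

def distill_step(global_model, local_models, reliability, pseudo_batch, opt):
    """One distillation step.

    reliability: tensor [num_clients, num_classes], higher = more trustworthy.
    pseudo_batch: synthetic inputs produced by a (hypothetical) generator.
    """
    with torch.no_grad():
        teacher_logits = torch.stack([m(pseudo_batch) for m in local_models])
        w = reliability.unsqueeze(1)                 # [clients, 1, classes]
        ensemble = (teacher_logits * w).sum(0) / w.sum(0)
    student_logits = global_model(pseudo_batch)
    loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                    F.softmax(ensemble, dim=-1), reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```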
- FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks, achieving substantial and consistent improvements under diverse, simultaneous sources of heterogeneity.
arXiv Detail & Related papers (2024-02-13T20:04:39Z)
- FedGT: Federated Node Classification with Scalable Graph Transformer [27.50698154862779]
We propose a scalable Federated Graph Transformer (FedGT) in this paper.
FedGT computes clients' similarity based on the aligned global nodes with optimal transport.
arXiv Detail & Related papers (2024-01-26T21:02:36Z)
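
"Similarity based on the aligned global nodes with optimal transport" can be illustrated by treating each client's global-node embeddings as a uniform point cloud and comparing clouds with an entropic-OT (Sinkhorn) cost. The sketch below is that illustration only; the epsilon, the squared-Euclidean cost, and the exponential mapping to a similarity are assumptions.

```python
# Hedged sketch of client similarity via optimal transport: entropic OT
# (Sinkhorn iterations) between uniform measures on two embedding sets.
import torch

def sinkhorn_cost(x, y, eps=0.1, iters=100):
    """Entropic OT cost between uniform measures on the rows of x and y."""
    cost = torch.cdist(x, y) ** 2                 # pairwise squared L2
    k = torch.exp(-cost / eps)
    a = torch.full((x.size(0),), 1.0 / x.size(0))
    b = torch.full((y.size(0),), 1.0 / y.size(0))
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(iters):                        # Sinkhorn scaling updates
        u = a / (k @ v)
        v = b / (k.T @ u)
    plan = u.unsqueeze(1) * k * v.unsqueeze(0)    # transport plan
    return (plan * cost).sum()

def client_similarity(x, y):
    return torch.exp(-sinkhorn_cost(x, y))        # map cost to (0, 1]
```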
- Personalized Subgraph Federated Learning [56.52903162729729]
We introduce a new subgraph FL problem, personalized subgraph FL, which focuses on the joint improvement of the interrelated local GNNs.
We propose a novel framework, FEDerated Personalized sUBgraph learning (FED-PUB), to tackle it.
We validate our FED-PUB for its subgraph FL performance on six datasets, considering both non-overlapping and overlapping subgraphs.
arXiv Detail & Related papers (2022-06-21T09:02:53Z)