Communication-efficient Federated Graph Classification via Generative Diffusion Modeling
- URL: http://arxiv.org/abs/2601.15722v1
- Date: Thu, 22 Jan 2026 07:46:47 GMT
- Title: Communication-efficient Federated Graph Classification via Generative Diffusion Modeling
- Authors: Xiuling Wang, Xin Huang, Haibo Hu, Jianliang Xu
- Abstract summary: Graph Neural Networks (GNNs) unlock new ways of learning from graph-structured data, proving highly effective in capturing complex relationships and patterns. Federated GNNs (FGNNs) face two significant challenges: high communication overhead from multiple rounds of parameter exchanges and non-IID data characteristics across clients. We introduce CeFGC, a novel FGNN paradigm that facilitates efficient GNN training over non-IID data by limiting communication between the server and clients to three rounds only.
- Score: 20.26837995333675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) unlock new ways of learning from graph-structured data, proving highly effective in capturing complex relationships and patterns. Federated GNNs (FGNNs) have emerged as a prominent distributed learning paradigm for training GNNs over decentralized data. However, FGNNs face two significant challenges: high communication overhead from multiple rounds of parameter exchanges and non-IID data characteristics across clients. To address these issues, we introduce CeFGC, a novel FGNN paradigm that facilitates efficient GNN training over non-IID data by limiting communication between the server and clients to three rounds only. The core idea of CeFGC is to leverage generative diffusion models to minimize direct client-server communication. Each client trains a generative diffusion model that captures its local graph distribution and shares this model with the server, which then redistributes it to all clients. Using these generative models, clients generate synthetic graphs, combine them with their local graphs, and train local GNN models. Finally, clients upload their model weights to the server for aggregation into a global GNN model. We theoretically analyze the I/O complexity of the communication volume and show that CeFGC needs only a constant three communication rounds. Extensive experiments on several real graph datasets demonstrate the effectiveness and efficiency of CeFGC against state-of-the-art competitors; CeFGC achieves superior performance on non-IID graphs by aligning local and global model objectives and enriching the training set with diverse graphs.
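To make the three-round flow concrete, here is a minimal sketch in Python, based only on the abstract's description. All names (Client, train_diffusion, train_gnn, fedavg) and the stubbed "diffusion model" are hypothetical placeholders, not the authors' implementation.

```python
# Hedged sketch of CeFGC's three-round protocol, inferred from the abstract.
import random
from dataclasses import dataclass, field

@dataclass
class Client:
    local_graphs: list                      # private local graph dataset
    diffusion_model: dict = None            # generative model of the local distribution
    gnn_weights: list = field(default_factory=list)

    def train_diffusion(self):
        # Stub: fit a generative diffusion model to the local graphs.
        self.diffusion_model = {"dist": list(self.local_graphs)}
        return self.diffusion_model

    def train_gnn(self, all_models, n_synth=4):
        # Sample synthetic graphs from every client's generative model, then
        # train a local GNN on local + synthetic graphs (training is stubbed).
        synthetic = [random.choice(m["dist"]) for m in all_models for _ in range(n_synth)]
        train_set = self.local_graphs + synthetic
        self.gnn_weights = [float(len(train_set))]  # stand-in for learned weights
        return self.gnn_weights

def fedavg(weight_lists):
    # One-shot aggregation of client GNN weights into a global model.
    return [sum(ws) / len(ws) for ws in zip(*weight_lists)]

clients = [Client(local_graphs=[f"g{i}_{j}" for j in range(3)]) for i in range(4)]
models = [c.train_diffusion() for c in clients]         # round 1: clients -> server
local_weights = [c.train_gnn(models) for c in clients]  # round 2: server -> clients, then local training
global_gnn = fedavg(local_weights)                      # round 3: clients -> server
print(global_gnn)
```

In this reading, only the diffusion models travel in rounds 1 and 2 and the GNN weights in round 3, which is where the constant communication count comes from.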
Related papers
- GraphFedMIG: Tackling Class Imbalance in Federated Graph Learning via Mutual Information-Guided Generation [19.1700923188257]
Federated graph learning (FGL) enables multiple clients to collaboratively train powerful graph neural networks without sharing their private, decentralized graph data. We propose GraphFedMIG, a novel FGL framework that reframes the problem as a federated generative data augmentation task. We conduct extensive experiments on four real-world datasets, and the results demonstrate the superiority of the proposed GraphFedMIG compared with other baselines.
arXiv Detail & Related papers (2025-08-14T09:16:56Z) - FedHERO: A Federated Learning Approach for Node Classification Task on Heterophilic Graphs [55.51300642911766]
Federated Graph Learning (FGL) empowers clients to collaboratively train graph neural networks (GNNs) in a distributed manner. FGL methods usually require that the graph data owned by all clients is homophilic to ensure similar neighbor distribution patterns of nodes. We propose FedHERO, an FGL framework designed to harness and share insights from heterophilic graphs effectively.
arXiv Detail & Related papers (2025-04-29T22:23:35Z) - Federated Prototype Graph Learning [33.38948169766356]
Federated Graph Learning (FGL) has gained significant attention for its distributed training capabilities. We propose FedPG as a general prototype-guided optimization method for multi-level FGL heterogeneity. Experiments demonstrate that FedPG outperforms SOTA baselines by an average of 3.57% in accuracy while reducing communication costs by 168x.
arXiv Detail & Related papers (2025-04-13T09:21:21Z) - Knowledge-Driven Federated Graph Learning on Model Heterogeneity [47.98634086448171]
Federated graph learning (FGL) has emerged as a promising paradigm for collaborative graph representation learning. We propose the Federated Graph Knowledge Collaboration (FedGKC) framework to address the challenge of model-centric heterogeneous FGL. FedGKC achieves an average accuracy gain of 3.74% over baselines in MHtFGL scenarios, while maintaining excellent performance in homogeneous settings.
arXiv Detail & Related papers (2025-01-22T04:12:32Z) - One Model for One Graph: A New Perspective for Pretraining with Cross-domain Graphs [59.7297608804716]
Graph Neural Networks (GNNs) have emerged as a powerful tool to capture intricate network patterns. Existing GNNs require careful domain-specific architecture designs and training from scratch on each dataset. We propose a novel cross-domain pretraining framework, "one model for one graph".
arXiv Detail & Related papers (2024-11-30T01:49:45Z) - Federated Graph Learning with Graphless Clients [52.5629887481768]
Federated Graph Learning (FGL) trains machine learning models, such as Graph Neural Networks (GNNs), over decentralized data.
We propose FedGLS, a novel framework that tackles the problem of graphless clients in FGL.
arXiv Detail & Related papers (2024-11-13T06:54:05Z) - Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication [100.51884192970499]
GNNs are a powerful family of neural networks for learning over graphs.
Scaling GNNs by deepening or widening suffers from prevalent issues of unhealthy gradients, over-smoothing, and information squashing.
We propose not to deepen or widen current GNNs, but instead take a data-centric perspective of model soups tailored for GNNs (a minimal soup-averaging sketch appears after this list).
arXiv Detail & Related papers (2023-06-18T03:33:46Z) - Distributed Learning over Networks with Graph-Attention-Based Personalization [49.90052709285814]
We propose a graph-based personalized algorithm (GATTA) for distributed deep learning.
In particular, the personalized model in each agent is composed of a global part and a node-specific part.
By treating each agent as a node in a graph and its node-specific parameters as node features, the benefits of the graph attention mechanism can be inherited.
arXiv Detail & Related papers (2023-05-22T13:48:30Z) - GLASU: A Communication-Efficient Algorithm for Federated Learning with Vertically Distributed Graph Data [44.02629656473639]
We propose a model splitting method that splits a backbone GNN across the clients and the server, together with GLASU, a communication-efficient algorithm to train such a model.
We offer a theoretical analysis and conduct extensive numerical experiments on real-world datasets, showing that the proposed algorithm effectively trains a GNN model, whose performance matches that of the backbone GNN when trained in a centralized manner.
arXiv Detail & Related papers (2023-03-16T17:47:55Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - FedGCN: Convergence-Communication Tradeoffs in Federated Training of Graph Convolutional Networks [14.824579000821272]
We introduce the Federated Graph Convolutional Network (FedGCN) algorithm, which uses federated learning to train GCN models for semi-supervised node classification.
Compared to prior methods that require extra communication among clients at each training round, FedGCN clients only communicate with the central server in one pre-training step.
Experimental results show that our FedGCN algorithm achieves better model accuracy with 51.7% faster convergence on average and at least 100X less communication compared to prior work.
arXiv Detail & Related papers (2022-01-28T21:39:16Z) - Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks [22.728439336309858]
We propose a communication-efficient distributed GNN training technique named Learn Locally, Correct Globally (LLCG).
LLCG trains a GNN on each machine's local data, ignoring the dependency between nodes on different machines, then sends the locally trained model to the server for periodic model averaging (a toy periodic-averaging sketch appears after this list).
We rigorously analyze the convergence of distributed methods with periodic model averaging for training GNNs and show that naively applying periodic model averaging while ignoring the dependency between nodes suffers from an irreducible residual error.
arXiv Detail & Related papers (2021-11-16T03:07:01Z) - ASFGNN: Automated Separated-Federated Graph Neural Network [17.817867271722093]
We propose an automated Separated-Federated Graph Neural Network (ASFGNN) learning paradigm.
We conduct experiments on benchmark datasets and the results demonstrate that ASFGNN significantly outperforms the naive federated GNN.
arXiv Detail & Related papers (2020-11-06T09:21:34Z)
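For the Graph Ladling entry above, the model-soup idea amounts to greedy weight-space averaging of independently trained models. Below is a hedged toy sketch; greedy_soup and the synthetic validation scorer are illustrative stand-ins, not the paper's code.

```python
import numpy as np

# Toy sketch of greedy weight-space averaging ("model soup").
def greedy_soup(candidates, evaluate):
    # candidates: weight vectors of independently trained models, best first.
    soup = [candidates[0]]
    best = evaluate(np.mean(soup, axis=0))
    for w in candidates[1:]:
        trial = np.mean(soup + [w], axis=0)  # tentatively add this model's weights
        score = evaluate(trial)
        if score >= best:                    # keep it only if the soup improves
            soup.append(w)
            best = score
    return np.mean(soup, axis=0)

# Toy usage: the "validation score" is closeness to a hidden optimum.
rng = np.random.default_rng(0)
target = rng.normal(size=8)
score = lambda w: -float(np.linalg.norm(w - target))
models = sorted([target + rng.normal(scale=0.5, size=8) for _ in range(5)],
                key=score, reverse=True)
soup = greedy_soup(models, score)
print(score(models[0]), score(soup))  # by construction, the soup never scores below the best single model
```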
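For the LLCG entry, here is a toy sketch of the naive periodic-model-averaging loop that the paper analyzes; LLCG itself adds a server-side global correction, omitted here, and the quadratic local objectives are an assumption for illustration.

```python
import numpy as np

# Toy sketch of periodic model averaging; not the LLCG authors' code.
rng = np.random.default_rng(1)
n_machines, dim, rounds, local_steps, lr = 4, 8, 20, 5, 0.1
local_optima = rng.normal(size=(n_machines, dim))  # each machine's local objective

def local_train(w, opt):
    # Stub for "learn locally": gradient steps on the local objective only,
    # ignoring node dependencies that cross machine boundaries.
    for _ in range(local_steps):
        w = w - lr * (w - opt)
    return w

w_global = np.zeros(dim)
for _ in range(rounds):
    local_models = [local_train(w_global.copy(), opt) for opt in local_optima]
    w_global = np.mean(local_models, axis=0)  # periodic model averaging
    # An LLCG-style server correction would go here to remove the residual
    # error caused by the ignored cross-machine dependencies.

print(np.linalg.norm(w_global - local_optima.mean(axis=0)))
```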