Rethinking Federated Graph Learning: A Data Condensation Perspective
- URL: http://arxiv.org/abs/2505.02573v1
- Date: Mon, 05 May 2025 11:23:29 GMT
- Title: Rethinking Federated Graph Learning: A Data Condensation Perspective
- Authors: Hao Zhang, Xunkai Li, Yinlin Zhu, Lianglin Hu
- Abstract summary: Federated graph learning (FGL) promotes collaborative training of graph neural networks (GNNs) over multi-client graphs. We introduce the concept of a condensed graph as a novel optimization carrier to address FGL data heterogeneity. Specifically, we utilize a generalized condensation graph consensus to aggregate comprehensive knowledge from distributed graphs.
- Score: 4.044673636393338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated graph learning is a widely recognized technique that promotes collaborative training of graph neural networks (GNNs) over multi-client graphs. However, existing approaches heavily rely on the communication of model parameters or gradients for federated optimization and fail to adequately address the data heterogeneity introduced by intricate and diverse graph distributions. Although some methods attempt to share additional messages between the server and clients to improve federated convergence during communication, they introduce significant privacy risks and increase communication overhead. To address these issues, we introduce the condensed graph as a novel optimization carrier for FGL data heterogeneity and propose a new FGL paradigm called FedGM. Specifically, we utilize a generalized condensation graph consensus to aggregate comprehensive knowledge from distributed graphs, while minimizing communication costs and privacy risks through a single transmission of the condensed data. Extensive experiments on six public datasets consistently demonstrate the superiority of FedGM over state-of-the-art baselines, highlighting its potential as a novel FGL paradigm.
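The abstract does not spell out FedGM's mechanics, so the following is only a minimal sketch of the building block it relies on: condensing a client's local graph into a small synthetic graph by gradient matching, a standard graph-condensation technique, which could then be transmitted to the server once. All names, shapes, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gcn(X, A, W1, W2):
    # Dense two-layer GCN with self-loops and symmetric normalization.
    A = A + torch.eye(A.size(0))
    d = A.sum(1).clamp(min=1e-6).pow(-0.5)
    A_hat = d[:, None] * A * d[None, :]
    return A_hat @ torch.relu(A_hat @ X @ W1) @ W2

def condense(X, A, y, n_syn=20, d_hid=64, n_cls=7, steps=200, lr=0.01):
    """Distill a local graph (X, A, y) into a small synthetic graph
    (Xs, As, ys) by matching GNN training gradients (illustrative only)."""
    d_in = X.size(1)
    Xs = torch.randn(n_syn, d_in, requires_grad=True)          # synthetic features
    As_logits = torch.randn(n_syn, n_syn, requires_grad=True)  # synthetic structure
    ys = torch.arange(n_syn) % n_cls                           # balanced labels
    opt = torch.optim.Adam([Xs, As_logits], lr=lr)
    for _ in range(steps):
        # Sample a fresh random GNN initialization each step.
        W1 = torch.randn(d_in, d_hid, requires_grad=True)
        W2 = torch.randn(d_hid, n_cls, requires_grad=True)
        g_real = torch.autograd.grad(
            F.cross_entropy(gcn(X, A, W1, W2), y), [W1, W2])
        As = torch.sigmoid((As_logits + As_logits.T) / 2)      # symmetric, in (0, 1)
        g_syn = torch.autograd.grad(
            F.cross_entropy(gcn(Xs, As, W1, W2), ys), [W1, W2],
            create_graph=True)
        # Cosine distance between real-graph and synthetic-graph gradients.
        loss = sum(1 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0)
                   for gr, gs in zip(g_real, g_syn))
        opt.zero_grad(); loss.backward(); opt.step()
    As = torch.sigmoid((As_logits + As_logits.T) / 2)
    return Xs.detach(), As.detach(), ys
```

Under this sketch, each client uploads (Xs, As, ys) to the server exactly once; how the server merges these uploads, the paper's "generalized condensation graph consensus", is not reproduced here.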
Related papers
- Federated Prototype Graph Learning [33.38948169766356]
Federated Graph Learning (FGL) has gained significant attention for its distributed training capabilities. We propose FedPG as a general prototype-guided optimization method for multi-level FGL heterogeneity. Experiments demonstrate that FedPG outperforms SOTA baselines by an average of 3.57% in accuracy while reducing communication costs by 168x.
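As a rough illustration of the prototype-guided family FedPG belongs to (the specifics below are assumptions, not FedPG's actual procedure), each client can summarize its node embeddings as per-class prototypes, which are far cheaper to communicate than model parameters, and the server can aggregate them by a count-weighted average:

```python
import torch

def local_prototypes(H, y, n_cls):
    """Per-class mean embeddings (prototypes) on one client; H: [N, d]."""
    protos, counts = [], []
    for c in range(n_cls):
        mask = (y == c)
        counts.append(int(mask.sum()))
        protos.append(H[mask].mean(0) if mask.any() else torch.zeros(H.size(1)))
    return torch.stack(protos), torch.tensor(counts)

def aggregate(protos_list, counts_list):
    """Server: count-weighted average of client prototypes per class."""
    protos = torch.stack(protos_list)            # [K, C, d]
    w = torch.stack(counts_list).float()         # [K, C]
    w = w / w.sum(0, keepdim=True).clamp(min=1)  # normalize over clients
    return (w.unsqueeze(-1) * protos).sum(0)     # [C, d] global prototypes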
arXiv Detail & Related papers (2025-04-13T09:21:21Z)
- Towards Federated Graph Learning in One-shot Communication [27.325478113745206]
Federated Graph Learning (FGL) has emerged as a promising paradigm for breaking data silos among distributed private graphs. One-shot Federated Learning (OFL) enables collaboration in a single round, but existing OFL methods are ineffective for graph data. We propose the first one-shot personalized FGL method, O-pFGL, for node classification, compatible with Secure Aggregation protocols for privacy preservation.
arXiv Detail & Related papers (2024-11-18T05:59:29Z)
- Against Multifaceted Graph Heterogeneity via Asymmetric Federated Prompt Learning [5.813912301780917]
We propose a Federated Graph Prompt Learning (FedGPL) framework to efficiently enable prompt-based asymmetric graph knowledge transfer.
We conduct theoretical analyses and extensive experiments to demonstrate the significant accuracy and efficiency gains of FedGPL.
arXiv Detail & Related papers (2024-11-04T11:42:25Z)
- Federated Graph Condensation with Information Bottleneck Principles [44.404509071881364]
We propose and study the novel problem of federated graph condensation (FGC) for graph neural networks (GNNs). Under the federated setting, the condensed graph will consistently leak data membership privacy. Our framework consistently protects membership privacy during training.
arXiv Detail & Related papers (2024-05-07T00:08:15Z)
- Privacy-preserving design of graph neural networks with applications to vertical federated learning [56.74455367682945]
We present an end-to-end graph representation learning framework called VESPER.
VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
arXiv Detail & Related papers (2023-10-31T15:34:59Z)
- CONVERT: Contrastive Graph Clustering with Reliable Augmentation [110.46658439733106]
We propose a novel CONtrastiVe Graph ClustEring network with Reliable AugmenTation (CONVERT).
In our method, the data augmentations are processed by the proposed reversible perturb-recover network.
To further guarantee the reliability of semantics, a novel semantic loss is presented to constrain the network.
arXiv Detail & Related papers (2023-08-17T13:07:09Z)
- EGRC-Net: Embedding-induced Graph Refinement Clustering Network [66.44293190793294]
We propose a novel graph clustering network called Embedding-Induced Graph Refinement Clustering Network (EGRC-Net).
EGRC-Net effectively utilizes the learned embedding to adaptively refine the initial graph and enhance the clustering performance.
Our proposed methods consistently outperform several state-of-the-art approaches.
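The summary says the learned embedding adaptively refines the initial graph. One generic way to realize that idea (an illustration under assumptions, not EGRC-Net's exact operator) is to fuse the initial adjacency with a kNN graph built from embedding similarities:

```python
import torch

def refine_graph(A_init, H, k=10, alpha=0.5):
    """Fuse the initial adjacency A_init with a kNN graph derived from
    node embeddings H; a generic approximation of embedding-induced
    graph refinement, not the paper's exact mechanism."""
    Hn = torch.nn.functional.normalize(H, dim=1)
    S = Hn @ Hn.T                               # cosine similarities
    topk = S.topk(k + 1, dim=1).indices[:, 1:]  # k nearest, dropping self
    A_knn = torch.zeros_like(S).scatter_(1, topk, 1.0)
    A_knn = ((A_knn + A_knn.T) > 0).float()     # symmetrize
    return alpha * A_init + (1 - alpha) * A_knn
```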
arXiv Detail & Related papers (2022-11-19T09:08:43Z)
- Privatized Graph Federated Learning [57.14673504239551]
We introduce graph federated learning, which consists of multiple units connected by a graph.
We show how graph homomorphic perturbations can be used to ensure the algorithm is differentially private.
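The paper's graph homomorphic perturbation is constructed so that injected noise cancels in the network-wide aggregate; the sketch below only shows the generic skeleton it refines: units perturb the updates they share, then combine them over the communication graph with Metropolis-style mixing weights. The noise model and all names are illustrative assumptions.

```python
import torch

def private_mix(updates, A, sigma=0.1):
    """One privatized combination step over a communication graph.
    updates: [K, d] local model updates; A: [K, K] 0/1 adjacency without
    self-loops; sigma sets the (assumed) Gaussian noise scale."""
    deg = A.sum(1)
    W = torch.zeros_like(A)
    for i, j in A.nonzero(as_tuple=False).tolist():
        if i != j:                              # Metropolis weights on edges
            W[i, j] = 1.0 / (1.0 + max(deg[i].item(), deg[j].item()))
    W += torch.diag(1.0 - W.sum(1))             # rows sum to 1
    noisy = updates + sigma * torch.randn_like(updates)  # perturb before sharing
    return W @ noisy                            # neighborhood-averaged updates
```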
arXiv Detail & Related papers (2022-03-14T13:48:23Z)
- FedGCN: Convergence-Communication Tradeoffs in Federated Training of Graph Convolutional Networks [14.824579000821272]
We introduce the Federated Graph Convolutional Network (FedGCN) algorithm, which uses federated learning to train GCN models for semi-supervised node classification.
Compared to prior methods that require extra communication among clients at each training round, FedGCN clients only communicate with the central server in one pre-training step.
Experimental results show that our FedGCN algorithm achieves better model accuracy with 51.7% faster convergence on average and at least 100X less communication compared to prior work.
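Conceptually, the one-time pre-training communication supplies each client with the cross-client neighbor aggregates its GCN layers need, so no neighbor information has to flow between clients during training rounds. A much-simplified, server-centric sketch with illustrative names and shapes, not FedGCN's actual protocol:

```python
import torch

def pretraining_exchange(X, A_global, owner, num_clients):
    """One-shot pre-training step: every node receives the feature sum of
    ALL its neighbors (including those held by other clients) exactly once.
    X: [N, d] global features; A_global: [N, N]; owner[v] = client of node v."""
    agg = A_global @ X          # one-hop neighbor feature sums for every node
    return {k: agg[owner == k] for k in range(num_clients)}
```

Each client then trains locally against these fixed aggregates, so the per-round communication reduces to ordinary model averaging with the server.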
arXiv Detail & Related papers (2022-01-28T21:39:16Z)
- Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
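For context on what "transferring labels via a graph" means, here is the classic label-propagation baseline that graph-based SSL builds on: spread the few known labels over the normalized affinity graph while anchoring the labeled nodes. This is the standard algorithm (Zhou et al.), not the paper's GCN-based contribution.

```python
import torch

def label_propagation(A, Y, alpha=0.9, iters=50):
    """A: [N, N] adjacency; Y: [N, C] one-hot rows for labeled nodes,
    zero rows for unlabeled ones. Returns predicted labels for all nodes."""
    A = A + torch.eye(A.size(0))                # self-loops
    d = A.sum(1).clamp(min=1e-6).pow(-0.5)
    S = d[:, None] * A * d[None, :]             # symmetric normalization
    scores = Y.clone()
    for _ in range(iters):
        scores = alpha * (S @ scores) + (1 - alpha) * Y  # propagate + anchor
    return scores.argmax(1)
```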
arXiv Detail & Related papers (2020-09-15T13:59:28Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
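A simplified reading of the tensor-graph idea (an assumed layer design, not the authors' exact architecture): store one adjacency slice per relation, propagate features through each slice with relation-specific weights, and mix the per-relation outputs with learnable coefficients:

```python
import torch

class TensorGCNLayer(torch.nn.Module):
    """One multi-relational layer over an adjacency tensor A: [R, N, N]."""
    def __init__(self, d_in, d_out, n_rel):
        super().__init__()
        self.W = torch.nn.Parameter(torch.randn(n_rel, d_in, d_out) * 0.1)
        self.mix = torch.nn.Parameter(torch.ones(n_rel) / n_rel)

    def forward(self, A_tensor, X):
        # Per-relation propagation: A_r @ X @ W_r for every relation r.
        out = torch.einsum('rnm,md,rdo->rno', A_tensor, X, self.W)
        # Learnable combination of the R relation-specific outputs.
        return torch.relu(torch.einsum('r,rno->no', self.mix, out))
```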
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
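GMI's precise definition is in the paper; the sketch below only shows the generic discriminator-based MI maximization it relates to: score aligned (input, representation) pairs against mismatched ones with a JSD-style objective. The bilinear discriminator and all names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def mi_style_loss(X, H, D):
    """X: [N, d_in] inputs; H: [N, d_hid] encoder outputs;
    D: torch.nn.Bilinear(d_in, d_hid, 1) acting as the discriminator.
    Minimizing this loss pushes up a JSD-based MI lower bound."""
    pos = D(X, H).squeeze(-1)                             # aligned pairs
    neg = D(X[torch.randperm(X.size(0))], H).squeeze(-1)  # mismatched pairs
    return (F.softplus(-pos) + F.softplus(neg)).mean()

# Usage sketch: D = torch.nn.Bilinear(d_in, d_hid, 1)
#               loss = mi_style_loss(X, encoder(X, A), D)
```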
arXiv Detail & Related papers (2020-02-04T08:33:49Z)