Subgraph Federated Learning for Local Generalization
- URL: http://arxiv.org/abs/2503.03995v1
- Date: Thu, 06 Mar 2025 01:08:01 GMT
- Title: Subgraph Federated Learning for Local Generalization
- Authors: Sungwon Kim, Yoonho Lee, Yunhak Oh, Namkyeong Lee, Sukwon Yun, Junseok Lee, Sein Kim, Carl Yang, Chanyoung Park
- Abstract summary: Federated Learning (FL) on graphs enables collaborative model training to enhance performance without compromising the privacy of each client. Existing methods often overlook the mutable nature of graph data, which frequently introduces new nodes and leads to shifts in label distribution. Our proposed method, FedLoG, effectively tackles this issue by mitigating local overfitting.
- Score: 41.64806982207585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) on graphs enables collaborative model training to enhance performance without compromising the privacy of each client. However, existing methods often overlook the mutable nature of graph data, which frequently introduces new nodes and leads to shifts in label distribution. Since they focus solely on performing well on each client's local data, they are prone to overfitting to their local distributions (i.e., local overfitting), which hinders their ability to generalize to unseen data with diverse label distributions. In contrast, our proposed method, FedLoG, effectively tackles this issue by mitigating local overfitting. Our model generates global synthetic data by condensing the reliable information from each class representation and its structural information across clients. Using these synthetic data as a training set, we alleviate the local overfitting problem by adaptively generalizing the absent knowledge within each local dataset. This enhances the generalization capabilities of local models, enabling them to handle unseen data effectively. Our model outperforms baselines in our proposed experimental settings, which are designed to measure generalization power to unseen data in practical scenarios. Our code is available at https://github.com/sung-won-kim/FedLoG
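The condensation step in the abstract lends itself to a small illustration. The following is a hypothetical, feature-only simplification, assuming the server can pool per-class statistics from clients: it condenses each class into one synthetic example via a count-weighted average of client prototypes. FedLoG's actual method also condenses structural information and uses a learned notion of reliability, so the function name and the count-based weighting here are assumptions, not the paper's implementation.

```python
import torch

def condense_global_synthetic(client_feats, client_labels, num_classes):
    # Hypothetical sketch: one synthetic example per class, formed by a
    # count-weighted average of class prototypes across clients.
    feat_dim = client_feats[0].shape[1]
    proto_sum = torch.zeros(num_classes, feat_dim)
    count_sum = torch.zeros(num_classes)
    for X, y in zip(client_feats, client_labels):
        for c in range(num_classes):
            mask = y == c
            if mask.any():
                proto_sum[c] += X[mask].sum(dim=0)  # accumulate class features
                count_sum[c] += mask.sum()          # crude 'reliability' proxy
    valid = count_sum > 0
    synthetic_x = proto_sum[valid] / count_sum[valid].unsqueeze(1)
    synthetic_y = torch.arange(num_classes)[valid]
    # Each client appends (synthetic_x, synthetic_y) to its local training
    # set to supply the classes its own data lacks.
    return synthetic_x, synthetic_y
```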
Related papers
- FedHERO: A Federated Learning Approach for Node Classification Task on Heterophilic Graphs [55.51300642911766]
Federated Graph Learning (FGL) empowers clients to collaboratively train graph neural networks (GNNs) in a distributed manner.
FGL methods usually require that the graph data owned by all clients is homophilic to ensure similar neighbor distribution patterns of nodes.
We propose FedHERO, an FGL framework designed to harness and share insights from heterophilic graphs effectively.
arXiv Detail & Related papers (2025-04-29T22:23:35Z)
- Conditioning on Local Statistics for Scalable Heterogeneous Federated Learning [0.0]
Federated learning is a distributed machine learning approach where multiple clients collaboratively train a model without sharing their local data.
We propose to use local characteristic statistics: statistical properties computed independently by each client from its own data.
During training, these local statistics help the model learn how to condition on the local data distribution, and during inference, they guide the client's predictions.
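One concrete reading of "conditioning on local statistics" is sketched below, assuming the statistics are per-feature means and standard deviations fed to a shared model alongside every input; the abstract does not fix the architecture or the choice of statistics, so both are assumptions.

```python
import torch
import torch.nn as nn

class LocallyConditionedNet(nn.Module):
    # Hypothetical sketch: a shared model that takes each client's local
    # feature statistics as an extra conditioning input.
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        # input = raw features + local mean + local std
        self.net = nn.Sequential(nn.Linear(in_dim * 3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_classes))

    def forward(self, x, local_stats):
        mean, std = local_stats                        # each of shape (in_dim,)
        cond = torch.cat([mean, std]).expand(x.size(0), -1)
        return self.net(torch.cat([x, cond], dim=1))

def client_stats(X):
    # Computed once per client from its own data; raw data never leaves.
    return X.mean(dim=0), X.std(dim=0)
```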
arXiv Detail & Related papers (2025-03-01T07:10:58Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
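For contrast, the aggregate-then-adapt framework that FedAF departs from looks roughly like the FedAvg-style round sketched below; `local_train` is a hypothetical stand-in for the client's local optimizer loop.

```python
import copy
import torch

def fedavg_round(global_model, clients, local_train):
    # Conventional aggregate-then-adapt: each client adapts the latest
    # global weights, then the server averages the resulting models.
    states = []
    for client_data in clients:
        local = copy.deepcopy(global_model)   # adapt: start from global
        local_train(local, client_data)       # client-side SGD steps
        states.append(local.state_dict())
    avg = {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
           for k in states[0]}                # aggregate: parameter average
    global_model.load_state_dict(avg)
    return global_model
```

An aggregation-free method like FedAF replaces this server-side parameter average with a different way of building the global model.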
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Federated Skewed Label Learning with Logits Fusion [23.062650578266837]
Federated learning (FL) aims to collaboratively train a shared model across multiple clients without transmitting their local data.
We propose FedBalance, which corrects the optimization bias among local models by calibrating their logits.
Our method achieves 13% higher average accuracy than state-of-the-art methods.
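The abstract does not spell out the calibration, so the sketch below shows the standard logit-adjustment idea such a correction builds on, assuming access to the client's label counts: shift the logits by the log of the local class prior so skewed labels stop biasing the loss. The fusion of logits that gives FedBalance its name is omitted.

```python
import torch
import torch.nn.functional as F

def calibrated_loss(logits, targets, local_class_counts, tau=1.0):
    # Standard logit adjustment: add log local priors so the loss no
    # longer rewards simply predicting the locally frequent classes.
    prior = local_class_counts.float() / local_class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)
    return F.cross_entropy(adjusted, targets)
```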
arXiv Detail & Related papers (2023-11-14T14:37:33Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose FedCSD, a class prototype similarity distillation method that aligns the local and global models in a federated framework.
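A minimal sketch of the logit-alignment term such a method revolves around is given below, assuming a frozen copy of the global model; FedCSD's class-prototype-similarity weighting is omitted.

```python
import torch.nn.functional as F

def distill_to_global(local_logits, global_logits, temperature=2.0):
    # Penalize divergence between local and (frozen) global predictions,
    # i.e. the growing local-global logit gap the paper identifies.
    p_global = F.softmax(global_logits / temperature, dim=1)
    log_p_local = F.log_softmax(local_logits / temperature, dim=1)
    return (F.kl_div(log_p_local, p_global, reduction="batchmean")
            * temperature ** 2)
```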
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Adaptive Self-Distillation for Minimizing Client Drift in Heterogeneous Federated Learning [9.975023463908496]
Federated Learning (FL) is a machine learning paradigm that enables clients to jointly train a global model by aggregating the locally trained models without sharing any local training data.
We propose a novel regularization technique based on adaptive self-distillation (ASD) for training models on the client side.
Our regularization scheme adaptively adjusts to the client's training data based on the global model entropy and the client's label distribution.
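The abstract names the two adaptation signals (the global model's entropy and the client's label distribution) without fixing how they combine, so the weighting in the sketch below is an assumption.

```python
import torch
import torch.nn.functional as F

def asd_regularizer(local_logits, global_logits, label_counts):
    # Self-distillation toward the global model's soft predictions,
    # scaled by (a) the global model's prediction entropy and (b) how
    # far the client's label distribution is from uniform.
    p_g = F.softmax(global_logits, dim=1)
    entropy = -(p_g * torch.log(p_g + 1e-12)).sum(dim=1).mean()
    prior = label_counts.float() / label_counts.sum()
    skew = (prior * torch.log(prior * len(prior) + 1e-12)).sum()  # KL to uniform
    kd = F.kl_div(F.log_softmax(local_logits, dim=1), p_g,
                  reduction="batchmean")
    return entropy * (1.0 + skew) * kd
```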
arXiv Detail & Related papers (2023-05-31T07:00:42Z)
- The Best of Both Worlds: Accurate Global and Personalized Models through Federated Learning with Data-Free Hyper-Knowledge Distillation [17.570719572024608]
FedHKD (Federated Hyper-Knowledge Distillation) is a novel FL algorithm in which clients rely on knowledge distillation to train local models.
Unlike other KD-based pFL methods, FedHKD neither relies on a public dataset nor deploys a generative model at the server.
We conduct extensive experiments on visual datasets in a variety of scenarios, demonstrating that FedHKD significantly improves both personalized and global model performance.
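The "hyper-knowledge" a client shares in place of raw data can be sketched as per-class mean representations and mean soft predictions computed over its own samples; this is a simplification, and the paper's exact construction and any privacy safeguards are not reproduced here.

```python
import torch
import torch.nn.functional as F

def hyper_knowledge(features, logits, labels, num_classes):
    # Per class: mean feature representation and mean soft prediction.
    probs = F.softmax(logits, dim=1)
    reps, soft = [], []
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            reps.append(features[mask].mean(dim=0))
            soft.append(probs[mask].mean(dim=0))
        else:  # class absent locally: send zeros as a placeholder
            reps.append(torch.zeros(features.size(1)))
            soft.append(torch.zeros(num_classes))
    return torch.stack(reps), torch.stack(soft)
```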
arXiv Detail & Related papers (2023-01-21T16:20:57Z)
- Personalized Federated Learning through Local Memorization [10.925242558525683]
Federated learning allows clients to collaboratively learn statistical models while keeping their data local.
Recent personalized federated learning methods train a separate model for each client while still leveraging the knowledge available at other clients.
We show on a suite of federated datasets that this approach achieves significantly higher accuracy and fairness than state-of-the-art methods.
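The memorization mechanism can be sketched as a client-side datastore of (embedding, label) pairs whose k-nearest-neighbor vote is interpolated with the global model's prediction; the values of k and the mixing weight lam below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def knn_personalized_predict(query_emb, global_logits, memory_embs,
                             memory_labels, num_classes, k=10, lam=0.5):
    # Distances from the query to every memorized local embedding.
    dists = torch.cdist(query_emb.unsqueeze(0), memory_embs).squeeze(0)
    knn_idx = dists.topk(k, largest=False).indices
    knn_vote = F.one_hot(memory_labels[knn_idx], num_classes).float().mean(dim=0)
    # Interpolate the local memory vote with the global model's prediction.
    return lam * knn_vote + (1 - lam) * F.softmax(global_logits, dim=-1)
```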
arXiv Detail & Related papers (2021-11-17T19:40:07Z)
- Tackling the Local Bias in Federated Graph Learning [48.887310972708036]
In Federated graph learning (FGL), a global graph is distributed across different clients, where each client holds a subgraph.
Existing FGL methods fail to effectively utilize cross-client edges, losing structural information during training.
We propose a novel FGL framework to make the local models similar to the model trained in a centralized setting.
arXiv Detail & Related papers (2021-10-22T08:22:36Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates of the low-dimensional local parameters for each update of the shared representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
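The alternating scheme this describes can be sketched as follows (a single-batch simplification): many inexpensive updates of the small client-specific head while the shared representation is held fixed, then one update of the representation itself, which is all the client sends back.

```python
import torch
import torch.nn.functional as F

def client_update(rep, head, X, y, head_steps=5, lr=0.01):
    # Many cheap head updates: only the head optimizer steps, so the
    # shared representation's weights stay fixed during this phase.
    opt_head = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(head_steps):
        loss = F.cross_entropy(head(rep(X)), y)
        opt_head.zero_grad()
        loss.backward()
        opt_head.step()
    # One representation update with the (now adapted) head.
    opt_rep = torch.optim.SGD(rep.parameters(), lr=lr)
    loss = F.cross_entropy(head(rep(X)), y)
    opt_rep.zero_grad()
    loss.backward()
    opt_rep.step()
    return rep.state_dict()  # only the shared representation is communicated
```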
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.