GeFL: Model-Agnostic Federated Learning with Generative Models
- URL: http://arxiv.org/abs/2412.18460v2
- Date: Sat, 17 May 2025 02:40:42 GMT
- Title: GeFL: Model-Agnostic Federated Learning with Generative Models
- Authors: Honggu Kang, Seohyeon Cha, Joonhyuk Kang
- Abstract summary: Federated learning (FL) is a distributed training paradigm that enables collaborative learning across clients without sharing local data, thereby preserving privacy. We propose Generative Model-Aided Federated Learning (GeFL), a framework that enables cross-client knowledge sharing via a generative model trained in a federated manner.
- Score: 3.4546761246181696
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Federated learning (FL) is a distributed training paradigm that enables collaborative learning across clients without sharing local data, thereby preserving privacy. However, the increasing scale and complexity of modern deep models often exceed the computational or memory capabilities of edge devices. Furthermore, clients may be constrained to use heterogeneous model architectures due to hardware variability (e.g., ASICs, FPGAs) or proprietary requirements that prevent the disclosure or modification of local model structures. These practical considerations motivate the need for model-heterogeneous FL, where clients participate using distinct model architectures. In this work, we propose Generative Model-Aided Federated Learning (GeFL), a framework that enables cross-client knowledge sharing via a generative model trained in a federated manner. This generative model captures global data semantics and facilitates local training without requiring model homogeneity across clients. While GeFL achieves strong performance, empirical analysis reveals limitations in scalability and potential privacy leakage due to generative sample memorization. To address these concerns, we propose GeFL-F, which utilizes feature-level generative modeling. This approach enhances scalability to large client populations and mitigates privacy risks. Extensive experiments across image classification tasks demonstrate that both GeFL and GeFL-F offer competitive performance in heterogeneous settings. Code is available at [1].
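The abstract describes a two-phase flow: a shared generative model is first trained with standard federated averaging (its architecture is common across clients even though the classifiers are not), and each client then trains its own heterogeneous classifier with the help of samples from that generator. The following is a minimal runnable sketch of that flow on toy data; all class names, shapes, and the FedAvg helper are illustrative assumptions, not the authors' implementation, which is available at the linked repository [1].

```python
# Hedged sketch of the GeFL flow described in the abstract, on toy data.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_CLASSES, DIM = 4, 16

class CondGenerator(nn.Module):
    """Toy class-conditional generator standing in for the shared
    federated generative model (assumed architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(8 + NUM_CLASSES, 32), nn.ReLU(),
                                 nn.Linear(32, DIM))
    def forward(self, z, y):
        onehot = torch.eye(NUM_CLASSES)[y]
        return self.net(torch.cat([z, onehot], dim=1))

def fedavg(states):
    """Average homogeneous parameter sets (the generative model only)."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(0)
    return avg

# Heterogeneous client classifiers: a different width per client.
clients = [nn.Sequential(nn.Linear(DIM, w), nn.ReLU(), nn.Linear(w, NUM_CLASSES))
           for w in (8, 16, 32)]

generator = CondGenerator()
for rnd in range(3):                      # federated rounds for the generator
    local_states = []
    for _ in clients:
        local_gen = copy.deepcopy(generator)
        opt = torch.optim.SGD(local_gen.parameters(), lr=0.1)
        # Stand-in local update: GeFL would fit the client's private data;
        # here we regress toward random targets just to make it run.
        z = torch.randn(32, 8); y = torch.randint(0, NUM_CLASSES, (32,))
        loss = ((local_gen(z, y) - torch.randn(32, DIM)) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        local_states.append(local_gen.state_dict())
    generator.load_state_dict(fedavg(local_states))

# Each client now trains its own architecture on synthetic samples,
# so no model homogeneity across clients is required.
ce = nn.CrossEntropyLoss()
for model in clients:
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    z = torch.randn(64, 8); y = torch.randint(0, NUM_CLASSES, (64,))
    x_syn = generator(z, y).detach()
    loss = ce(model(x_syn), y)
    opt.zero_grad(); loss.backward(); opt.step()
print("trained", len(clients), "heterogeneous clients")
```

GeFL-F would apply the same recipe at the feature level rather than to raw inputs, which the abstract credits with better scalability and lower memorization risk.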
Related papers
- Hypernetworks for Model-Heterogeneous Personalized Federated Learning [13.408669475480824]
We propose a server-side hypernetwork that takes client-specific embedding vectors as input and outputs personalized parameters tailored to each client's heterogeneous model.
To promote knowledge sharing and reduce computation, we introduce a multi-head structure within the hypernetwork, allowing clients with similar model sizes to share heads.
Our framework does not rely on external datasets and does not require disclosure of client model architectures.
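A small sketch of the multi-head hypernetwork idea from this summary: a learned client embedding is mapped to that client's flat parameter vector, with one output head shared per model-size group. All dimensions and names are assumptions for illustration.

```python
# Hedged sketch of a server-side multi-head hypernetwork.
import torch
import torch.nn as nn

class MultiHeadHypernet(nn.Module):
    def __init__(self, embed_dim, hidden, head_sizes):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(embed_dim, hidden), nn.ReLU())
        # One head per group of clients with similar model sizes, so
        # those clients share both the trunk and the head.
        self.heads = nn.ModuleList([nn.Linear(hidden, n) for n in head_sizes])
    def forward(self, client_embedding, head_id):
        return self.heads[head_id](self.trunk(client_embedding))

embed_dim = 8
hyper = MultiHeadHypernet(embed_dim, hidden=64, head_sizes=[100, 400])
client_embeddings = nn.Parameter(torch.randn(5, embed_dim))  # learned per client

# Small-model clients use head 0 (100 params), large ones head 1 (400).
flat_params = hyper(client_embeddings[0], head_id=0)
print(flat_params.shape)   # torch.Size([100]); reshaped into client layers
```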
arXiv Detail & Related papers (2025-07-30T02:24:26Z)
- Not All Clients Are Equal: Personalized Federated Learning on Heterogeneous Multi-Modal Clients [52.14230635007546]
Foundation models have shown remarkable capabilities across diverse multi-modal tasks, but their centralized training raises privacy concerns and induces high transmission costs.
To meet the growing demand for personalizing AI models to different user purposes, personalized federated learning (PFL) has emerged.
PFL allows each client to leverage the knowledge of other clients for further adaptation to individual user preferences, again without the need to share data.
arXiv Detail & Related papers (2025-05-20T09:17:07Z)
- FedSKD: Aggregation-free Model-heterogeneous Federated Learning using Multi-dimensional Similarity Knowledge Distillation [7.944298319589845]
Federated learning (FL) enables privacy-preserving collaborative model training without direct data sharing.
Model-heterogeneous FL (MHFL) allows clients to train personalized models with heterogeneous architectures tailored to their computational resources and application-specific needs.
While peer-to-peer (P2P) FL removes server dependence, it suffers from model drift and knowledge dilution, limiting its effectiveness in heterogeneous settings.
We propose FedSKD, a novel MHFL framework that facilitates direct knowledge exchange through round-robin model circulation.
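A toy sketch of round-robin model circulation as this summary describes it: each personalized model visits every peer in turn, and each client distills from the visitor on its own data, with no server aggregation. FedSKD's multi-dimensional similarity distillation is simplified here to plain logit-matching KD; models and data are stand-ins.

```python
# Hedged sketch of round-robin model circulation with peer KD.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
DIM, CLASSES = 16, 4
own = [nn.Sequential(nn.Linear(DIM, w), nn.ReLU(), nn.Linear(w, CLASSES))
       for w in (8, 16, 32)]          # heterogeneous personalized models
local_data = [torch.randn(32, DIM) for _ in own]

def distill(student, teacher, x, lr=0.1, temp=2.0):
    """One KD step on local data against the visiting peer model."""
    opt = torch.optim.SGD(student.parameters(), lr=lr)
    with torch.no_grad():
        target = F.softmax(teacher(x) / temp, dim=1)
    loss = F.kl_div(F.log_softmax(student(x) / temp, dim=1), target,
                    reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()

visiting = [copy.deepcopy(m) for m in own]
for _ in range(len(own)):                     # one full circulation, serverless
    visiting = visiting[1:] + visiting[:1]    # pass each model to the next peer
    for i, x in enumerate(local_data):
        distill(own[i], visiting[i], x)       # absorb the visitor's knowledge
```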
arXiv Detail & Related papers (2025-03-23T05:33:10Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- TraceFL: Interpretability-Driven Debugging in Federated Learning via Neuron Provenance [8.18537013016659]
In Federated Learning, clients train models on local data and send updates to a central server, which aggregates them into a global model using a fusion algorithm.
This collaborative yet privacy-preserving training comes at a cost: FL developers face significant challenges in attributing global model predictions to specific clients.
We introduce TraceFL, a fine-grained neuron provenance capturing mechanism that identifies clients responsible for the global model's prediction by tracking the flow of information from individual clients to the global model.
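As a crude, heavily simplified proxy for the provenance idea in this summary (not the TraceFL algorithm itself): score each client by how strongly its submitted update aligns with the parameters that drove a particular global-model prediction. All models and updates below are toy stand-ins.

```python
# Hedged illustration: attribute a prediction to clients via the
# alignment between per-client updates and prediction saliency.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(8, 3)               # stands in for the global model
x = torch.randn(1, 8)

# Which parameters drove this prediction: gradient of the winning logit.
logits = model(x)[0]
model.zero_grad()
logits[logits.argmax()].backward()
saliency = {n: p.grad.clone() for n, p in model.named_parameters()}

# Stand-ins for the per-client updates the server aggregated this round.
client_deltas = [{n: 0.01 * torch.randn_like(p)
                  for n, p in model.named_parameters()} for _ in range(4)]

# Score each client by how strongly its update aligns with the saliency.
scores = torch.stack([sum((saliency[n] * d[n]).sum() for n in d)
                      for d in client_deltas])
print(torch.softmax(scores, dim=0))   # estimated per-client responsibility
```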
arXiv Detail & Related papers (2023-12-21T07:48:54Z)
- Contrastive encoder pre-training-based clustered federated learning for heterogeneous data [17.580390632874046]
Federated learning (FL) enables distributed clients to collaboratively train a global model while preserving their data privacy.
We propose contrastive pre-training-based clustered federated learning (CP-CFL) to improve the model convergence and overall performance of FL systems.
arXiv Detail & Related papers (2023-11-28T05:44:26Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
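A minimal sketch of exchanging tunable soft prompts instead of model weights, as this summary suggests: the local backbone is frozen and private, and only a small prompt tensor is trained and shared. The backbone, shapes, and the pooling trick are toy assumptions; a transformer would instead prepend prompt tokens.

```python
# Hedged sketch: soft prompts as the only shared object in FL.
import torch
import torch.nn as nn

torch.manual_seed(0)
EMBED, PROMPT_LEN, CLASSES = 16, 4, 3

backbone = nn.Sequential(nn.Linear(EMBED, 32), nn.ReLU(),
                         nn.Linear(32, CLASSES))
for p in backbone.parameters():
    p.requires_grad_(False)          # model weights never leave the client

prompt = nn.Parameter(torch.zeros(PROMPT_LEN, EMBED))  # the shared messenger

def forward_with_prompt(x):
    # Fold the soft prompt into the input representation (toy pooling).
    return backbone(x + prompt.mean(0))

opt = torch.optim.SGD([prompt], lr=0.1)
x, y = torch.randn(8, EMBED), torch.randint(0, CLASSES, (8,))
loss = nn.functional.cross_entropy(forward_with_prompt(x), y)
opt.zero_grad(); loss.backward(); opt.step()
# Only `prompt` (a few KB) is sent to the server; the model stays private.
```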
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- PFL-GAN: When Client Heterogeneity Meets Generative Models in Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experiments on several well-known datasets demonstrate the effectiveness of PFL-GAN.
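The similarity-then-weighted-aggregation step can be sketched as follows. How PFL-GAN actually measures client similarity (via shared GAN artifacts) is simplified here to cosine similarity between per-client statistics vectors; all tensors are illustrative.

```python
# Hedged sketch of similarity-weighted collaborative aggregation.
import torch

torch.manual_seed(0)
# One feature-statistics vector per client (an assumed stand-in for
# whatever signal similarity is actually computed from).
stats = torch.randn(4, 16)
sim = torch.nn.functional.cosine_similarity(stats.unsqueeze(1),
                                            stats.unsqueeze(0), dim=2)
weights = torch.softmax(sim, dim=1)       # per-client aggregation weights

client_params = torch.randn(4, 100)       # flattened per-client parameters
personalized = weights @ client_params    # weighted collaborative aggregation
print(personalized.shape)                 # one tailored model per client
```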
arXiv Detail & Related papers (2023-08-23T22:38:35Z)
- NeFL: Nested Model Scaling for Federated Learning with System Heterogeneous Clients [44.89061671579694]
Federated learning (FL) enables distributed training while preserving data privacy, but stragglers (slow or incapable clients) can significantly slow down the total training time and degrade performance.
We propose nested federated learning (NeFL), a framework that efficiently divides deep neural networks into submodels using both depthwise and widthwise scaling.
NeFL achieves performance gains over baseline approaches, especially for the worst-case submodel.
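The depthwise and widthwise scaling in this summary can be illustrated with a toy nested network: weaker clients run a narrower, shallower slice whose weights sit inside the global weights. Ratios and architecture below are assumptions, not NeFL's actual scaling rules.

```python
# Hedged sketch of nested depthwise/widthwise submodel extraction.
import torch
import torch.nn as nn

DIM = 32
blocks = nn.ModuleList([nn.Linear(DIM, DIM) for _ in range(4)])  # depth 4

def submodel_forward(x, depth, width_ratio):
    w = int(DIM * width_ratio)
    for blk in blocks[:depth]:                   # depthwise: fewer blocks
        # widthwise: use only the first w rows/cols of each layer, so
        # submodel weights stay nested inside the global weights.
        weight, bias = blk.weight[:w, :w], blk.bias[:w]
        x = torch.relu(nn.functional.linear(x[:, :w], weight, bias))
        x = nn.functional.pad(x, (0, DIM - w))   # restore width between blocks
    return x

x = torch.randn(8, DIM)
full = submodel_forward(x, depth=4, width_ratio=1.0)    # strongest client
small = submodel_forward(x, depth=2, width_ratio=0.5)   # straggler-friendly
print(full.shape, small.shape)
```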
arXiv Detail & Related papers (2023-08-15T13:29:14Z)
- Confidence-aware Personalized Federated Learning via Variational Expectation Maximization [34.354154518009956]
Personalized Federated Learning (PFL) is a distributed learning scheme to train a shared model across clients.
We present a novel framework for PFL based on hierarchical modeling and variational inference.
arXiv Detail & Related papers (2023-05-21T20:12:27Z)
- Closing the Gap between Client and Global Model Performance in Heterogeneous Federated Learning [2.1044900734651626]
We show how the approach chosen for training custom client models impacts the global model.
We propose a new approach that combines knowledge distillation (KD) and Learning without Forgetting (LwoF) to produce improved personalised models.
arXiv Detail & Related papers (2022-11-07T11:12:57Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose FedFTG, a data-free knowledge distillation method to fine-tune the global model on the server.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
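A hedged sketch of server-side data-free distillation in the spirit of this summary: a generator synthesizes pseudo-data on which the student (global model) and the client ensemble disagree, and the global model is then fine-tuned to match the ensemble on that data. Every architecture and hyperparameter below is a toy assumption.

```python
# Hedged sketch: data-free KD fine-tuning of the global model.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
DIM, CLASSES = 16, 4
global_model = nn.Linear(DIM, CLASSES)
client_models = [nn.Linear(DIM, CLASSES) for _ in range(3)]  # frozen teachers
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, DIM))

g_opt = torch.optim.SGD(generator.parameters(), lr=0.05)
s_opt = torch.optim.SGD(global_model.parameters(), lr=0.05)

for step in range(20):
    # (1) generator step: synthesize pseudo-data where student and
    # teachers disagree (gradients flow through the frozen teachers).
    x_fake = generator(torch.randn(32, 8))
    t_logits = torch.stack([m(x_fake) for m in client_models]).mean(0)
    kl = F.kl_div(F.log_softmax(global_model(x_fake), 1),
                  F.softmax(t_logits, 1), reduction="batchmean")
    g_opt.zero_grad(); (-kl).backward(); g_opt.step()

    # (2) student step: fine-tune the global model to match the client
    # ensemble on the generated data (no real data needed).
    x_fake = generator(torch.randn(32, 8)).detach()
    with torch.no_grad():
        t_logits = torch.stack([m(x_fake) for m in client_models]).mean(0)
    kl = F.kl_div(F.log_softmax(global_model(x_fake), 1),
                  F.softmax(t_logits, 1), reduction="batchmean")
    s_opt.zero_grad(); kl.backward(); s_opt.step()
```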
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method to handle device heterogeneity.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
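An illustrative sketch of the core idea in this summary: each client gets a model sized to its capability, and a layer that exists in several model sizes is averaged across all models deep enough to contain it. The knowledge-sharing details beyond layer averaging are omitted, and all sizes are assumptions.

```python
# Hedged sketch: size-matched models with shared-layer aggregation.
import torch
import torch.nn as nn

def build_model(num_blocks, dim=16):
    return nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_blocks)])

capability = [1, 2, 4]                   # blocks each device can afford
models = [build_model(n) for n in capability]

# Server aggregation: average block i over every model that has it.
with torch.no_grad():
    for i in range(max(capability)):
        owners = [m[i] for m, c in zip(models, capability) if c > i]
        avg_w = torch.stack([o.weight for o in owners]).mean(0)
        avg_b = torch.stack([o.bias for o in owners]).mean(0)
        for o in owners:
            o.weight.copy_(avg_w); o.bias.copy_(avg_b)
```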
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
- Multi-Center Federated Learning [62.32725938999433]
Federated learning (FL) can protect data privacy in distributed learning.
It merely collects local gradients from users without access to their data.
We propose a novel multi-center aggregation mechanism.
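A toy sketch of multi-center aggregation as this summary suggests: instead of one global model, K centers each average the clients nearest to them, in a k-means-style loop over flattened parameters. Dimensions and the assignment rule are illustrative assumptions.

```python
# Hedged sketch: multi-center aggregation over client parameters.
import torch

torch.manual_seed(0)
client_params = torch.randn(10, 50)          # flattened per-client models
centers = client_params[torch.randperm(10)[:3]].clone()   # K = 3 centers

for _ in range(5):
    # Assign each client to its closest center ...
    assign = torch.cdist(client_params, centers).argmin(dim=1)
    # ... then each center becomes the average of its own cluster.
    for k in range(len(centers)):
        members = client_params[assign == k]
        if len(members) > 0:
            centers[k] = members.mean(0)
print(assign.tolist())   # which intermediate global model serves each client
```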
arXiv Detail & Related papers (2021-08-19T12:20:31Z)
- Personalized Federated Learning with Clustered Generalization [16.178571176116073]
We study the recently emerging personalized federated learning (PFL), which aims to deal with the challenging problem of non-IID data in federated settings.
The key difference between PFL and conventional FL methods lies in the training target.
We propose a novel concept called clustered generalization to handle the challenge of statistical heterogeneity in FL.
arXiv Detail & Related papers (2021-06-24T14:17:00Z)
- Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
Experiments show that FML achieves better performance than alternatives in typical federated learning settings.
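A hedged sketch of the mutual-learning step this summary implies: on each client, a shared (generalized) model and a private (personalized) model teach each other via bidirectional KL, in the style of deep mutual learning. Architectures and coefficients are toy stand-ins.

```python
# Hedged sketch: mutual learning between shared and private models.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
DIM, CLASSES = 16, 4
shared = nn.Linear(DIM, CLASSES)       # aggregated across clients
private = nn.Sequential(nn.Linear(DIM, 8), nn.ReLU(), nn.Linear(8, CLASSES))

opt = torch.optim.SGD([*shared.parameters(), *private.parameters()], lr=0.1)
x, y = torch.randn(32, DIM), torch.randint(0, CLASSES, (32,))

for _ in range(5):
    ls, lp = shared(x), private(x)
    loss = (F.cross_entropy(ls, y) + F.cross_entropy(lp, y)
            + F.kl_div(F.log_softmax(ls, 1), F.softmax(lp.detach(), 1),
                       reduction="batchmean")   # shared learns from private
            + F.kl_div(F.log_softmax(lp, 1), F.softmax(ls.detach(), 1),
                       reduction="batchmean"))  # private learns from shared
    opt.zero_grad(); loss.backward(); opt.step()
# Only `shared` is sent for aggregation; `private` stays on-device.
```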
arXiv Detail & Related papers (2020-06-27T09:35:03Z)