Towards Personalized Federated Learning via Heterogeneous Model
Reassembly
- URL: http://arxiv.org/abs/2308.08643v3
- Date: Fri, 27 Oct 2023 04:14:43 GMT
- Title: Towards Personalized Federated Learning via Heterogeneous Model
Reassembly
- Authors: Jiaqi Wang, Xingyi Yang, Suhan Cui, Liwei Che, Lingjuan Lyu, Dongkuan
Xu, Fenglong Ma
- Abstract summary: pFedHR is a framework that leverages heterogeneous model reassembly to achieve personalized federated learning.
pFedHR dynamically generates diverse personalized models in an automated manner.
- Score: 84.44268421053043
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper focuses on addressing the practical yet challenging problem of
model heterogeneity in federated learning, where clients possess models with
different network structures. To tackle this problem, we propose a novel
framework called pFedHR, which leverages heterogeneous model reassembly to
achieve personalized federated learning. In particular, we approach the problem
of heterogeneous model personalization as a model-matching optimization task on
the server side. Moreover, pFedHR automatically and dynamically generates
informative and diverse personalized candidates with minimal human
intervention. Furthermore, our proposed heterogeneous model reassembly
technique mitigates, to a certain extent, the adverse impact of using public
data whose distribution differs from that of the client data. Experimental
results demonstrate that pFedHR outperforms baselines on three datasets under
both IID and Non-IID settings. Additionally, pFedHR effectively reduces the
adverse impact of using different public data and dynamically generates diverse
personalized models in an automated manner.
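The reassembly step is described above only at a high level. As a rough illustration, the following Python sketch shows one plausible reading of it: uploaded client models are decomposed into layer blocks, blocks are grouped by a coarse functional signature, and groups are stitched into candidate networks. All names, the grouping heuristic, and the toy models are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of server-side heterogeneous model reassembly in the
# spirit of the pFedHR description above. Names and the grouping heuristic
# are illustrative assumptions, not the authors' code.
import torch.nn as nn

def decompose(model: nn.Sequential):
    """Split an uploaded client model into its layer blocks."""
    return list(model.children())

def group_blocks(blocks):
    """Cluster blocks by a coarse functional signature (layer type and
    width) -- a stand-in for the paper's layer grouping step."""
    groups = {}
    for b in blocks:
        if isinstance(b, nn.Linear):
            key = ("linear", b.in_features, b.out_features)
        else:
            key = (type(b).__name__,)
        groups.setdefault(key, []).append(b)
    return groups

def reassemble(groups, template):
    """Stitch one personalized candidate by picking the first block in
    each requested group; iterating over different choices would yield
    the diverse candidate pool the server matches back to clients."""
    return nn.Sequential(*(groups[key][0] for key in template))

# Two clients with different network structures (model heterogeneity).
client_a = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
client_b = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 4))

groups = group_blocks(decompose(client_a) + decompose(client_b))
candidate = reassemble(groups, [("linear", 16, 32), ("Tanh",), ("linear", 32, 4)])
print(candidate)  # a stitched candidate mixing blocks from both clients
```

In pFedHR itself, choosing among such candidates is posed as a model-matching optimization on the server side; this sketch omits that scoring step.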
Related papers
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models [83.02797560769285]
Data-Free Meta-Learning (DFML) aims to derive knowledge from a collection of pre-trained models without accessing their original data.
Current methods often overlook the heterogeneity among pre-trained models, which leads to performance degradation due to task conflicts.
We propose Task Groupings Regularization, a novel approach that benefits from model heterogeneity by grouping and aligning conflicting tasks.
arXiv Detail & Related papers (2024-05-26T13:11:55Z)
- Dual-Personalizing Adapter for Federated Foundation Models [35.863585349109385]
We propose a new setting, termed test-time personalization, which concentrates on the targeted local task and extends to other tasks that exhibit test-time distribution shifts.
Specifically, a dual-personalizing adapter architecture (FedDPA) is proposed, comprising a global adapter for addressing test-time distribution shifts and a local adapter for personalization; a hypothetical sketch of this dual-adapter design appears after this list.
The effectiveness of the proposed method has been evaluated on benchmark datasets across different NLP tasks.
arXiv Detail & Related papers (2024-03-28T08:19:33Z)
- DA-PFL: Dynamic Affinity Aggregation for Personalized Federated Learning [13.393529840544117]
Existing personalized federated learning models tend to aggregate clients with similar data distributions to improve model performance.
We propose a novel Dynamic Affinity-based Personalized Federated Learning model (DA-PFL) to alleviate the class imbalance problem.
arXiv Detail & Related papers (2024-03-14T11:12:10Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Distributed Personalized Empirical Risk Minimization [19.087524494290676]
This paper advocates a new paradigm, Personalized Empirical Risk Minimization (PERM), to facilitate learning from heterogeneous data sources.
We propose a distributed algorithm that replaces the standard model averaging with model shuffling to simultaneously optimize PERM objectives for all devices; a schematic sketch of the shuffling idea appears after this list.
arXiv Detail & Related papers (2023-10-26T20:07:33Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models, but the training data behind each fine-tuned model is often unavailable.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space; a minimal merging sketch appears after this list.
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- On the Efficacy of Adversarial Data Collection for Question Answering: Results from a Large-Scale Randomized Study [65.17429512679695]
In adversarial data collection (ADC), a human workforce interacts with a model in real time, attempting to produce examples that elicit incorrect predictions.
Despite ADC's intuitive appeal, it remains unclear when training on adversarial datasets produces more robust models.
arXiv Detail & Related papers (2021-06-02T00:48:33Z)
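As flagged in the FedDPA entry above, here is a hypothetical sketch of a dual-adapter design. The blending weight, layer sizes, and names are assumptions inferred from the one-line summary, not the paper's architecture.

```python
# Hypothetical dual-adapter model in the spirit of the FedDPA summary:
# a shared backbone plus a globally aggregated adapter (test-time
# distribution shift) and a client-local adapter (personalization).
import torch
import torch.nn as nn

class DualAdapterModel(nn.Module):
    def __init__(self, dim=32, n_classes=4, alpha=0.5):
        super().__init__()
        self.backbone = nn.Linear(dim, dim)        # shared across clients
        for p in self.backbone.parameters():
            p.requires_grad = False                # only adapters/head train
        self.global_adapter = nn.Linear(dim, dim)  # aggregated by the server
        self.local_adapter = nn.Linear(dim, dim)   # kept on this client
        self.head = nn.Linear(dim, n_classes)
        self.alpha = alpha                         # global/local mixing weight

    def forward(self, x):
        h = self.backbone(x)
        # Blend the two adapter paths before the task head.
        h = self.alpha * self.global_adapter(h) + (1 - self.alpha) * self.local_adapter(h)
        return self.head(h)

model = DualAdapterModel()
print(model(torch.randn(2, 32)).shape)  # torch.Size([2, 4])
```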
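For the PERM entry above, this schematic contrasts standard model averaging with model shuffling. It is a loose reading of the phrase "replaces the standard model averaging with model shuffling"; the actual algorithm and its guarantees are in the paper.

```python
# Schematic contrast of averaging vs. shuffling across devices.
import torch

def average_step(models):
    """FedAvg-style update: every device receives the coordinate-wise mean."""
    mean = {k: sum(m[k] for m in models) / len(models) for k in models[0]}
    return [dict(mean) for _ in models]

def shuffle_step(models, rng):
    """Shuffling update: permute models among devices so each device's
    personalized objective sees different models over rounds."""
    perm = torch.randperm(len(models), generator=rng).tolist()
    return [models[i] for i in perm]

models = [{"w": torch.randn(3)} for _ in range(4)]
rng = torch.Generator().manual_seed(0)
models = shuffle_step(models, rng)  # vs. average_step(models)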
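For the dataless knowledge fusion entry above, this is a minimal sketch of merging models in parameter space via uniform weight averaging. The cited paper proposes a more careful merging scheme, so treat the uniform mean here as a baseline assumption.

```python
# Minimal parameter-space merging: average corresponding weights of
# identically structured fine-tuned models, with no training data needed.
import torch
import torch.nn as nn

def merge_state_dicts(state_dicts):
    """Average each named parameter across models of the same architecture."""
    return {
        name: torch.stack([sd[name] for sd in state_dicts]).mean(dim=0)
        for name in state_dicts[0]
    }

models = [nn.Linear(8, 2) for _ in range(3)]  # stand-ins for fine-tuned LMs
fused = nn.Linear(8, 2)
fused.load_state_dict(merge_state_dicts([m.state_dict() for m in models]))
```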
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.