Personalised Federated Learning: A Combinational Approach
- URL: http://arxiv.org/abs/2108.09618v1
- Date: Sun, 22 Aug 2021 02:11:20 GMT
- Title: Personalised Federated Learning: A Combinational Approach
- Authors: Sone Kyaw Pye and Han Yu
- Abstract summary: Federated learning (FL) is a distributed machine learning approach involving multiple clients collaboratively training a shared model.
Privacy and integrity preserving features such as differential privacy (DP) and robust aggregation (RA) are commonly used in FL.
In this work, we show that on common deep learning tasks, the performance of FL models differs amongst clients and situations.
- Score: 10.204907134342637
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a distributed machine learning approach involving
multiple clients collaboratively training a shared model. Such a system has the
advantage of more training data from multiple clients, but data can be
non-identically and independently distributed (non-i.i.d.). Privacy and
integrity preserving features such as differential privacy (DP) and robust
aggregation (RA) are commonly used in FL. In this work, we show that on common
deep learning tasks, the performance of FL models differs amongst clients and
situations, and FL models can sometimes perform worse than local models due to
non-i.i.d. data. We further show that incorporating DP and RA degrades
performance further. Then, we conduct an ablation study on the performance
impact of different combinations of common personalization approaches for FL,
such as finetuning, mixture-of-experts ensemble, multi-task learning, and
knowledge distillation. It is observed that certain combinations of
personalization approaches are more impactful in certain scenarios while others
always improve performance, and combination approaches are better than
individual ones. Most clients obtained better performance with combined
personalized FL and recover from performance degradation caused by non-i.i.d.
data, DP, and RA.
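The simplest of the personalization approaches studied above, finetuning, can be illustrated as a client taking the globally aggregated model and running a few local gradient steps on its own (possibly non-i.i.d.) data. The sketch below is a minimal, hypothetical illustration with a linear model and synthetic data, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a "global" linear model produced by federated
# aggregation, and one client whose local distribution is shifted (non-i.i.d.).
w_global = np.array([1.0, 1.0])          # weights from federated averaging
X_local = rng.normal(size=(50, 2))
w_true_local = np.array([2.0, 0.5])      # this client's true relationship
y_local = X_local @ w_true_local

def finetune(w, X, y, lr=0.1, steps=100):
    """Personalize a global model with a few local gradient steps."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # full-batch MSE gradient
        w -= lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

w_personal = finetune(w_global, X_local, y_local)
# Local finetuning recovers performance lost to the distribution shift.
assert mse(w_personal, X_local, y_local) < mse(w_global, X_local, y_local)
```

In the paper's ablation, finetuning is one building block; the combined approaches additionally layer ensembling, multi-task learning, and distillation on top of a step like this.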
Related papers
- Not All Clients Are Equal: Personalized Federated Learning on Heterogeneous Multi-Modal Clients [52.14230635007546]
Foundation models have shown remarkable capabilities across diverse multi-modal tasks, but their centralized training raises privacy concerns and induces high transmission costs.
To meet the growing demand for personalizing AI models for different user purposes, personalized federated learning (PFL) has emerged.
PFL allows each client to leverage the knowledge of other clients for further adaptation to individual user preferences, again without the need to share data.
arXiv Detail & Related papers (2025-05-20T09:17:07Z)
- Balancing Similarity and Complementarity for Federated Learning [91.65503655796603]
Federated Learning (FL) is increasingly important in mobile and IoT systems.
One key challenge in FL is managing statistical heterogeneity, such as non-i.i.d. data.
We introduce a novel framework, FedSaC, which balances similarity and complementarity in FL cooperation.
arXiv Detail & Related papers (2024-05-16T08:16:19Z)
- How to Collaborate: Towards Maximizing the Generalization Performance in Cross-Silo Federated Learning [12.86056968708516]
Federated learning (FL) has attracted attention as a privacy-preserving distributed learning framework.
In this work, we focus on cross-silo FL, where clients become the model owners after FL training.
We show that a client's performance can be improved only by collaborating with other clients that have more training data.
arXiv Detail & Related papers (2024-01-24T05:41:34Z)
- Contrastive encoder pre-training-based clustered federated learning for heterogeneous data [17.580390632874046]
Federated learning (FL) enables distributed clients to collaboratively train a global model while preserving their data privacy.
We propose contrastive pre-training-based clustered federated learning (CP-CFL) to improve the model convergence and overall performance of FL systems.
arXiv Detail & Related papers (2023-11-28T05:44:26Z)
- FedJETs: Efficient Just-In-Time Personalization with Federated Mixture of Experts [48.78037006856208]
FedJETs is a novel solution that uses a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route each input to the most relevant expert(s).
Our approach can improve accuracy by up to 18% in state-of-the-art FL settings, while maintaining competitive zero-shot performance.
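The gating idea described above can be sketched as a function that scores the experts for a given input and routes it to the top-k. All names, shapes, and numbers below are illustrative assumptions, not the FedJETs implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical experts: each a linear model specialized on a class subset.
n_experts, dim = 3, 4
expert_weights = rng.normal(size=(n_experts, dim))
gate_weights = rng.normal(size=(n_experts, dim))   # gating-function parameters

def route(x, top_k=1):
    """Route input x to the top-k most relevant expert(s) via the gate."""
    scores = softmax(gate_weights @ x)
    top = np.argsort(scores)[::-1][:top_k]
    return top, scores

x = rng.normal(size=dim)
chosen, scores = route(x, top_k=2)
# Combine only the chosen experts' outputs, weighted by their gate scores.
output = sum(scores[i] * (expert_weights[i] @ x) for i in chosen)
```

Routing to only the top-k experts is what keeps per-client inference cheap while still drawing on specialized knowledge.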
arXiv Detail & Related papers (2023-06-14T15:47:52Z)
- Optimizing the Collaboration Structure in Cross-Silo Federated Learning [43.388911479025225]
In federated learning (FL), multiple clients collaborate to train machine learning models.
We propose FedCollab, a novel FL framework that alleviates negative transfer by clustering clients into non-overlapping coalitions.
Our results demonstrate that FedCollab effectively mitigates negative transfer across a wide range of FL algorithms and consistently outperforms other clustered FL algorithms.
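The coalition idea can be sketched as greedily grouping clients whose data distributions are close, so dissimilar clients never aggregate together. The toy below uses L1 distance between made-up label histograms as a crude stand-in for FedCollab's actual clustering objective:

```python
import numpy as np

# Hypothetical client label distributions (rows sum to 1): clients 0-1 and
# 2-3 hold similar data, client 4 is distinct.
dists = np.array([
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.7, 0.2],
    [0.1, 0.1, 0.8],
])

def coalitions(dists, threshold=0.5):
    """Greedily form non-overlapping coalitions of clients whose label
    distributions are within an L1-distance threshold of the seed client."""
    unassigned = list(range(len(dists)))
    groups = []
    while unassigned:
        seed = unassigned.pop(0)
        group = [seed]
        for c in unassigned[:]:
            if np.abs(dists[seed] - dists[c]).sum() < threshold:
                group.append(c)
                unassigned.remove(c)
        groups.append(group)
    return groups

groups = coalitions(dists)   # similar clients end up in the same coalition
```

Training one model per coalition rather than one global model is what limits negative transfer between clients with incompatible data.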
arXiv Detail & Related papers (2023-06-10T18:59:50Z)
- Towards More Suitable Personalization in Federated Learning via Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems face large communication burdens if the central FL server fails.
It personalizes the "right" components of deep models by alternately updating the shared and personal parameters.
To further promote the aggregation of shared parameters, we propose DFed, which integrates local sharpness minimization.
arXiv Detail & Related papers (2023-05-24T13:52:18Z)
- Efficient Personalized Federated Learning via Sparse Model-Adaptation [47.088124462925684]
Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data.
We propose pFedGate, which achieves efficient personalized FL by adaptively learning sparse local models.
We show that pFedGate achieves superior global accuracy, individual accuracy and efficiency simultaneously over state-of-the-art methods.
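Sparse local adaptation can be pictured as a mask that keeps only a fraction of the global weights per client. The magnitude-based mask below is a crude illustrative stand-in for pFedGate's learned gating, not its actual method:

```python
import numpy as np

rng = np.random.default_rng(2)
global_weights = rng.normal(size=20)   # stand-in for a shared global model

def sparse_adapt(w, keep_frac=0.3):
    """Keep only the largest-magnitude fraction of weights (a simple proxy
    for a learned sparsity gate), zeroing out the rest."""
    k = max(1, int(len(w) * keep_frac))
    idx = np.argsort(np.abs(w))[::-1][:k]
    mask = np.zeros_like(w)
    mask[idx] = 1.0
    return w * mask, mask

sparse_w, mask = sparse_adapt(global_weights)
```

The efficiency gain comes from each client storing, computing with, and communicating only the small active subset of parameters.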
arXiv Detail & Related papers (2023-05-04T12:21:34Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- CoFED: Cross-silo Heterogeneous Federated Multi-task Learning via Co-training [11.198612582299813]
Federated Learning (FL) is a machine learning technique that enables participants to train high-quality models collaboratively without exchanging their private data.
We propose a communication-efficient FL scheme, CoFED, based on pseudo-labeling unlabeled data, as in co-training.
Experimental results show that CoFED achieves better performance with a lower communication cost.
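Co-training-style pseudo-labeling can be sketched as keeping only predictions on unlabeled data whose model confidence clears a threshold. The logits and threshold below are illustrative assumptions, not CoFED's configuration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical model logits on unlabeled samples (rows = samples, cols = classes).
logits = np.array([
    [4.0, 0.1, 0.2],   # confident  -> pseudo-labeled as class 0
    [0.5, 0.6, 0.4],   # uncertain  -> discarded
    [0.1, 0.2, 5.0],   # confident  -> pseudo-labeled as class 2
])

def pseudo_label(logits, confidence=0.9):
    """Keep only predictions whose max class probability exceeds the
    threshold, as in co-training-style pseudo-labeling."""
    probs = softmax(logits)
    keep = probs.max(axis=1) >= confidence
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

kept, labels = pseudo_label(logits)   # kept -> [0, 2], labels -> [0, 2]
```

Only the confidently labeled samples are added to the training set, which is what keeps label noise from the model's own mistakes in check.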
arXiv Detail & Related papers (2022-02-17T11:34:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.