An Empirical Study of Personalized Federated Learning
- URL: http://arxiv.org/abs/2206.13190v1
- Date: Mon, 27 Jun 2022 11:08:16 GMT
- Title: An Empirical Study of Personalized Federated Learning
- Authors: Koji Matsuda, Yuya Sasaki, Chuan Xiao, Makoto Onizuka
- Abstract summary: Federated learning is a distributed machine learning approach in which a single server and multiple clients collaboratively build machine learning models without sharing datasets on clients.
To cope with data heterogeneity (i.e., differing data distributions across clients), numerous federated learning methods aim at personalized federated learning and build optimized models for clients.
It is unclear which personalized federated learning method achieves the best performance and how much progress can be made by using these methods instead of standard (i.e., non-personalized) federated learning.
- Score: 8.641606056228675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning is a distributed machine learning approach in which a
single server and multiple clients collaboratively build machine learning
models without sharing datasets on clients. A challenging issue of federated
learning is data heterogeneity (i.e., data distributions may differ across
clients). To cope with this issue, numerous federated learning methods aim at
personalized federated learning and build optimized models for clients. While
existing studies have empirically evaluated their own methods, their
experimental settings (e.g., comparison methods, datasets, and client settings)
differ from one another, and it is unclear which personalized federated
learning method achieves the best performance and how much progress can be made
by using these methods instead of standard (i.e., non-personalized) federated
learning. In this paper, we benchmark the performance of existing personalized
federated learning methods through comprehensive experiments to evaluate the
characteristics of each method. Our experimental study shows that (1) there are
no champion methods, (2) large data heterogeneity often leads to highly
accurate predictions, and (3) standard federated learning methods (e.g.,
FedAvg) with fine-tuning often outperform personalized federated learning
methods. We release our benchmark tool, FedBench, for researchers to conduct
experimental studies with various experimental settings.
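
Finding (3) is easy to illustrate: train one global model with FedAvg, then let each client fine-tune its own copy locally. The sketch below does this in plain NumPy with a linear model and synthetic heterogeneous clients; all names and hyperparameters are illustrative assumptions, not taken from the paper or FedBench.

    import numpy as np

    # Minimal sketch: FedAvg on synthetic heterogeneous clients, followed by
    # per-client fine-tuning of the global model. Linear model throughout.
    rng = np.random.default_rng(0)

    def local_sgd(w, X, y, lr=0.1, epochs=5):
        """Full-batch gradient descent on one client's local data."""
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
        return w

    def fedavg_round(w_global, clients):
        """One FedAvg round: local training, then sample-weighted averaging."""
        updates = [local_sgd(w_global.copy(), X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        return np.average(updates, axis=0, weights=sizes)

    # Heterogeneous clients: each has its own ground-truth coefficients.
    clients = []
    for _ in range(4):
        X = rng.normal(size=(50, 3))
        w_true = rng.normal(size=3)              # per-client concept shift
        clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

    w = np.zeros(3)
    for _ in range(20):                          # standard (non-personalized) FL
        w = fedavg_round(w, clients)

    # Personalization step: each client fine-tunes its own copy locally.
    personalized = [local_sgd(w.copy(), X, y, epochs=10) for X, y in clients]
    for k, (X, y) in enumerate(clients):
        mse = np.mean((X @ personalized[k] - y) ** 2)
        print(f"client {k}: fine-tuned MSE = {mse:.3f}")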
Related papers
- Blind Federated Learning without initial model [1.104960878651584]
Federated learning is an emerging machine learning approach that allows several participants, each holding their own private data, to jointly build a model.
This makes it secure and privacy-preserving, and suitable for training models on sensitive data from different sources, such as hospitals.
arXiv Detail & Related papers (2024-04-24T20:10:10Z)
- Factor-Assisted Federated Learning for Personalized Optimization with Heterogeneous Data [6.024145412139383]
Federated learning is an emerging distributed machine learning framework that aims to protect data privacy.
Data in different clients contain both common knowledge and personalized knowledge.
We develop a novel personalized federated learning framework for heterogeneous data, which we refer to as FedSplit.
arXiv Detail & Related papers (2023-12-07T13:05:47Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
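As a rough illustration of the mechanism (not the paper's exact scheduling), the sketch below keeps AMSGrad optimizer state on each client, so the effective per-coordinate step size adapts to that client's own gradients; the class name and hyperparameters are assumptions.

    import numpy as np

    # Each client owns its optimizer state, so the effective per-coordinate
    # step size lr / (sqrt(v_hat) + eps) adapts to that client's gradients.
    class ClientAMSGrad:
        def __init__(self, dim, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
            self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
            self.m = np.zeros(dim)      # first-moment estimate
            self.v = np.zeros(dim)      # second-moment estimate
            self.v_hat = np.zeros(dim)  # running max of v (AMSGrad correction)

        def step(self, w, grad):
            self.m = self.beta1 * self.m + (1 - self.beta1) * grad
            self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
            self.v_hat = np.maximum(self.v_hat, self.v)
            return w - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

    # Coordinates with consistently large gradients get smaller steps.
    opt, w = ClientAMSGrad(dim=3), np.zeros(3)
    for _ in range(10):
        w = opt.step(w, np.array([1.0, 0.1, 0.01]))  # stand-in local gradient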
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data uniform sampling strategy for federated learning (FedSampling).
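One simple way to make sampling uniform over data rather than over clients is to draw clients with probability proportional to their local sample counts, as sketched below; FedSampling's actual protocol estimates these counts privately, whereas the sketch assumes they are known.

    import numpy as np

    # Drawing clients with probability proportional to local data size makes
    # the round's sample uniform over *data* rather than over clients.
    rng = np.random.default_rng(0)
    client_sizes = np.array([1000, 50, 300, 10, 640])   # illustrative counts

    p = client_sizes / client_sizes.sum()
    selected = rng.choice(len(client_sizes), size=3, replace=False, p=p)
    print("clients selected this round:", selected)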
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- Motley: Benchmarking Heterogeneity and Personalization in Federated Learning [20.66924459164993]
Motley is a benchmark for personalized federated learning.
It consists of a suite of cross-device and cross-silo federated datasets from varied problem domains.
We establish baselines on the benchmark by comparing a number of representative personalized federated learning methods.
arXiv Detail & Related papers (2022-06-18T18:18:49Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
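The sketch below illustrates the shared-representation idea with linear models: a common feature map is updated from all clients while each client keeps a private head; the update rules, names, and hyperparameters are illustrative, not the paper's algorithm.

    import numpy as np

    # Shared feature map B is averaged across clients; each client keeps a
    # private head. Linear models, synthetic data, illustrative step sizes.
    rng = np.random.default_rng(0)
    d, r, n_clients = 10, 3, 4
    B = 0.1 * rng.normal(size=(d, r))                 # shared representation
    heads = [np.zeros(r) for _ in range(n_clients)]   # personal parameters

    clients = []
    for _ in range(n_clients):
        X = rng.normal(size=(100, d))
        clients.append((X, X @ rng.normal(size=d) + 0.1 * rng.normal(size=100)))

    for _ in range(50):
        B_grads = []
        for k, (X, y) in enumerate(clients):
            Z = X @ B
            # Local step on the personal head only.
            heads[k] -= 0.01 * 2 * Z.T @ (Z @ heads[k] - y) / len(y)
            # Gradient of this client's loss w.r.t. the shared map B.
            resid = Z @ heads[k] - y
            B_grads.append(2 * X.T @ np.outer(resid, heads[k]) / len(y))
        B -= 0.01 * np.mean(B_grads, axis=0)          # server-side averaging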
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Non-IID data and Continual Learning processes in Federated Learning: A long road ahead [58.720142291102135]
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private.
In this work, we formally classify statistical data heterogeneity and review the most notable learning strategies that can address it.
At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and could be easily adapted to the Federated Learning settings.
arXiv Detail & Related papers (2021-11-26T09:57:11Z)
- WAFFLE: Weighted Averaging for Personalized Federated Learning [38.241216472571786]
We introduce WAFFLE, a personalized collaborative machine learning algorithm based on SCAFFOLD.
WAFFLE uses the Euclidean distance between clients' updates to weigh their individual contributions.
Our experiments demonstrate the effectiveness of WAFFLE compared with other methods.
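A hedged sketch of the idea: to personalize for one client, other clients' updates are averaged with weights that decay with Euclidean distance from that client's own update; the exponential kernel below is an illustrative choice, not WAFFLE's exact scheme.

    import numpy as np

    # Weighted averaging for one target client: updates closer to its own
    # (in Euclidean distance) contribute more. Kernel choice is illustrative.
    rng = np.random.default_rng(0)
    updates = rng.normal(size=(5, 8))          # one update vector per client

    def personalized_average(updates, k, tau=1.0):
        dists = np.linalg.norm(updates - updates[k], axis=1)
        weights = np.exp(-dists / tau)         # closer clients count more
        return (weights / weights.sum()) @ updates

    model_for_client_0 = personalized_average(updates, k=0)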
arXiv Detail & Related papers (2021-10-13T18:40:54Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
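The mechanism can be sketched as a k-means-style loop over model parameters: assign each client to its nearest center model, then recompute each center from its assigned clients; the paper's joint optimization is more refined than this simplification.

    import numpy as np

    # K-means-style multi-center aggregation: match each locally trained
    # model to its closest center, then recompute centers as member means.
    rng = np.random.default_rng(0)
    client_models = rng.normal(size=(10, 6))   # stand-ins for trained weights
    centers = client_models[rng.choice(10, size=2, replace=False)].copy()

    for _ in range(10):
        dists = np.linalg.norm(client_models[:, None] - centers[None], axis=2)
        assign = dists.argmin(axis=1)          # match clients to centers
        for c in range(len(centers)):
            members = client_models[assign == c]
            if len(members) > 0:               # keep empty centers unchanged
                centers[c] = members.mean(axis=0)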
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
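A minimal sketch of the residual idea, under illustrative linear/least-squares assumptions: the shared model makes a base prediction, and a (possibly richer) local model is fit to its residuals, so the joint prediction is shared(x) + local(x).

    import numpy as np

    # The shared model gives a base prediction; the local model is fit to
    # its residuals, so the joint prediction is shared(x) + local(x).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = X @ rng.normal(size=5) + np.sin(X[:, 0])   # part a linear model misses

    w_shared = np.linalg.lstsq(X, y, rcond=None)[0]    # simple shared model
    residual = y - X @ w_shared

    # The local model may be richer than the shared one; here it adds a
    # nonlinear feature map and only has to explain the residual signal.
    Phi = np.column_stack([X, np.sin(X)])
    w_local = np.linalg.lstsq(Phi, residual, rcond=None)[0]

    joint = X @ w_shared + Phi @ w_local
    print("shared-only MSE:", np.mean((X @ w_shared - y) ** 2))
    print("joint MSE:     ", np.mean((joint - y) ** 2))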
arXiv Detail & Related papers (2020-03-28T19:55:24Z)