Motley: Benchmarking Heterogeneity and Personalization in Federated Learning
- URL: http://arxiv.org/abs/2206.09262v1
- Date: Sat, 18 Jun 2022 18:18:49 GMT
- Title: Motley: Benchmarking Heterogeneity and Personalization in Federated Learning
- Authors: Shanshan Wu, Tian Li, Zachary Charles, Yu Xiao, Ziyu Liu, Zheng Xu, Virginia Smith
- Abstract summary: Motley is a benchmark for personalized federated learning.
It consists of a suite of cross-device and cross-silo federated datasets from varied problem domains.
We establish baselines on the benchmark by comparing a number of representative personalized federated learning methods.
- Score: 20.66924459164993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized federated learning considers learning models unique to each
client in a heterogeneous network. The resulting client-specific models have
been purported to improve metrics such as accuracy, fairness, and robustness in
federated networks. However, despite a plethora of work in this area, it
remains unclear: (1) which personalization techniques are most effective in
various settings, and (2) how important personalization truly is for realistic
federated applications. To better answer these questions, we propose Motley, a
benchmark for personalized federated learning. Motley consists of a suite of
cross-device and cross-silo federated datasets from varied problem domains, as
well as thorough evaluation metrics for better understanding the possible
impacts of personalization. We establish baselines on the benchmark by
comparing a number of representative personalized federated learning methods.
These initial results highlight strengths and weaknesses of existing
approaches, and raise several open questions for the community. Motley aims to
provide a reproducible means with which to advance developments in personalized
and heterogeneity-aware federated learning, as well as the related areas of
transfer learning, meta-learning, and multi-task learning.
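As a concrete illustration of the kind of baseline such a benchmark evaluates, the sketch below runs plain FedAvg on a toy linear-regression federation and then personalizes by fine-tuning the global model on each client's local data. This is a minimal sketch of one canonical personalization baseline, not Motley's actual code or API; all names and the toy data are assumptions.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """A few epochs of full-batch gradient descent on one client's data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg(clients, dim, rounds=20):
    """Plain FedAvg: average locally trained models, weighted by client size."""
    w_global = np.zeros(dim)
    for _ in range(rounds):
        local_models = [local_sgd(w_global, X, y) for X, y in clients]
        sizes = [len(y) for _, y in clients]
        w_global = np.average(local_models, axis=0, weights=sizes)
    return w_global

# Toy heterogeneous federation: each client has its own ground-truth model.
rng = np.random.default_rng(0)
dim, clients = 5, []
for _ in range(4):
    w_true = rng.normal(size=dim)
    X = rng.normal(size=(50, dim))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

w_global = fedavg(clients, dim)
# Personalization baseline: fine-tune the global model on each client's data.
personalized = [local_sgd(w_global, X, y, epochs=10) for X, y in clients]
```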
Related papers
- FedReMa: Improving Personalized Federated Learning via Leveraging the Most Relevant Clients [13.98392319567057]
Federated Learning (FL) is a distributed machine learning paradigm that achieves a globally robust model through decentralized computation and periodic model synthesis.
Despite their wide adoption, existing FL and personalized FL (PFL) works have yet to comprehensively address the class-imbalance issue.
We propose FedReMa, an efficient PFL algorithm that tackles class imbalance via an adaptive inter-client co-learning approach.
arXiv Detail & Related papers (2024-11-04T05:44:28Z)
- Addressing Skewed Heterogeneity via Federated Prototype Rectification with Personalization [35.48757125452761]
Federated learning is an efficient framework designed to facilitate collaborative model training across multiple distributed devices.
A significant challenge of federated learning is data-level heterogeneity, i.e., skewed or long-tailed distribution of private data.
We propose a novel Federated Prototype Rectification with Personalization framework, which consists of two parts: Federated Personalization and Federated Prototype Rectification.
arXiv Detail & Related papers (2024-08-15T06:26:46Z)
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, auto-tuned learning-rate scheduling converges and achieves linear speedup with respect to the number of clients (a minimal sketch follows this entry).
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
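The exact FedLALR update is given in the paper; the sketch below only illustrates the underlying idea of client-specific adaptive learning rates by keeping a separate AMSGrad state per client, so each client's effective step size adapts to its own gradient history. The class name and hyperparameters are illustrative assumptions, not the authors' code.

```python
import numpy as np

class ClientAMSGrad:
    """Per-client AMSGrad state: each client's effective step size adapts
    to its own gradient history. A rough sketch of the client-specific
    adaptive-learning-rate idea, not the paper's exact FedLALR update."""

    def __init__(self, dim, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(dim)       # first-moment estimate
        self.v = np.zeros(dim)       # second-moment estimate
        self.v_hat = np.zeros(dim)   # running max of v (AMSGrad correction)

    def step(self, w, grad):
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        # Effective learning rate lr / (sqrt(v_hat) + eps) is per-client.
        return w - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```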
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck (a minimal simulation follows this entry).
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
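As a rough sketch of the over-the-air idea (ignoring fading, power control, and the personalization machinery the paper develops), the snippet below simulates how simultaneous analog transmission lets the channel itself sum the clients' updates, so the server obtains a noisy average in a single channel use. All names are assumptions.

```python
import numpy as np

def over_the_air_aggregate(updates, noise_std=0.01, rng=None):
    """Simulate analog over-the-air aggregation: clients transmit their
    model updates simultaneously, the wireless channel superimposes (sums)
    the analog signals, and the server receives that sum plus channel
    noise, yielding a noisy mean update in one channel use."""
    rng = rng if rng is not None else np.random.default_rng()
    superposed = np.sum(updates, axis=0)  # the channel adds the signals
    received = superposed + noise_std * rng.normal(size=superposed.shape)
    return received / len(updates)        # noisy average of the updates

# Example: three clients' model deltas aggregated in a single shot.
rng = np.random.default_rng(1)
deltas = [rng.normal(size=8) for _ in range(3)]
avg_update = over_the_air_aggregate(deltas, rng=rng)
```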
- When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more the clients benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients (a minimal sketch of loss-based curriculum ordering follows this entry).
arXiv Detail & Related papers (2022-12-24T11:02:35Z)
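A common way to implement a curriculum, and one plausible reading of the difficulty scoring used in this line of work, is to order each client's examples from easy to hard by their loss under the current global model. The sketch below does exactly that; it is not the paper's exact pacing or client-selection procedure.

```python
import numpy as np

def curriculum_order(X, y, w_global):
    """Order one client's examples easy-to-hard, scoring difficulty by the
    per-example loss under the current global model w_global (a common
    curriculum recipe; the paper's exact rules are in the source)."""
    per_example_loss = (X @ w_global - y) ** 2   # squared error per example
    easy_to_hard = np.argsort(per_example_loss)  # lowest loss (easiest) first
    return X[easy_to_hard], y[easy_to_hard]
```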
- FedClassAvg: Local Representation Learning for Personalized Federated Learning on Heterogeneous Neural Networks [21.613436984547917]
We propose a novel personalized federated learning method called federated classifier averaging (FedClassAvg).
FedClassAvg aggregates classifier weights as an agreement on decision boundaries in feature space.
We demonstrate that it outperforms the current state-of-the-art algorithms on heterogeneous personalized federated learning tasks (a minimal sketch of the classifier-averaging step follows this entry).
arXiv Detail & Related papers (2022-10-25T08:32:08Z)
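A minimal sketch of the classifier-averaging step, assuming every client's head has the same shape (shared feature dimension and label set) even though the backbones may differ. The data layout and names are assumptions, not FedClassAvg's actual code.

```python
import numpy as np

def fed_class_avg(client_states):
    """Average only the classifier-head parameters across clients, leaving
    each client's (possibly heterogeneous) feature extractor untouched.
    Assumes all heads share one shape even when the backbones differ."""
    shared_head = np.mean([s["head"] for s in client_states], axis=0)
    for s in client_states:
        s["head"] = shared_head.copy()  # broadcast the agreed decision boundary
    return client_states

# Example: three clients with different backbones but a 16->4 classifier head.
rng = np.random.default_rng(2)
clients = [{"backbone": f"model_{i}", "head": rng.normal(size=(16, 4))}
           for i in range(3)]
clients = fed_class_avg(clients)
```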
- An Empirical Study of Personalized Federated Learning [8.641606056228675]
Federated learning is a distributed machine learning approach in which a single server and multiple clients collaboratively build models without sharing the datasets held on the clients.
To cope with data heterogeneity across clients, numerous federated learning methods aim at personalized federated learning and build optimized models for individual clients.
It remains unclear which personalized federated learning method achieves the best performance, and how much progress can be made by using these methods instead of standard (i.e., non-personalized) federated learning.
arXiv Detail & Related papers (2022-06-27T11:08:16Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles: straggling clients and data heterogeneity.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Adapt to Adaptation: Learning Personalization for Cross-Silo Federated Learning [6.0088002781256185]
Conventional federated learning aims to train a global model for a federation of clients with decentralized data.
The distribution shift across non-IID datasets, also known as data heterogeneity, often poses a challenge for this one-global-model-fits-all solution.
We propose APPLE, a personalized cross-silo FL framework that adaptively learns how much each client can benefit from other clients' models (a minimal sketch of the mixture idea follows this entry).
arXiv Detail & Related papers (2021-10-15T22:23:14Z)
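The sketch below illustrates the mixture idea behind such adaptive personalization: a client's personalized model is a weighted combination of all clients' core models, and the client takes gradient steps on the mixture weights against its local loss. It is illustrative only, with assumed names, and not APPLE's exact formulation.

```python
import numpy as np

def personalized_model(core_models, p):
    """A client's personalized model as a mixture of all clients' core
    models: w = sum_j p[j] * core_models[j]."""
    return np.tensordot(p, core_models, axes=1)

def update_mixture_weights(core_models, p, X, y, lr=0.01):
    """One gradient step on the mixture weights w.r.t. this client's local
    squared-error loss, so the client learns how much each peer's model
    helps it. Illustrative only, not the paper's exact update rule."""
    w = personalized_model(core_models, p)
    grad_w = 2.0 * X.T @ (X @ w - y) / len(y)  # dL/dw on local data
    grad_p = core_models @ grad_w              # chain rule: dL/dp[j] = model_j . dL/dw
    return p - lr * grad_p

# Example: one client adapts its mixture over 3 clients' core models.
rng = np.random.default_rng(3)
core_models = rng.normal(size=(3, 5))          # K=3 client models, dim=5
X, y = rng.normal(size=(40, 5)), rng.normal(size=40)
p = np.full(3, 1.0 / 3)                        # start from a uniform mixture
for _ in range(100):
    p = update_mixture_weights(core_models, p, X, y)
```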
- Practical One-Shot Federated Learning for Cross-Silo Setting [114.76232507580067]
One-shot federated learning is a promising approach to make federated learning applicable in the cross-silo setting.
We propose a practical one-shot federated learning algorithm named FedKT.
By utilizing the knowledge transfer technique, FedKT can be applied to any classification model and can flexibly achieve differential privacy guarantees (a minimal sketch of the ensemble-distillation idea follows this entry).
arXiv Detail & Related papers (2020-10-02T14:09:10Z)
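A minimal sketch of the one-shot knowledge-transfer idea: each client trains and uploads a model once, and the server pseudo-labels an unlabeled transfer set by majority vote so that any student classifier can then be fit on it. The real FedKT adds a two-tier student-teacher construction and can noise the vote counts for differential privacy; both are omitted here, and all names are assumptions.

```python
import numpy as np

def one_shot_pseudo_labels(teacher_predict_fns, X_transfer):
    """One round of communication: each client uploads a locally trained
    model once; the server ensembles their votes on an unlabeled transfer
    set, producing pseudo-labels on which any student classifier can be
    trained (student training omitted here)."""
    votes = np.stack([predict(X_transfer) for predict in teacher_predict_fns])
    # Majority vote of the client models labels each transfer example.
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Example with trivial stand-in "models" (constant-label predictors):
X_transfer = np.zeros((10, 4))
teachers = [lambda X: np.zeros(len(X), dtype=int),
            lambda X: np.ones(len(X), dtype=int),
            lambda X: np.ones(len(X), dtype=int)]
labels = one_shot_pseudo_labels(teachers, X_transfer)  # majority: all ones
```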