Accurate and Fast Federated Learning via Combinatorial Multi-Armed
Bandits
- URL: http://arxiv.org/abs/2012.03270v1
- Date: Sun, 6 Dec 2020 14:05:14 GMT
- Title: Accurate and Fast Federated Learning via Combinatorial Multi-Armed
Bandits
- Authors: Taehyeon Kim, Sangmin Bae, Jin-woo Lee, Seyoung Yun
- Abstract summary: Global aggregation in federated learning suffers from biased model averaging and a lack of prior knowledge in client sampling.
We propose a novel algorithm, FedCM, that addresses these two challenges by exploiting prior knowledge through multi-armed bandit based client sampling and by filtering biased models with combinatorial model averaging.
We show that FedCM significantly outperforms state-of-the-art algorithms by up to 37.25% in generalization accuracy and up to 4.17x in convergence rate.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning has emerged as an innovative paradigm of collaborative
machine learning. Unlike conventional machine learning, a global model is
collaboratively learned while data remains distributed over a tremendous number
of client devices, thus not compromising user privacy. However, several
challenges still remain despite its growing popularity; above all, the global
aggregation in federated learning involves the challenge of biased model
averaging and lack of prior knowledge in client sampling, which, in turn, leads
to high generalization error and slow convergence rate, respectively. In this
work, we propose a novel algorithm called FedCM that addresses the two
challenges by utilizing prior knowledge with multi-armed bandit based client
sampling and filtering biased models with combinatorial model averaging. Based
on extensive evaluations using various algorithms and representative
heterogeneous datasets, we showed that FedCM significantly outperformed the
state-of-the-art algorithms by up to 37.25% and 4.17 times, respectively, in
terms of generalization accuracy and convergence rate.
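
The abstract describes FedCM's two components, bandit-based client sampling and combinatorial model averaging, only at a high level. The following is a minimal illustrative sketch, assuming a plain UCB1 bandit over per-client rewards (e.g., local loss reduction) and a brute-force subset search for the averaging step; the names UCBClientSampler and combinatorial_average, the reward definition, and the scoring function are hypothetical, not the paper's actual algorithm.

```python
import math
import random
from itertools import combinations

class UCBClientSampler:
    """Treat each client as a bandit arm and sample clients by UCB score."""

    def __init__(self, num_clients):
        self.counts = [0] * num_clients    # times each client was selected
        self.values = [0.0] * num_clients  # running mean reward per client
        self.t = 0                         # total selection rounds so far

    def select(self, k):
        """Pick k clients: unexplored clients first, then highest UCB."""
        self.t += 1
        unexplored = [i for i, c in enumerate(self.counts) if c == 0]
        if len(unexplored) >= k:
            return random.sample(unexplored, k)

        def ucb(i):
            return self.values[i] + math.sqrt(
                2 * math.log(self.t) / self.counts[i])

        ranked = sorted((i for i in range(len(self.counts))
                         if self.counts[i] > 0), key=ucb, reverse=True)
        return unexplored + ranked[:k - len(unexplored)]

    def update(self, client, reward):
        """Fold in a client's observed reward, e.g. its local loss reduction."""
        self.counts[client] += 1
        self.values[client] += (reward - self.values[client]) / self.counts[client]

def combinatorial_average(client_models, score_fn, min_size=2):
    """Average the best-scoring subset of client models, filtering out
    biased updates. Brute force over all subsets, for illustration only."""
    best, best_score = None, float("-inf")
    for size in range(min_size, len(client_models) + 1):
        for subset in combinations(client_models, size):
            avg = {name: sum(m[name] for m in subset) / size
                   for name in subset[0]}  # models as name -> parameter dicts
            score = score_fn(avg)          # e.g. held-out validation accuracy
            if score > best_score:
                best, best_score = avg, score
    return best
```

In a training loop, the server would call select() each round, train the chosen clients, aggregate their models with combinatorial_average() scored on held-out data, and feed each client's measured improvement back via update().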
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous
Client Devices using a Computing Power Aware Scheduler [5.550660753625296]
Cross-silo federated learning offers a promising solution to collaboratively train AI models without compromising privacy of local datasets.
In this paper, we introduce an innovative semi-asynchronous federated learning algorithm with a computing power aware scheduler on the server side.
We demonstrate that FedCompass achieves faster convergence and higher accuracy than other algorithms when performing federated learning on heterogeneous client devices (see the scheduling sketch after this list).
arXiv Detail & Related papers (2023-09-26T05:03:13Z) - Federated cINN Clustering for Accurate Clustered Federated Learning [33.72494731516968]
Federated Learning (FL) presents an innovative approach to privacy-preserving distributed machine learning.
We propose the Federated cINN Clustering Algorithm (FCCA) to robustly cluster clients into different groups.
arXiv Detail & Related papers (2023-09-04T10:47:52Z) - FedGen: Generalizable Federated Learning for Sequential Data [8.784435748969806]
In many real-world distributed settings, spurious correlations exist due to biases and data sampling issues.
We present a generalizable federated learning framework called FedGen, which allows clients to identify and distinguish between spurious and invariant features.
We show that FedGen results in models that achieve significantly better generalization and can outperform the accuracy of current federated learning approaches by over 24%.
arXiv Detail & Related papers (2022-11-03T15:48:14Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters, leading to a personalized solution for each client (see the shared-representation sketch after this list).
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - DRFLM: Distributionally Robust Federated Learning with Inter-client
Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework that simultaneously addresses two challenges in this setting, distributional heterogeneity and inter-client noise, by combining distributionally robust optimization with local mixup.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z) - Gradient Masked Averaging for Federated Learning [24.687254139644736]
Federated learning allows a large number of clients with heterogeneous data to coordinate learning of a unified global model.
Standard FL algorithms involve averaging of model parameters or gradient updates to approximate the global model at the server.
We propose a gradient masked averaging approach for FL as an alternative to the standard averaging of client updates (see the masking sketch after this list).
arXiv Detail & Related papers (2022-01-28T08:42:43Z) - A Federated Learning Aggregation Algorithm for Pervasive Computing:
Evaluation and Comparison [0.6299766708197883]
Pervasive computing promotes the installation of connected devices in our living spaces in order to provide services.
Two major developments have gained significant momentum recently: an advanced use of edge resources and the integration of machine learning techniques for engineering applications.
We propose a novel aggregation algorithm, termed FedDist, which is able to modify its model architecture by identifying dissimilarities between specific neurons amongst the clients.
arXiv Detail & Related papers (2021-10-19T19:43:28Z) - Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z) - Straggler-Resilient Federated Learning: Leveraging the Interplay Between
Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z) - Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers (see the clustering sketch after this list).
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
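
The entries above summarize several mechanisms in only a sentence or two; the short sketches below illustrate how each might look in code. All are minimal sketches under stated assumptions, with hypothetical names and parameters, not the papers' actual algorithms. First, a computing-power-aware scheduler in the spirit of FedCompass, assuming the goal is simply that fast and slow clients finish a round at roughly the same time:

```python
def assign_local_steps(client_speeds, round_deadline):
    """Give faster clients more local steps so that all clients finish
    near the same wall-clock deadline (speed measured in steps/second)."""
    return {cid: max(1, int(speed * round_deadline))
            for cid, speed in client_speeds.items()}

# assign_local_steps({"fast": 50.0, "slow": 5.0}, round_deadline=10.0)
# -> {"fast": 500, "slow": 50}; both clients take roughly 10 seconds.
```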
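
Next, the shared-representation split behind Straggler-Resilient Personalized Federated Learning, sketched as an assumed PyTorch module: only the backbone would be averaged by the server, while each head stays on its own device.

```python
import torch.nn as nn

class PersonalizedClientModel(nn.Module):
    """A globally shared backbone plus a per-client head that never
    leaves the device."""

    def __init__(self, shared_backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = shared_backbone               # federated-averaged
        self.head = nn.Linear(feat_dim, num_classes)  # kept local

    def forward(self, x):
        return self.head(self.backbone(x))
```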
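
For Gradient Masked Averaging, one plausible masking rule, assumed here purely for illustration, damps coordinates on which client updates disagree in sign:

```python
import numpy as np

def gradient_masked_average(client_updates, tau=0.4):
    """Average flattened client updates, zeroing coordinates whose sign
    agreement across clients falls below tau (a hard-mask variant; the
    paper's exact rule may differ)."""
    updates = np.stack(client_updates)                 # (num_clients, dim)
    agreement = np.abs(np.sign(updates).mean(axis=0))  # 1.0 = full agreement
    mask = (agreement >= tau).astype(updates.dtype)
    return mask * updates.mean(axis=0)
```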
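
Finally, the multi-center aggregation of Multi-Center Federated Learning can be pictured as a k-means-style step over flattened client models, again an assumption rather than the paper's exact objective:

```python
import numpy as np

def update_centers(client_params, center_params):
    """Assign each client model to its nearest center (L2 distance) and
    recompute each center as the mean of its assigned clients."""
    clients = np.stack(client_params)   # (num_clients, dim)
    centers = np.stack(center_params)   # (num_centers, dim)
    dists = np.linalg.norm(clients[:, None, :] - centers[None, :, :], axis=2)
    assignment = dists.argmin(axis=1)
    new_centers = np.stack([
        clients[assignment == j].mean(axis=0) if np.any(assignment == j)
        else centers[j]
        for j in range(len(centers))])
    return assignment, new_centers
```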