MDA: Availability-Aware Federated Learning Client Selection
- URL: http://arxiv.org/abs/2211.14391v1
- Date: Fri, 25 Nov 2022 22:18:24 GMT
- Title: MDA: Availability-Aware Federated Learning Client Selection
- Authors: Amin Eslami Abyane, Steve Drew, Hadi Hemmati
- Abstract summary: This study focuses on an FL setting called cross-device FL, which trains a model across a large number of clients.
In vanilla FL, clients are selected randomly, which yields acceptable accuracy but is not ideal from an overall training-time perspective.
New client selection techniques have been proposed to improve training time by considering individual clients' resources and speed.
- Score: 1.9422756778075616
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, a new distributed learning scheme called Federated Learning (FL)
has been introduced. FL is designed so that the server never collects user-owned
data, which makes it well suited to preserving privacy. The FL process starts with
the server sending a model to clients; the clients then train that model on their
data and send the updated model back to the server. Afterward, the server
aggregates all the updates and modifies the global model. This process is
repeated until the model converges. This study focuses on an FL setting called
cross-device FL, which trains across a large number of clients. Since many
devices may be unavailable in cross-device FL, and communication between the
server and all clients is extremely costly, only a fraction of clients is
selected for training in each round. In vanilla FL, clients are selected
randomly, which yields acceptable accuracy but is not ideal from an overall
training-time perspective, since slow clients can drag out individual training
rounds. If only fast clients were selected, learning would speed up, but the
model would be biased toward the fast clients' data and accuracy would degrade.
Consequently, new client selection techniques have been proposed to improve
training time by considering individual clients' resources and speed. This paper
introduces the first availability-aware selection strategy, called MDA. The
results show that our approach makes learning faster than vanilla FL by up to
6.5%. Moreover, we show that resource heterogeneity-aware techniques are
effective but become even better when combined with our approach, making it
faster than state-of-the-art selectors by up to 16%. Lastly, our approach
selects more unique clients for training than selectors that only pick fast
clients, which reduces our technique's bias.
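The abstract above describes the standard cross-device FL round (broadcast the model, train locally on a sampled fraction of clients, aggregate on the server) and contrasts random selection with availability-aware selection. Below is a minimal Python sketch of that loop under simplifying assumptions: a model is a flat list of weights, fed_avg is a plain unweighted mean, and the optional availability predicate only illustrates the general idea of restricting sampling to reachable devices. All names are hypothetical and the sketch does not reproduce the paper's actual MDA algorithm.

```python
import random

def fed_avg(updates):
    """Average client updates weight-by-weight (plain FedAvg-style mean)."""
    num_clients = len(updates)
    return [sum(weights) / num_clients for weights in zip(*updates)]

def select_clients(clients, fraction, availability=None):
    """Sample a fraction of clients, optionally restricted to available ones."""
    pool = list(clients) if availability is None else [c for c in clients if availability(c)]
    k = max(1, int(fraction * len(clients)))
    return random.sample(pool, min(k, len(pool)))

def run_round(global_model, clients, fraction, local_train, availability=None):
    """One FL round: select clients, train locally, aggregate on the server."""
    selected = select_clients(clients, fraction, availability)
    updates = [local_train(global_model, client) for client in selected]
    return fed_avg(updates)
```

Calling run_round with availability=None mirrors vanilla FL's uniform random sampling; supplying a predicate (for example, a hypothetical lambda c: c.is_online) restricts selection to clients that are currently reachable, which is the kind of signal an availability-aware selector exploits.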
Related papers
- Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning [51.560590617691005]
We investigate whether it is possible to "squeeze more juice" out of each cohort than what is possible in a single communication round.
Our approach leads to up to 74% reduction in the total communication cost needed to train an FL model in the cross-device setting.
arXiv Detail & Related papers (2024-06-03T08:48:49Z) - Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning [56.21666819468249]
Resource constraints of clients and communication costs pose major problems for training large models in Federated Learning.
We introduce Sparse-ProxSkip, which combines training and acceleration in a sparse setting.
We demonstrate the good performance of Sparse-ProxSkip in extensive experiments.
arXiv Detail & Related papers (2024-05-31T05:21:12Z) - FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data uniform sampling strategy for federated learning (FedSampling)
arXiv Detail & Related papers (2023-06-25T13:38:51Z) - Client Selection for Generalization in Accelerated Federated Learning: A Multi-Armed Bandit Approach [20.300740276237523]
Federated learning (FL) is an emerging machine learning (ML) paradigm used to train models across multiple nodes (i.e., clients) holding local data sets.
We develop a novel algorithm to achieve this goal, dubbed Bandit Scheduling for FL (BSFL)
arXiv Detail & Related papers (2023-03-18T09:45:58Z) - Latency Aware Semi-synchronous Client Selection and Model Aggregation for Wireless Federated Learning [0.6882042556551609]
Federated learning (FL) is a collaborative machine learning framework that requires different clients (e.g., Internet of Things devices) to participate in the machine learning model training process.
Traditional FL process may suffer from the straggler problem in heterogeneous client settings.
We propose a Semisynchronous-client Selection and mOdel aggregation for federated learNing (LESSON) method that allows all clients to participate in the whole FL process but with different frequencies.
arXiv Detail & Related papers (2022-10-19T05:59:22Z) - Aergia: Leveraging Heterogeneity in Federated Learning Systems [5.0650178943079]
Federated Learning (FL) relies on clients to update a global model using their local datasets.
Aergia is a novel approach where slow clients freeze the part of their model that is the most computationally intensive to train.
Aergia significantly reduces the training time under heterogeneous settings by up to 27% and 53% compared to FedAvg and TiFL, respectively.
arXiv Detail & Related papers (2022-10-12T12:59:18Z) - Federated Select: A Primitive for Communication- and Memory-Efficient Federated Learning [4.873569522869751]
Federated learning (FL) is a framework for machine learning across heterogeneous client devices.
We propose a more general procedure in which clients "select" what values are sent to them.
This allows clients to operate on smaller, data-dependent slices.
arXiv Detail & Related papers (2022-08-19T16:26:03Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method that handles heterogeneous device capabilities.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
arXiv Detail & Related papers (2022-02-16T13:03:27Z) - Inference-Time Personalized Federated Learning [17.60724466773559]
Inference-Time PFL (IT-PFL) is a setting in which a model trained on a set of clients must later be evaluated on novel unlabeled clients at inference time.
We propose a novel approach to this problem, IT-PFL-HN, based on a hypernetwork module and an encoder module.
We find that IT-PFL-HN generalizes better than current FL and PFL methods, especially when the novel client has a large domain shift.
arXiv Detail & Related papers (2021-11-16T10:57:20Z) - A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z)