Intelligent Client Selection for Federated Learning using Cellular
Automata
- URL: http://arxiv.org/abs/2310.00627v2
- Date: Wed, 18 Oct 2023 09:55:02 GMT
- Title: Intelligent Client Selection for Federated Learning using Cellular
Automata
- Authors: Nikolaos Pavlidis, Vasileios Perifanis, Theodoros Panagiotis
Chatzinikolaou, Georgios Ch. Sirakoulis, Pavlos S. Efraimidis
- Abstract summary: FL has emerged as a promising solution for privacy enhancement and latency minimization in various real-world applications, such as transportation, communications, and healthcare.
We propose Cellular Automaton-based Client Selection (CA-CS) as a novel client selection algorithm.
Our results demonstrate that CA-CS achieves comparable accuracy to the random selection approach, while effectively avoiding high-latency clients.
- Score: 0.5849783371898033
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has emerged as a promising solution for
privacy enhancement and latency minimization in various real-world
applications, such as transportation, communications, and healthcare. FL
endeavors to bring Machine Learning (ML) down to the edge by harnessing data
from millions of devices and IoT sensors, thus enabling rapid responses to
dynamic environments and yielding highly personalized results. However, the
increasing number of sensors across diverse applications poses challenges in
terms of communication and resource allocation, hindering the participation of
all devices in the federated process and prompting the need for effective FL
client selection. To address this issue, we propose Cellular Automaton-based
Client Selection (CA-CS), a novel client selection algorithm, which leverages
Cellular Automata (CA) as models to effectively capture spatio-temporal changes
in a fast-evolving environment. CA-CS considers the computational resources and
communication capacity of each participating client, while also accounting for
inter-client interactions between neighbors during the client selection
process, enabling intelligent client selection for online FL processes on data
streams that closely resemble real-world scenarios. In this paper, we present a
thorough evaluation of the proposed CA-CS algorithm using MNIST and CIFAR-10
datasets, while making a direct comparison against a uniformly random client
selection scheme. Our results demonstrate that CA-CS achieves comparable
accuracy to the random selection approach, while effectively avoiding
high-latency clients.
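To make the selection mechanism concrete, the following is a minimal, illustrative sketch of a CA-style selection rule: clients sit on a 2D lattice, each cell carries that client's compute capacity and communication latency, and a client's score blends its own resources with the average of its Moore neighborhood, mirroring the inter-client interactions described above. The grid size, scoring weights, and static state are assumptions for illustration, not the authors' exact CA-CS rule.
```python
import random

GRID = 8          # assumed grid side length: clients arranged on an 8x8 lattice
K = 10            # number of clients selected per round

# Each cell holds an assumed per-client state: compute capacity (higher is
# better) and communication latency in ms (lower is better).
random.seed(0)
state = [[{"compute": random.uniform(0.1, 1.0), "latency": random.uniform(10, 300)}
          for _ in range(GRID)] for _ in range(GRID)]

def neighbors(i, j):
    """Moore neighborhood (8 surrounding cells) with wrap-around borders."""
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                yield state[(i + di) % GRID][(j + dj) % GRID]

def score(i, j):
    """Assumed scoring rule: own resources blended with the neighborhood average,
    so a well-resourced client surrounded by slow neighbors is ranked lower."""
    own = state[i][j]
    nbrs = list(neighbors(i, j))
    nbr_compute = sum(n["compute"] for n in nbrs) / len(nbrs)
    nbr_latency = sum(n["latency"] for n in nbrs) / len(nbrs)
    own_fit = own["compute"] - own["latency"] / 300.0
    nbr_fit = nbr_compute - nbr_latency / 300.0
    return 0.7 * own_fit + 0.3 * nbr_fit      # assumed weights

def select_clients():
    ranked = sorted(((score(i, j), (i, j)) for i in range(GRID) for j in range(GRID)),
                    reverse=True)
    return [cell for _, cell in ranked[:K]]

print(select_clients())   # grid coordinates of the K selected clients
```
In an online setting the cell states would be refreshed each round as client conditions change; here they are fixed for brevity.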
Related papers
- Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning [51.560590617691005]
We investigate whether it is possible to "squeeze more juice" out of each cohort than what is possible in a single communication round.
Our approach leads to up to a 74% reduction in the total communication cost needed to train an FL model in the cross-device setting.
arXiv Detail & Related papers (2024-06-03T08:48:49Z)
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
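A toy sketch of the single-round criterion: greedily grow the client subset so that its average gradient stays close to the average over the full client set. The greedy search and random gradients below are assumptions for illustration; the multi-round fairness constraint is not shown.
```python
import numpy as np

rng = np.random.default_rng(0)
grads = rng.normal(size=(20, 5))          # toy per-client gradients: 20 clients, 5-dim
full_mean = grads.mean(axis=0)            # gradient of "full participation"
K = 5

selected = []
for _ in range(K):
    # Greedily add the client that brings the subset mean closest to the full mean.
    best, best_err = None, np.inf
    for c in range(len(grads)):
        if c in selected:
            continue
        subset_mean = grads[selected + [c]].mean(axis=0)
        err = np.linalg.norm(subset_mean - full_mean)
        if err < best_err:
            best, best_err = c, err
    selected.append(best)

print(selected, best_err)   # chosen subset and its remaining estimation error
```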
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
- Greedy Shapley Client Selection for Communication-Efficient Federated Learning [32.38170282930876]
Standard client selection algorithms for Federated Learning (FL) are often unbiased and involve uniform random sampling of clients.
We develop a biased client selection strategy, GreedyFed, that identifies and greedily selects the most contributing clients in each communication round.
Compared to various client selection strategies on several real-world datasets, GreedyFed demonstrates fast and stable convergence with high accuracy under timing constraints.
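A minimal sketch of greedy, contribution-based selection in this spirit, assuming the server maintains running per-client contribution estimates (e.g., approximate Shapley values fed by observed validation gains); the scoring and update rule are placeholders rather than GreedyFed's actual procedure.
```python
# Hypothetical contribution scores the server maintains (e.g., running Shapley
# value estimates); the update rule and constants below are assumptions.
scores = {c: 0.0 for c in range(20)}

def select(m):
    """Greedily pick the m clients with the highest estimated contribution."""
    return sorted(scores, key=scores.get, reverse=True)[:m]

def update(client, observed_gain, alpha=0.3):
    """Blend a newly observed contribution (e.g., validation-loss reduction
    attributed to this client) into its running estimate."""
    scores[client] = (1 - alpha) * scores[client] + alpha * observed_gain

# One toy round: all estimates start equal, so the first selection is arbitrary.
chosen = select(5)
for c in chosen:
    update(c, observed_gain=0.01 * (c + 1))
print(select(5))   # selection after the estimates have been refreshed
```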
arXiv Detail & Related papers (2023-12-14T16:44:38Z)
- Heterogeneity-Guided Client Sampling: Towards Fast and Efficient Non-IID Federated Learning [14.866327821524854]
HiCS-FL is a novel client selection method in which the server estimates statistical heterogeneity of a client's data using the client's update of the network's output layer.
In non-IID settings HiCS-FL achieves faster convergence than state-of-the-art FL client selection schemes.
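An illustrative proxy for this idea, assuming a classifier whose output-layer bias update roughly tracks the client's label distribution: convert the per-class bias update into a pseudo distribution and score heterogeneity by its entropy. The softmax-and-entropy recipe below is an assumption, not HiCS-FL's exact estimator.
```python
import numpy as np

def heterogeneity_from_bias_update(bias_delta):
    """Toy heterogeneity proxy: turn the per-class output-bias update into a
    pseudo label distribution and measure its entropy. Classes that dominate a
    client's data tend to receive larger bias updates, so low entropy suggests
    a skewed (non-IID) local dataset."""
    p = np.exp(bias_delta - bias_delta.max())
    p /= p.sum()
    return -(p * np.log(p + 1e-12)).sum()

balanced = np.array([0.10, 0.11, 0.09, 0.10, 0.10])   # similar updates per class
skewed   = np.array([2.0, 0.0, 0.0, 0.0, 0.0])        # one class dominates
print(heterogeneity_from_bias_update(balanced))        # high entropy
print(heterogeneity_from_bias_update(skewed))          # low entropy
```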
arXiv Detail & Related papers (2023-09-30T00:29:30Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
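A minimal sketch of a per-client AMSGrad-style local step where each client carries its own learning rate; FedLALR's actual rule for scheduling those rates is not reproduced here.
```python
import numpy as np

class ClientAMSGrad:
    """Per-client AMSGrad-style local optimizer with its own learning rate.
    A sketch of the 'client-specific adaptive learning rate' idea only."""
    def __init__(self, dim, lr, beta1=0.9, beta2=0.99, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(dim)
        self.v = np.zeros(dim)
        self.v_hat = np.zeros(dim)

    def step(self, params, grad):
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)      # AMSGrad max correction
        return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

# Two clients with different (assumed) learning rates tuned to their local data.
params = np.ones(3)
fast, slow = ClientAMSGrad(3, lr=0.05), ClientAMSGrad(3, lr=0.005)
print(fast.step(params, grad=np.array([0.2, -0.1, 0.4])))
print(slow.step(params, grad=np.array([0.2, -0.1, 0.4])))
```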
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Adaptive Control of Client Selection and Gradient Compression for Efficient Federated Learning [28.185096784982544]
Federated learning (FL) allows multiple clients to cooperatively train models without disclosing their local data.
We propose a heterogeneous-aware FL framework, called FedCG, with adaptive client selection and gradient compression.
Experiments on both real-world prototypes and simulations show that FedCG can provide up to 5.3× speedup compared to other methods.
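Gradient compression can be illustrated with simple top-k sparsification, with the kept fraction chosen per client according to its (assumed) network capacity; this is a generic stand-in, not FedCG's actual compressor or its adaptive selection policy.
```python
import numpy as np

def topk_compress(grad, ratio):
    """Keep only the largest-magnitude entries of a gradient, zeroing the rest;
    the ratio would be chosen per client based on its network capacity."""
    k = max(1, int(ratio * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

g = np.array([0.9, -0.05, 0.3, 0.01, -0.7])
print(topk_compress(g, ratio=0.4))   # a slow client uploads only 40% of entries
```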
arXiv Detail & Related papers (2022-12-19T14:19:07Z)
- ON-DEMAND-FL: A Dynamic and Efficient Multi-Criteria Federated Learning Client Deployment Scheme [37.099990745974196]
We introduce On-Demand-FL, a client deployment approach for federated learning.
We make use of containerization technology such as Docker to build efficient environments.
A genetic algorithm (GA) is used to solve the multi-objective optimization problem.
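A toy, mutation-only genetic algorithm for choosing a client deployment under an assumed scalarized multi-objective fitness (data quality versus energy and latency); the objectives, weights, and operators are placeholders rather than the paper's formulation, and the Docker-based deployment itself is omitted.
```python
import random

random.seed(1)
N, K, POP, GENS = 30, 8, 40, 50      # clients, deployment size, population, generations
energy  = [random.uniform(0.1, 1.0) for _ in range(N)]   # assumed per-client costs
latency = [random.uniform(0.1, 1.0) for _ in range(N)]
quality = [random.uniform(0.1, 1.0) for _ in range(N)]   # assumed data quality

def fitness(ind):
    """Assumed scalarization: reward data quality, penalize energy and latency."""
    return sum(quality[c] - 0.5 * energy[c] - 0.5 * latency[c] for c in ind)

def mutate(ind):
    """Swap one deployed client for a random replacement."""
    out = set(ind)
    out.discard(random.choice(list(out)))
    while len(out) < K:
        out.add(random.randrange(N))
    return tuple(sorted(out))

pop = [tuple(sorted(random.sample(range(N), K))) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(POP - len(survivors))]

print(max(pop, key=fitness))   # best client deployment found by the GA
```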
arXiv Detail & Related papers (2022-11-05T13:41:19Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
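One simple class-imbalance measure for a candidate client group, computed in the clear for illustration: the squared distance between the group's aggregate class distribution and the uniform distribution. Both the measure and the omission of the homomorphic-encryption step are simplifications, not the paper's exact construction.
```python
import numpy as np

def class_imbalance(counts):
    """Squared distance between the group's class distribution and the uniform
    distribution; 0 means the grouped data is perfectly class-balanced."""
    p = counts / counts.sum()
    u = np.full_like(p, 1.0 / len(p))
    return float(((p - u) ** 2).sum())

# Toy label counts (10 classes) aggregated over two candidate client groups.
balanced_group = np.array([100.] * 10)
skewed_group   = np.array([900., 10., 10., 10., 10., 10., 10., 10., 10., 20.])
print(class_imbalance(balanced_group))   # ~0.0
print(class_imbalance(skewed_group))     # large: group should be avoided
```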
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving user privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Budgeted Online Selection of Candidate IoT Clients to Participate in Federated Learning [33.742677763076]
Federated Learning (FL) is an architecture in which model parameters are exchanged instead of client data.
FL trains a global model by communicating with clients over communication rounds.
We propose an online stateful FL approach to find the best candidate clients, together with an IoT client alarm application.
arXiv Detail & Related papers (2020-11-16T06:32:31Z)
- Multi-Armed Bandit Based Client Scheduling for Federated Learning [91.91224642616882]
Federated learning (FL) features ubiquitous properties such as reduced communication overhead and preservation of data privacy.
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
This work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowing wireless channel state information and statistical characteristics of clients.
arXiv Detail & Related papers (2020-07-05T12:32:32Z)
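A minimal UCB1-style sketch of bandit-based client scheduling: the server treats each client as an arm, observes a noisy reward after scheduling it, and balances exploration against exploitation without prior knowledge of channel statistics. Scheduling a single client per round and the reward model are simplifying assumptions.
```python
import math, random

random.seed(0)
N_CLIENTS, ROUNDS = 10, 200
# Unknown per-client "usefulness" (e.g., update quality per unit time) that the
# server only observes noisily after scheduling a client; values are made up.
true_reward = [random.uniform(0.2, 0.9) for _ in range(N_CLIENTS)]

counts = [0] * N_CLIENTS
means = [0.0] * N_CLIENTS

def ucb_pick(t):
    """UCB1 rule: schedule the client with the best optimism-adjusted estimate."""
    for c in range(N_CLIENTS):          # play every arm once first
        if counts[c] == 0:
            return c
    return max(range(N_CLIENTS),
               key=lambda c: means[c] + math.sqrt(2 * math.log(t) / counts[c]))

for t in range(1, ROUNDS + 1):
    c = ucb_pick(t)
    reward = true_reward[c] + random.gauss(0, 0.1)     # noisy observation
    counts[c] += 1
    means[c] += (reward - means[c]) / counts[c]        # running average

print(max(range(N_CLIENTS), key=lambda c: means[c]))   # client the bandit favors
```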
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.