Budgeted Online Selection of Candidate IoT Clients to Participate in
Federated Learning
- URL: http://arxiv.org/abs/2011.09849v1
- Date: Mon, 16 Nov 2020 06:32:31 GMT
- Title: Budgeted Online Selection of Candidate IoT Clients to Participate in
Federated Learning
- Authors: Ihab Mohammed, Shadha Tabatabai, Ala Al-Fuqaha, Faissal El Bouanani,
Junaid Qadir, Basheer Qolomany, Mohsen Guizani
- Abstract summary: Federated Learning (FL) is an architecture in which model parameters are exchanged instead of client data.
FL trains a global model by communicating with clients over communication rounds.
We propose an online stateful FL heuristic to find the best candidate clients, together with an IoT client alarm application.
- Score: 33.742677763076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning (ML), and Deep Learning (DL) in particular, play a vital
role in providing smart services to the industry. These techniques however
suffer from privacy and security concerns since data is collected from clients
and then stored and processed at a central location. Federated Learning (FL),
an architecture in which model parameters are exchanged instead of client data,
has been proposed as a solution to these concerns. Nevertheless, FL trains a
global model by communicating with clients over communication rounds, which
introduces more traffic on the network and increases the convergence time to
the target accuracy. In this work, we solve the problem of optimizing accuracy
in stateful FL with a budgeted number of candidate clients by selecting the
best candidate clients in terms of test accuracy to participate in the training
process. Next, we propose an online stateful FL heuristic to find the best
candidate clients. Additionally, we propose an IoT client alarm application
that utilizes the proposed heuristic in training a stateful FL global model
based on IoT device type classification to alert clients about unauthorized IoT
devices in their environment. To test the efficiency of the proposed online
heuristic, we conduct several experiments using a real dataset and compare the
results against state-of-the-art algorithms. Our results indicate that the
proposed heuristic outperforms the online random algorithm with up to 27% gain
in accuracy. Additionally, the performance of the proposed online heuristic is
comparable to the performance of the best offline algorithm.
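The budgeted online selection idea can be illustrated with a secretary-style threshold rule: observe an initial fraction of arriving clients, then admit clients whose test accuracy beats the best accuracy observed so far, up to the budget. The sketch below is a minimal illustration under that assumption; the function name and the fallback policy are hypothetical, not the paper's exact heuristic.

```python
import math

def select_clients_online(accuracies, budget):
    """Pick up to `budget` client indices from a stream of observed
    test accuracies using a secretary-style threshold rule.
    Illustrative sketch only, not the paper's heuristic."""
    n = len(accuracies)
    observe = max(1, int(n / math.e))      # observation phase length
    threshold = max(accuracies[:observe])  # best accuracy seen early on
    selected = []
    for i in range(observe, n):            # selection phase
        if len(selected) == budget:
            break
        if accuracies[i] >= threshold:
            selected.append(i)
    # hypothetical fallback: fill any remaining budget from the tail
    for i in range(n - 1, observe - 1, -1):
        if len(selected) == budget:
            break
        if i not in selected:
            selected.append(i)
    return sorted(selected)
```

For example, with a budget of 2 over six streamed clients, the rule admits the first two clients after the observation phase whose accuracy matches or beats the early maximum.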
Related papers
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- TRAIL: Trust-Aware Client Scheduling for Semi-Decentralized Federated Learning [13.144501509175985]
We propose a TRust-Aware clIent scheduLing mechanism called TRAIL, which assesses client states and contributions.
We focus on a semi-decentralized FL framework where edge servers and clients train a shared global model using unreliable intra-cluster model aggregation and inter-cluster model consensus.
Experiments conducted on real-world datasets demonstrate that TRAIL outperforms state-of-the-art baselines, achieving an improvement of 8.7% in test accuracy and a reduction of 15.3% in training loss.
arXiv Detail & Related papers (2024-12-16T05:02:50Z)
- GPFL: A Gradient Projection-Based Client Selection Framework for Efficient Federated Learning [6.717563725609496]
Client selection is crucial in federated learning for determining which clients participate in training.
We propose GPFL, which measures client value by comparing local and global descent directions.
GPFL exhibits shorter computation times through pre-selection and parameter reuse in federated learning.
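A simple way to "measure client value by comparing local and global descent directions" is cosine similarity between gradient vectors. The sketch below is a hypothetical stand-in for that idea, not GPFL's actual gradient-projection algorithm; the function name is an assumption.

```python
import numpy as np

def client_value(local_grad, global_grad):
    """Score a client by how well its local descent direction aligns
    with the global one (cosine similarity); higher scores mean the
    client's update is more consistent with the global objective."""
    denom = np.linalg.norm(local_grad) * np.linalg.norm(global_grad)
    return float(np.dot(local_grad, global_grad) / denom) if denom > 0 else 0.0
```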
arXiv Detail & Related papers (2024-03-26T16:14:43Z)
- Intelligent Client Selection for Federated Learning using Cellular Automata [0.5849783371898033]
FL has emerged as a promising solution for privacy enhancement and latency reduction in various real-world applications, such as transportation, communications, and healthcare.
We propose Cellular Automaton-based Client Selection (CA-CS) as a novel client selection algorithm.
Our results demonstrate that CA-CS achieves comparable accuracy to the random selection approach, while effectively avoiding high-latency Federated clients.
arXiv Detail & Related papers (2023-10-01T09:40:40Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
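A per-client AMSGrad-style step, in which each client's effective per-coordinate learning rate adapts to its own gradient history, might look like the following sketch. The state layout and hyperparameters are assumptions for illustration, not FedLALR's exact update rule.

```python
import numpy as np

def amsgrad_client_step(w, grad, state, base_lr=0.01,
                        beta1=0.9, beta2=0.999, eps=1e-8):
    """One local AMSGrad-style step whose effective learning rate
    adapts to this client's own gradient history. Hypothetical
    sketch of a FedLALR-like rule, not the paper's update."""
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad       # momentum
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2  # 2nd moment
    state["v_hat"] = np.maximum(state["v_hat"], state["v"])    # AMSGrad max
    return w - base_lr * state["m"] / (np.sqrt(state["v_hat"]) + eps)
```

The `v_hat` maximum is what distinguishes AMSGrad from Adam: it keeps the effective learning rate non-increasing per coordinate, which is what enables the convergence analysis in this line of work.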
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- DPP-based Client Selection for Federated Learning with Non-IID Data [97.1195165400568]
This paper proposes a client selection (CS) method to tackle the communication bottleneck of federated learning (FL).
We first analyze the effect of CS in FL and show that FL training can be accelerated by adequately choosing participants to diversify the training dataset in each round of training.
We leverage data profiling and determinantal point process (DPP) sampling techniques to develop an algorithm termed Federated Learning with DPP-based Participant Selection (FL-DP$^3$S).
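Greedy MAP inference is a common way to draw a diverse subset under a DPP: repeatedly add the client whose inclusion maximizes the log-determinant of the selected kernel submatrix. The sketch below assumes a positive-definite similarity kernel `L` between clients and illustrates the DPP idea only, not the FL-DP$^3$S algorithm itself.

```python
import numpy as np

def greedy_dpp_select(L, k):
    """Greedily pick k client indices that approximately maximize
    det(L[S, S]) for a positive-definite similarity kernel L,
    favouring mutually dissimilar (diverse) clients."""
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(L.shape[0]):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_gain:
                best, best_gain = i, logdet
        if best is None:  # no extension keeps the submatrix positive-definite
            break
        selected.append(best)
    return selected
```

With two near-identical clients (similarity 0.9) and one dissimilar client, the greedy rule picks one of the similar pair plus the dissimilar client, diversifying the round's training data.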
arXiv Detail & Related papers (2023-03-30T13:14:54Z)
- ON-DEMAND-FL: A Dynamic and Efficient Multi-Criteria Federated Learning Client Deployment Scheme [37.099990745974196]
We introduce On-Demand-FL, a client deployment approach for federated learning.
We make use of containerization technology such as Docker to build efficient environments.
The Genetic algorithm (GA) is used to solve the multi-objective optimization problem.
arXiv Detail & Related papers (2022-11-05T13:41:19Z) - Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated
Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
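One plausible class-imbalance measure is the squared distance of the grouped label distribution from the uniform distribution, where 0 means perfectly balanced. The sketch below is a plain-text illustration of that kind of measure; Fed-CBS derives its measure under homomorphic encryption, which this sketch omits, and the exact formula here is an assumption.

```python
import numpy as np

def class_imbalance(label_counts):
    """Measure how far the grouped label distribution is from
    uniform (0 = perfectly balanced). Illustrative only; not
    Fed-CBS's privacy-preserving derivation."""
    p = np.asarray(label_counts, dtype=float)
    p = p / p.sum()                         # normalize counts
    uniform = np.full_like(p, 1.0 / len(p))
    return float(np.sum((p - uniform) ** 2))
```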
arXiv Detail & Related papers (2022-09-30T05:42:56Z) - Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account.
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-20T23:14:33Z) - Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm AdaFL to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- CatFedAvg: Optimising Communication-efficiency and Classification Accuracy in Federated Learning [2.2172881631608456]
We introduce a new family of Federated Learning algorithms called CatFedAvg.
It improves communication efficiency while improving the quality of learning using a category coverage maximization strategy.
Our experiments show an increase of 10 absolute percentage points in accuracy on the M dataset with 70% lower network transfer over FedAvg.
arXiv Detail & Related papers (2020-11-14T06:52:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.