ON-DEMAND-FL: A Dynamic and Efficient Multi-Criteria Federated Learning
Client Deployment Scheme
- URL: http://arxiv.org/abs/2211.02906v1
- Date: Sat, 5 Nov 2022 13:41:19 GMT
- Title: ON-DEMAND-FL: A Dynamic and Efficient Multi-Criteria Federated Learning
Client Deployment Scheme
- Authors: Mario Chahoud, Hani Sami, Azzam Mourad, Safa Otoum, Hadi Otrok, Jamal
Bentahar, Mohsen Guizani
- Abstract summary: We introduce On-Demand-FL, a client deployment approach for federated learning.
We make use of containerization technology such as Docker to build efficient environments.
A genetic algorithm (GA) is used to solve the multi-objective optimization problem.
- Score: 37.099990745974196
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we increase the availability and integration of
devices in the learning process to enhance the convergence of federated
learning (FL) models. Rather than gathering all the data in one location,
federated learning preserves privacy while keeping the ability to learn over
decentralized datasets. Over a number of rounds, the server combines the
updated weights obtained from each client until the model converges. Most of
the literature proposes client selection techniques to accelerate convergence
and boost accuracy. However, none of the existing proposals focus on the
flexibility to deploy and select clients as needed, wherever and whenever
that may be. In highly dynamic environments, some devices are in fact
unavailable to serve as FL clients, which limits the data available for
learning and the applicability of existing client selection solutions. In
this paper, we address these limitations by introducing On-Demand-FL, a
client deployment approach for FL that brings more volume and heterogeneity
of data into the learning process. We use containerization technology such
as Docker to build efficient environments on IoT and mobile devices serving
as volunteers, with Kubernetes used for orchestration. A genetic algorithm
(GA) is used to solve the multi-objective optimization problem thanks to its
evolutionary search strategy. Experiments using the Mobile Data Challenge
(MDC) dataset and the Localfed framework illustrate the relevance of the
proposed approach and the efficiency of deploying clients on the fly,
whenever and wherever needed, with fewer discarded rounds and more available
data.
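The round-based aggregation described in the abstract can be made concrete with a short sketch. The following is a minimal FedAvg-style loop, assuming the server averages client weight vectors weighted by local dataset size; the paper does not pin down its exact aggregation rule, and all names here (`local_train`, `n_samples`, the convergence tolerance) are illustrative.

```python
import numpy as np

def aggregate(client_weights, client_sizes):
    """FedAvg-style aggregation: average client weight vectors,
    weighting each client by its local dataset size."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)       # shape: (n_clients, n_params)
    coeffs = np.array(client_sizes) / total  # per-client mixing weights
    return coeffs @ stacked                  # weighted mean of the updates

def run_rounds(global_w, clients, local_train, rounds=50, tol=1e-4):
    """Repeat local training + aggregation until the global model
    stops moving (a simple convergence proxy) or rounds run out."""
    for _ in range(rounds):
        updates = [local_train(global_w, c) for c in clients]
        sizes = [c["n_samples"] for c in clients]
        new_w = aggregate(updates, sizes)
        if np.linalg.norm(new_w - global_w) < tol:
            break
        global_w = new_w
    return global_w
```

Weighting by dataset size is the standard FedAvg choice; an unweighted mean is simply the special case of equal client sizes.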
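The multi-objective deployment problem handed to the GA can likewise be sketched. Below, a chromosome is a binary deploy/skip mask over candidate devices, and the fitness is a hypothetical weighted sum rewarding data volume and label heterogeneity while penalizing deployment cost; the paper's actual objectives, encoding, and GA operators may differ.

```python
import random

def fitness(mask, volumes, entropies, costs, w=(1.0, 1.0, 0.5)):
    """Scalarized multi-objective fitness (illustrative weights):
    reward data volume and label heterogeneity, penalize cost."""
    vol = sum(v for v, m in zip(volumes, mask) if m)
    het = sum(e for e, m in zip(entropies, mask) if m)
    cost = sum(c for c, m in zip(costs, mask) if m)
    return w[0] * vol + w[1] * het - w[2] * cost

def evolve(volumes, entropies, costs, pop_size=40, gens=100, p_mut=0.05):
    """Simple GA: truncation selection, one-point crossover, bit-flip mutation."""
    n = len(volumes)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda m: fitness(m, volumes, entropies, costs), reverse=True)
        parents = pop[: pop_size // 2]       # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # mutate bits
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda m: fitness(m, volumes, entropies, costs))
```

A weighted-sum scalarization is only one way to handle multiple objectives; Pareto-based variants such as NSGA-II are a common alternative for problems of this shape.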
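Finally, the on-the-fly deployment of a Dockerized client through Kubernetes might look like the following, using the official Kubernetes Python client; the image name, environment variable, labels, and namespace are placeholders, not artifacts from the paper.

```python
from kubernetes import client, config

def deploy_fl_client(name: str, server_addr: str,
                     image: str = "example/fl-client:latest"):
    """Launch one containerized FL client as a Kubernetes pod.
    All names (image, env vars, labels) are illustrative placeholders."""
    config.load_kube_config()  # use load_incluster_config() inside a cluster
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name=name, labels={"app": "fl-client"}),
        spec=client.V1PodSpec(
            restart_policy="Never",  # a client pod serves one training stint
            containers=[client.V1Container(
                name="fl-client",
                image=image,
                env=[client.V1EnvVar(name="FL_SERVER_ADDR", value=server_addr)],
            )],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

# e.g. deploy_fl_client("fl-client-001", "fl-server.default.svc:8080")
```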
Related papers
- Modality Alignment Meets Federated Broadcasting [9.752555511824593]
Federated learning (FL) has emerged as a powerful approach to safeguard data privacy by training models across distributed edge devices without centralizing local data.
This paper introduces a novel FL framework leveraging modality alignment, where a text encoder resides on the server, and image encoders operate on local devices.
arXiv Detail & Related papers (2024-11-24T13:30:03Z)
- Optimizing Federated Learning by Entropy-Based Client Selection [13.851391819710367]
Deep learning domains typically require an extensive amount of data for optimal performance.
FedOptEnt is designed to mitigate performance issues caused by label distribution skew; a generic sketch of entropy-based client selection appears after this list.
The proposed method outperforms several state-of-the-art algorithms by up to 6% in classification accuracy.
arXiv Detail & Related papers (2024-11-02T13:31:36Z) - CDFL: Efficient Federated Human Activity Recognition using Contrastive Learning and Deep Clustering [12.472038137777474]
Human Activity Recognition (HAR) is vital for the automation and intelligent identification of human actions through data from diverse sensors.
Traditional machine learning approaches, which aggregate data on a central server for centralized processing, are memory-intensive and raise privacy concerns.
This work proposes CDFL, an efficient federated learning framework for image-based HAR.
arXiv Detail & Related papers (2024-07-17T03:17:53Z) - Efficient Data Distribution Estimation for Accelerated Federated Learning [5.085889377571319]
Federated Learning (FL) is a privacy-preserving machine learning paradigm where a global model is trained in-situ across a large number of distributed edge devices.
Devices are highly heterogeneous in both their system resources and training data.
Various client selection algorithms have been developed, showing promising performance improvement in terms of model coverage and accuracy.
arXiv Detail & Related papers (2024-06-03T20:33:17Z)
- Heterogeneity-Guided Client Sampling: Towards Fast and Efficient Non-IID Federated Learning [14.866327821524854]
HiCS-FL is a novel client selection method in which the server estimates statistical heterogeneity of a client's data using the client's update of the network's output layer.
In non-IID settings, HiCS-FL achieves faster convergence than state-of-the-art FL client selection schemes.
arXiv Detail & Related papers (2023-09-30T00:29:30Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose FedLALR, a heterogeneous local variant of AMSGrad in which each client adjusts its own learning rate; a per-client adaptive-step sketch appears after this list.
We show that this client-specific, auto-tuned learning rate scheduling converges and achieves linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training results.
In this paper, we present a novel FL algorithm, FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and conduct an analytical study of its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
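As referenced in the FedOptEnt entry above, here is a generic sketch of entropy-based client selection under label skew: score each client by the Shannon entropy of its label histogram and keep the top-k most balanced ones. This illustrates only the general idea; FedOptEnt's actual criterion is not reproduced here.

```python
import math

def label_entropy(label_counts):
    """Shannon entropy of a client's label histogram; higher means
    the local data covers classes more evenly (less skew)."""
    total = sum(label_counts.values())
    probs = [c / total for c in label_counts.values() if c > 0]
    return -sum(p * math.log(p) for p in probs)

def select_clients(clients_labels, k):
    """Pick the k clients with the most balanced label distributions.
    clients_labels: {client_id: {label: count}} (illustrative format)."""
    ranked = sorted(clients_labels,
                    key=lambda cid: label_entropy(clients_labels[cid]),
                    reverse=True)
    return ranked[:k]
```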
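And as referenced in the FedLALR entry, a per-client AMSGrad step where each client keeps its own moment estimates gives the flavor of client-specific adaptive learning rates; the hyperparameters and the omission of bias correction are simplifications, not FedLALR's exact schedule.

```python
import numpy as np

class ClientAMSGrad:
    """Per-client AMSGrad state: each client adapts its own effective
    step size from its local gradient history (illustrative of the
    client-specific adaptive-LR idea, not FedLALR's exact rule)."""
    def __init__(self, dim, lr=1e-2, b1=0.9, b2=0.99, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(dim)      # first-moment estimate
        self.v = np.zeros(dim)      # second-moment estimate
        self.v_hat = np.zeros(dim)  # running max of v (the AMSGrad fix)

    def step(self, w, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        return w - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```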
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.