FedMint: Intelligent Bilateral Client Selection in Federated Learning
with Newcomer IoT Devices
- URL: http://arxiv.org/abs/2211.01805v1
- Date: Mon, 31 Oct 2022 12:48:56 GMT
- Title: FedMint: Intelligent Bilateral Client Selection in Federated Learning
with Newcomer IoT Devices
- Authors: Osama Wehbi, Sarhad Arisdakessian, Omar Abdel Wahab, Hadi Otrok, Safa
Otoum, Azzam Mourad, Mohsen Guizani
- Abstract summary: Federated Learning is a novel distributed privacy-preserving learning paradigm.
It enables the collaboration among several participants (e.g., Internet of Things devices) for the training of machine learning models.
We present FedMint, an intelligent client selection approach for federated learning on IoT devices using game theory and a bootstrapping mechanism.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated Learning (FL) is a novel distributed privacy-preserving learning
paradigm that enables collaboration among several participants (e.g.,
Internet of Things devices) for the training of machine learning models.
However, selecting the participants that would contribute to this collaborative
training is highly challenging. Adopting a random selection strategy would
entail substantial problems due to the heterogeneity across participants in
data quality and in computational and communication resources. Although
several approaches have been proposed in the literature to overcome the problem
of random selection, most of them follow a unilateral selection
strategy: they base the selection solely on the federated
server's side, while overlooking the interests of the client devices in the
process. To overcome this problem, we present in this paper FedMint, an
intelligent client selection approach for federated learning on IoT devices
using game theory and a bootstrapping mechanism. Our solution involves the design
of: (1) preference functions that allow the client IoT devices and federated servers
to rank each other according to several factors such as accuracy
and price, (2) intelligent matching algorithms that take the preferences of
both parties into account, and (3) a bootstrapping technique
that capitalizes on the collaboration of multiple federated servers to
assign an initial accuracy value to newly connected IoT devices. Based on our
simulation findings, our strategy surpasses the VanillaFL selection approach in
maximizing both the revenues of the client devices and the accuracy of the
global federated learning model.
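The bilateral matching described above can be sketched as a many-to-one deferred-acceptance procedure. This is an illustrative reconstruction, not the paper's exact algorithm: the preference functions (clients ranking servers by offered price, servers ranking clients by reported accuracy), the capacity limit, and all field names are assumptions.

```python
# Hypothetical sketch of bilateral client-server matching via deferred
# acceptance, in the spirit of FedMint's preference-based design. The
# preference functions and capacities below are illustrative, not the
# paper's exact formulation.

def client_pref(client, servers):
    # Clients rank servers by offered price (highest payment first).
    return sorted(servers, key=lambda s: -s["price"])

def server_pref_key(server, client):
    # Servers rank clients by reported model accuracy (higher is better).
    return client["accuracy"]

def deferred_acceptance(clients, servers, capacity):
    proposals = {c["id"]: iter(client_pref(c, servers)) for c in clients}
    held = {s["id"]: [] for s in servers}   # tentatively accepted clients
    unmatched = list(clients)
    while unmatched:
        client = unmatched.pop()
        try:
            server = next(proposals[client["id"]])
        except StopIteration:
            continue                        # client has exhausted all servers
        slot = held[server["id"]]
        slot.append(client)
        slot.sort(key=lambda c: -server_pref_key(server, c))
        if len(slot) > capacity:
            rejected = slot.pop()           # drop the least-preferred client
            unmatched.append(rejected)
    return held
```

Because both sides' rankings drive acceptances and rejections, neither the server nor the clients can be ignored, which is the bilateral property the abstract emphasizes.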
Related papers
- SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Non-convex Cross-Device Federated Learning [48.072207894076556]
Cross-device training is a subfield of federated learning where the number of clients can reach into the billions.
Standard approaches and local methods are prone to issues involving factors as crucial as cross-device similarity.
Our method is the first of its kind: it does not impose requirements on the objective and provably benefits from clients having similar data.
arXiv Detail & Related papers (2024-05-30T15:07:30Z) - Ranking-based Client Selection with Imitation Learning for Efficient Federated Learning [20.412469498888292]
Federated Learning (FL) enables multiple devices to collaboratively train a shared model.
The selection of participating devices in each training round critically affects both the model performance and training efficiency.
We introduce a novel device selection solution called FedRank, which is an end-to-end, ranking-based approach.
arXiv Detail & Related papers (2024-05-07T08:44:29Z) - A Comprehensive Survey On Client Selections in Federated Learning [3.438094543455187]
The selection of clients to participate in the training process is a critical factor for the performance of the overall system.
We provide a comprehensive overview of the state-of-the-art client selection techniques in Federated Learning.
arXiv Detail & Related papers (2023-11-12T10:40:43Z) - Optimizing Server-side Aggregation For Robust Federated Learning via
Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z) - Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated
Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
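The class-imbalance measure behind such a sampler can be illustrated as below. This is a plain-text sketch under assumptions: one natural choice is the squared distance between the grouped label distribution and the uniform distribution, whereas Fed-CBS derives its measure under homomorphic encryption; only the arithmetic is shown here.

```python
# Illustrative class-imbalance measure for a candidate client group: the
# squared L2 distance between the grouped label distribution and the uniform
# distribution (0 when perfectly balanced). Fed-CBS computes its measure in a
# privacy-preserving way; this plain version only shows the arithmetic.

def imbalance(label_counts_per_client, num_classes):
    totals = [0] * num_classes
    for counts in label_counts_per_client:
        for cls, n in counts.items():
            totals[cls] += n
    total = sum(totals)
    probs = [t / total for t in totals]
    uniform = 1.0 / num_classes
    return sum((p - uniform) ** 2 for p in probs)
```

A sampler would then prefer the candidate group whose pooled data minimizes this score.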
arXiv Detail & Related papers (2022-09-30T05:42:56Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
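The shared-representation idea above can be sketched minimally: every client applies a common feature map and keeps its own head. The linear shapes and the split into shared/personal parameters are illustrative assumptions, not the paper's exact architecture or its alternating update schedule.

```python
# Minimal sketch of a global common representation with a client-specific
# head: shared linear features followed by a personal linear readout. All
# shapes here are illustrative assumptions.

def predict(shared_weights, head, x):
    # Shared linear representation (one row of weights per feature).
    feats = [sum(w * xi for w, xi in zip(row, x)) for row in shared_weights]
    # Client-specific linear head on top of the shared features.
    return sum(h * f for h, f in zip(head, feats))
```

In training, the shared weights would be aggregated across all clients while each head stays local, which is what yields a personalized solution per client.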
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - Client Selection in Federated Learning based on Gradients Importance [5.263296985310379]
Federated learning (FL) enables multiple devices to collaboratively learn a global model without sharing their personal data.
In this paper, we investigate and design a device selection strategy based on the importance of the gradient norms.
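A gradient-norm-based selection rule of this kind can be sketched as ranking clients by the L2 norm of their reported update and keeping the top k. The exact weighting and normalization the paper uses are not given here, so this is an assumption-laden illustration.

```python
# Hedged sketch of gradient-norm-based client selection: rank clients by the
# L2 norm of their last reported gradient and keep the top k. The ranking
# criterion is the general idea; the paper's exact weighting is not shown.
import math

def select_by_gradient_norm(client_grads, k):
    # client_grads: mapping client_id -> flat list of gradient components
    norms = {cid: math.sqrt(sum(g * g for g in grad))
             for cid, grad in client_grads.items()}
    return sorted(norms, key=norms.get, reverse=True)[:k]
```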
arXiv Detail & Related papers (2021-11-19T11:53:23Z) - Motivating Learners in Multi-Orchestrator Mobile Edge Learning: A
Stackelberg Game Approach [54.28419430315478]
Mobile Edge Learning enables distributed training of Machine Learning models over heterogeneous edge devices.
In MEL, the training performance deteriorates without the availability of sufficient training data or computing resources.
We propose an incentive mechanism, where we formulate the orchestrators-learners interactions as a 2-round Stackelberg game.
arXiv Detail & Related papers (2021-09-25T17:27:48Z) - SCEI: A Smart-Contract Driven Edge Intelligence Framework for IoT
Systems [15.796325306292134]
Federated learning (FL) enables collaborative training of a shared model on edge devices while maintaining data privacy.
Various personalized approaches have been proposed, but such approaches fail to handle underlying shifts in data distribution.
This paper presents a dynamically optimized personal deep learning scheme based on blockchain and federated learning.
arXiv Detail & Related papers (2021-03-12T02:57:05Z) - Toward Multiple Federated Learning Services Resource Sharing in Mobile
Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.