Learning-Based Client Selection for Federated Learning Services Over
Wireless Networks with Constrained Monetary Budgets
- URL: http://arxiv.org/abs/2208.04322v1
- Date: Mon, 8 Aug 2022 06:00:07 GMT
- Title: Learning-Based Client Selection for Federated Learning Services Over
Wireless Networks with Constrained Monetary Budgets
- Authors: Zhipeng Cheng, Xuwei Fan, Minghui Liwang, Ning Chen, Xianbin Wang
- Abstract summary: We investigate a data quality-aware dynamic client selection problem for multiple federated learning (FL) services in a wireless network.
A multi-agent hybrid deep reinforcement learning-based algorithm is proposed to optimize the joint client selection and payment actions.
- Score: 8.285974405319735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate a data quality-aware dynamic client selection problem for
multiple federated learning (FL) services in a wireless network, where each
client holds dynamic datasets for the simultaneous training of multiple FL
services, and each FL service demander must pay its selected clients under a
constrained monetary budget. The problem is formalized as a non-cooperative
Markov game over the training rounds. A multi-agent hybrid deep reinforcement
learning-based algorithm is proposed to optimize the joint client selection and
payment actions while avoiding action conflicts. Simulation results indicate
that the proposed algorithm significantly improves training performance.
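To make the joint action structure concrete, here is a minimal Python sketch of one selection round, assuming a toy greedy bidding policy and a highest-payment conflict-resolution rule. All names and the pricing heuristic are illustrative assumptions, not the paper's algorithm (which learns these actions with multi-agent hybrid deep RL):

```python
import random

class ServiceAgent:
    """One FL service demander with a per-round monetary budget (illustrative)."""

    def __init__(self, name, budget):
        self.name = name
        self.budget = budget

    def propose(self, client_quality):
        """Greedily bid on the highest-quality clients until the budget runs out.
        In the paper this joint selection/payment action is learned, not greedy."""
        bids, spent = {}, 0.0
        for client, quality in sorted(client_quality.items(),
                                      key=lambda kv: kv[1], reverse=True):
            payment = round(1.0 + quality, 2)  # toy pricing rule (assumption)
            if spent + payment > self.budget:
                break
            bids[client] = payment
            spent += payment
        return bids


def resolve_conflicts(all_bids):
    """Assign each client to at most one service: the highest payment wins.
    A simple stand-in for the paper's conflict-avoidance mechanism."""
    winners = {}
    for service, bids in all_bids.items():
        for client, payment in bids.items():
            if client not in winners or payment > winners[client][1]:
                winners[client] = (service, payment)
    return winners


# One training round: two FL services bid for five clients of random data quality.
quality = {f"client{i}": random.random() for i in range(5)}
agents = [ServiceAgent("serviceA", budget=3.0), ServiceAgent("serviceB", budget=2.5)]
assignment = resolve_conflicts({a.name: a.propose(quality) for a in agents})
for client, (service, payment) in sorted(assignment.items()):
    print(f"{client} -> {service} at payment {payment}")
```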
Related papers
- FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning [57.38427653043984]
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, an innovative federated client-side adaptive algorithm designed to tackle the challenge of accelerating convergence while preserving stability.
We demonstrate that FedCAda outperforms the state-of-the-art methods in terms of adaptability, convergence, stability, and overall performance.
arXiv Detail & Related papers (2024-05-20T06:12:33Z)
- Fair Concurrent Training of Multiple Models in Federated Learning [32.74516106486226]
Federated learning (FL) enables collaborative learning across multiple clients.
The recent proliferation of FL applications may increasingly require multiple FL tasks to be trained simultaneously.
Current multiple-model FL (MMFL) algorithms use naive average-based client-task allocation schemes.
We propose a difficulty-aware algorithm that dynamically allocates clients to tasks in each training round (see the sketch after this entry).
arXiv Detail & Related papers (2024-04-22T02:41:10Z)
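A minimal sketch of the difficulty-aware idea above, under the assumption that per-task difficulty is summarized by a scalar (e.g., current loss); the function name and the proportional rule are hypothetical, not the paper's algorithm:

```python
# Hypothetical difficulty-aware client-task allocation for one MMFL round:
# tasks with higher current difficulty receive proportionally more clients,
# instead of a naive even split across tasks.
def allocate_clients(clients, task_difficulty):
    total = sum(task_difficulty.values())
    allocation, i = {}, 0
    for task, difficulty in task_difficulty.items():
        n = max(1, round(len(clients) * difficulty / total))
        allocation[task] = clients[i:i + n]
        i += n
    return allocation

tasks = {"taskA": 0.9, "taskB": 0.3, "taskC": 0.6}  # e.g., current losses
print(allocate_clients([f"c{i}" for i in range(10)], tasks))
```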
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients (see the sketch after this entry).
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
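As a rough illustration of a client-side adaptive step, here is one AMSGrad-style local update with a client-specific learning rate; FedLALR's actual auto-tuned schedule and convergence analysis are not reproduced here, and all names are assumptions:

```python
import numpy as np

def local_amsgrad_step(w, grad, state, client_lr,
                       beta1=0.9, beta2=0.99, eps=1e-8):
    """One AMSGrad-style local update with a per-client learning rate."""
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad          # 1st moment
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2     # 2nd moment
    state["v_hat"] = np.maximum(state["v_hat"], state["v"])       # AMSGrad max
    return w - client_lr * state["m"] / (np.sqrt(state["v_hat"]) + eps)

w = np.zeros(3)
state = {"m": np.zeros(3), "v": np.zeros(3), "v_hat": np.zeros(3)}
w = local_amsgrad_step(w, np.array([0.5, -0.2, 0.1]), state, client_lr=0.01)
print(w)
```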
- Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is derived by recasting the training-latency optimization as a graph edge selection problem (see the sketch after this entry).
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
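The pairing step can be approximated with a simple heuristic: repeatedly match the slowest remaining client with the fastest one, so paired latencies stay balanced. The paper casts this as a graph edge selection problem solved greedily; the sketch below is a simplified stand-in with hypothetical names:

```python
# Greedily pair the slowest remaining client with the fastest remaining one,
# approximating latency-balanced pairs for split federated learning.
def greedy_pair(compute_speed):
    ranked = sorted(compute_speed, key=compute_speed.get)  # slowest first
    pairs = []
    while len(ranked) >= 2:
        pairs.append((ranked.pop(0), ranked.pop(-1)))      # slowest + fastest
    return pairs

speeds = {"c1": 1.0, "c2": 4.0, "c3": 2.5, "c4": 3.0}
print(greedy_pair(speeds))  # [('c1', 'c2'), ('c3', 'c4')]
```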
- Price-Discrimination Game for Distributed Resource Management in Federated Learning [3.724337025141794]
In vanilla federated learning (FL) such as FedAvg, the parameter server (PS) and multiple distributed clients can form a typical buyer's market.
This paper proposes to differentiate the pricing of services provided by different clients rather than offering all clients the same price.
arXiv Detail & Related papers (2023-08-26T10:09:46Z)
- Addressing Client Drift in Federated Continual Learning with Adaptive Optimization [10.303676184878896]
We outline a framework for performing Federated Continual Learning (FCL) by using NetTailor as a candidate continual learning approach.
We show that adaptive federated optimization can reduce the adverse impact of client drift and showcase its effectiveness on CIFAR100, MiniImagenet, and Decathlon benchmarks.
arXiv Detail & Related papers (2022-03-24T20:00:03Z)
- On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g. mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that handles both each client's group and the individual clients within a unified optimization framework.
arXiv Detail & Related papers (2022-02-13T02:39:19Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- A Contract Theory based Incentive Mechanism for Federated Learning [52.24418084256517]
Federated learning (FL) is a privacy-preserving machine learning paradigm in which a collaborative model is trained by distributed clients.
To accomplish an FL task, the task publisher pays financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients.
Designing proper incentives for the FL clients is challenging because the task is trained privately by the clients.
arXiv Detail & Related papers (2021-08-12T07:30:42Z)
- Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.