An Incentive Mechanism for Federated Learning Based on Multiple Resource
Exchange
- URL: http://arxiv.org/abs/2312.08096v1
- Date: Wed, 13 Dec 2023 12:28:37 GMT
- Title: An Incentive Mechanism for Federated Learning Based on Multiple Resource
Exchange
- Authors: Ruonan Dong, Hui Xu, Han Zhang, GuoPeng Zhang
- Abstract summary: Federated Learning (FL) is a distributed machine learning paradigm that addresses privacy concerns in machine learning.
We introduce a multi-user collaborative computing framework, categorizing users into two roles: model owners (MOs) and data owners (DOs).
We show that the proposed collaborative computing framework can achieve an accuracy of more than 95% while minimizing the overall time to complete an FL task.
- Score: 5.385462087305977
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a distributed machine learning paradigm that
addresses privacy concerns in machine learning and still guarantees high test
accuracy. However, achieving the necessary accuracy by having all clients
participate in FL is impractical, given the constraints on clients' local
computing resources. In this paper, we introduce a multi-user collaborative
computing framework, categorizing users into two roles: model owners (MOs) and
data owners (DOs). Without resorting to monetary incentives, an MO can encourage
more DOs to join in FL by allowing the DOs to offload extra local computing
tasks to the MO for execution. This exchange of "data" for "computing
resources" streamlines the incentives for clients to engage more effectively in
FL. We formulate the interaction between the MO and the DOs as an optimization
problem whose objective is to effectively utilize the communication and computing
resources of the MO and the DOs to minimize the time to complete an FL task. The
proposed problem is a mixed integer nonlinear programming (MINLP) problem with
high computational complexity. We first decompose it into two distinct
subproblems, namely the client selection problem and the resource allocation
problem, to separate the integer variables from the continuous variables. Then,
an effective iterative algorithm is proposed to solve the problem. Simulation results
demonstrate that the proposed collaborative computing framework can achieve an
accuracy of more than 95% while minimizing the overall time to complete an FL
task.
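To make the decomposition concrete, the following is one plausible shape of the optimization, followed by a runnable sketch of the alternating procedure. All notation and heuristics below are illustrative assumptions, not the paper's exact model: x_i selects DO i; d_i, u_i, and o_i are DO i's local workload, model upload size, and offloaded task size; f_i is its compute speed; and b_i, c_i are the MO's bandwidth and compute shares (capacities B and F).
```latex
% Hypothetical min-max completion-time MINLP (notation assumed above)
\begin{aligned}
\min_{x,\,b,\,c}\quad & \max_{i:\,x_i=1}\left(\frac{d_i}{f_i} + \frac{u_i}{B\,b_i} + \frac{o_i}{F\,c_i}\right)\\
\text{s.t.}\quad & \sum_i x_i b_i \le 1,\qquad \sum_i x_i c_i \le 1,\\
& x_i \in \{0,1\},\qquad b_i,\,c_i \ge 0.
\end{aligned}
```
The sketch below alternates between the two subproblems named in the abstract: a greedy heuristic for the integer client-selection variables and a simple proportional rule for the continuous resource-allocation variables. The proportional rule, the bottleneck-dropping heuristic, and the minimum-cohort constant K (a stand-in for the accuracy requirement) are assumptions for illustration.
```python
# Minimal sketch of the decompose-and-iterate idea: alternate between the
# integer client-selection subproblem (greedy heuristic) and the continuous
# resource-allocation subproblem (proportional rule). All parameter names,
# the round-time model, and the constant K are illustrative assumptions,
# not the paper's exact algorithm.
import random

random.seed(0)

N = 8                                                    # candidate DOs
data = [random.uniform(1.0, 4.0) for _ in range(N)]     # local workloads d_i
f_do = [random.uniform(0.5, 1.5) for _ in range(N)]     # DO compute speeds f_i
upload = [random.uniform(0.2, 0.6) for _ in range(N)]   # model upload sizes u_i
offload = [random.uniform(0.5, 2.0) for _ in range(N)]  # tasks offloaded to the MO o_i
B_MO, F_MO = 2.0, 4.0                                   # MO bandwidth / compute capacity
K = 4                                                    # assumed accuracy proxy: keep >= K DOs

def allocate(sel):
    """Continuous subproblem (heuristic): split the MO's bandwidth and compute
    among the selected DOs in proportion to their demands."""
    tu = sum(upload[i] for i in sel)
    to = sum(offload[i] for i in sel)
    return {i: upload[i] / tu for i in sel}, {i: offload[i] / to for i in sel}

def per_do_time(i, bw, cp):
    """Local training + model upload + offloaded task executed at the MO."""
    return data[i] / f_do[i] + upload[i] / (B_MO * bw[i]) + offload[i] / (F_MO * cp[i])

def round_time(sel, bw, cp):
    """Synchronous aggregation: the slowest selected DO dominates the round."""
    return max(per_do_time(i, bw, cp) for i in sel)

# Integer subproblem (heuristic): start from all DOs and greedily drop the
# current bottleneck DO while doing so shortens the round and >= K DOs remain.
selected = list(range(N))
bw, cp = allocate(selected)
best = round_time(selected, bw, cp)
while len(selected) > K:
    bottleneck = max(selected, key=lambda i: per_do_time(i, bw, cp))
    trial = [i for i in selected if i != bottleneck]
    t_bw, t_cp = allocate(trial)
    t = round_time(trial, t_bw, t_cp)
    if t >= best:
        break
    selected, bw, cp, best = trial, t_bw, t_cp, t

print(f"selected DOs: {selected}, estimated round time: {best:.2f}")
```
In the paper, the continuous subproblem is handled by the proposed iterative algorithm rather than a proportional rule; the point of the sketch is only the alternating structure that separates the integer variables from the continuous ones.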
Related papers
- A Framework for testing Federated Learning algorithms using an edge-like environment [0.0]
Federated Learning (FL) is a machine learning paradigm in which many clients cooperatively train a single centralized model while keeping their data private and decentralized.
It is non-trivial to accurately evaluate the contributions of local models in global centralized model aggregation.
This reflects a major challenge in FL, commonly known as data imbalance or class imbalance.
In this work, a framework is proposed and implemented to assess FL algorithms in an easier and more scalable way.
arXiv Detail & Related papers (2024-07-17T19:52:53Z)
- Fair Concurrent Training of Multiple Models in Federated Learning [32.74516106486226]
Federated learning (FL) enables collaborative learning across multiple clients.
The recent proliferation of FL applications may increasingly require multiple FL tasks to be trained simultaneously.
Current multiple-model FL (MMFL) algorithms use naive average-based client-task allocation schemes.
We propose a difficulty-aware algorithm that dynamically allocates clients to tasks in each training round.
arXiv Detail & Related papers (2024-04-22T02:41:10Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed that recasts the training-latency optimization as a graph edge selection problem.
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
- Training Latency Minimization for Model-Splitting Allowed Federated Edge Learning [16.8717239856441]
We propose a model-splitting allowed FL (SFL) framework to alleviate the shortage of computing power faced by clients when training deep neural networks (DNNs) with FL.
Under the synchronized global update setting, the latency to complete a round of global training is determined by the maximum latency for the clients to complete a local training session.
To solve this mixed integer nonlinear programming problem, we first propose a regression method to fit the quantitative relationship between the cut layer and other parameters of an AI model, and thus transform the training latency minimization problem (TLMP) into a continuous problem.
arXiv Detail & Related papers (2023-07-21T12:26:42Z)
- Joint Age-based Client Selection and Resource Allocation for Communication-Efficient Federated Learning over NOMA Networks [8.030674576024952]
In federated learning (FL), distributed clients can collaboratively train a shared global model while retaining their own training data locally.
In this paper, a joint optimization problem of client selection and resource allocation is formulated, aiming to minimize the total time consumption of each round in FL over a non-orthogonal multiple access (NOMA) enabled wireless network.
In addition, a server-side artificial neural network (ANN) is proposed to predict the FL models of clients who are not selected at each round to further improve FL performance.
arXiv Detail & Related papers (2023-04-18T13:58:16Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL achieves good performance in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)