Incentive Mechanism Design for Joint Resource Allocation in
Blockchain-based Federated Learning
- URL: http://arxiv.org/abs/2202.10938v1
- Date: Fri, 18 Feb 2022 02:19:26 GMT
- Title: Incentive Mechanism Design for Joint Resource Allocation in
Blockchain-based Federated Learning
- Authors: Zhilin Wang, Qin Hu, Ruinian Li, Minghui Xu, and Zehui Xiong
- Abstract summary: We propose an incentive mechanism to assign each client appropriate rewards for training and mining.
We transform the Stackelberg game model into two optimization problems, which are sequentially solved to derive the optimal strategies for both the model owner and clients.
- Score: 23.64441447666488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blockchain-based federated learning (BCFL) has recently gained tremendous
attention because of its advantages such as decentralization and privacy
protection of raw data. However, little research has focused on the allocation
of resources for clients in BCFL. In the BCFL framework where the FL
clients and the blockchain miners are the same devices, clients broadcast the
trained model updates to the blockchain network and then perform mining to
generate new blocks. Since each client has a limited amount of computing
resources, the problem of allocating computing resources into training and
mining needs to be carefully addressed. In this paper, we design an incentive
mechanism to assign each client appropriate rewards for training and mining,
and then the client will determine the amount of computing power to allocate
for each subtask based on these rewards using the two-stage Stackelberg game.
After analyzing the utilities of the model owner (MO) (i.e., the BCFL task
publisher) and clients, we transform the game model into two optimization
problems, which are sequentially solved to derive the optimal strategies for
both the MO and clients. Further, considering the fact that local training
related information of each client may not be known by others, we extend the
game model with analytical solutions to the incomplete information scenario.
Extensive experimental results demonstrate the validity of our proposed
schemes.
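The two-stage structure described in the abstract (the model owner leads by setting rewards, the clients follow by splitting their compute between training and mining) can be illustrated with backward induction. The utility functions, parameter names, and values below are illustrative assumptions for the sketch, not the paper's actual formulation:

```python
# Hedged sketch of a two-stage Stackelberg game for BCFL resource
# allocation. Stage 2 (followers): each client splits its compute
# capacity between training and mining to maximize a toy concave
# utility. Stage 1 (leader): the model owner (MO) anticipates these
# best responses and searches over reward splits within its budget.
# All utility forms here are assumptions, not the paper's formulas.
import math

def client_best_response(r_train, r_mine, capacity, cost=0.1):
    """Follower stage: grid-search the training share that maximizes a
    toy client utility with diminishing returns on each subtask."""
    best_x, best_u = 0.0, float("-inf")
    for k in range(101):
        x = capacity * k / 100            # compute devoted to training
        m = capacity - x                  # remainder goes to mining
        u = (r_train * math.log(1 + x)
             + r_mine * math.log(1 + m)
             - cost * capacity)
        if u > best_u:
            best_x, best_u = x, u
    return best_x

def mo_optimal_rewards(capacities, budget=10.0, value=5.0):
    """Leader stage: the MO searches over reward splits, plugging in the
    clients' anticipated best responses (backward induction)."""
    best = None
    for k in range(1, 100):
        r_train = budget * k / 100
        r_mine = budget - r_train
        total_train = sum(client_best_response(r_train, r_mine, c)
                          for c in capacities)
        mo_utility = value * math.log(1 + total_train) - budget
        if best is None or mo_utility > best[0]:
            best = (mo_utility, r_train, r_mine)
    return best

u, rt, rm = mo_optimal_rewards([4.0, 6.0, 8.0])
print(f"MO utility={u:.2f}, training reward={rt:.2f}, mining reward={rm:.2f}")
```

Solving the follower stage first and substituting it into the leader's problem mirrors the paper's approach of transforming the game into two sequentially solved optimization problems.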
Related papers
- Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z)
- Secure Decentralized Learning with Blockchain [13.795131629462798]
Federated Learning (FL) is a well-known paradigm of distributed machine learning on mobile and IoT devices.
To avoid the single point of failure problem in FL, decentralized learning (DFL) has been proposed to use peer-to-peer communication for model aggregation.
arXiv Detail & Related papers (2023-10-10T23:45:17Z)
- Multi-dimensional Data Quick Query for Blockchain-based Federated Learning [6.499393722730449]
We propose a novel data structure named MerkleRB-Tree to improve the query efficiency within each block.
In detail, we leverage Minimal Bounding Rectangles (MBRs) and bloom filters for the query process of multi-dimensional continuous-valued attributes and discrete-valued attributes, respectively.
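The pruning idea summarized above can be sketched as a per-block index: an MBR check rejects blocks whose continuous attributes cannot match, and a bloom filter rejects blocks that cannot contain the queried discrete values. This is an illustrative sketch, not the paper's MerkleRB-Tree implementation, and all class and method names are assumptions:

```python
# Illustrative per-block index combining an MBR over continuous
# attributes with a bloom filter over discrete attributes. A False
# answer lets the query skip the block; True means "maybe present"
# (bloom filters allow false positives but never false negatives).
import hashlib

class BlockIndex:
    def __init__(self, n_bits=128, n_hashes=3):
        self.bits = [False] * n_bits
        self.n_hashes = n_hashes
        self.mbr = None  # (min_vec, max_vec) over continuous attributes

    def _positions(self, value):
        # derive n_hashes bit positions from salted SHA-256 digests
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{value}".encode()).hexdigest()
            yield int(h, 16) % len(self.bits)

    def insert(self, continuous, discrete):
        if self.mbr is None:
            self.mbr = (list(continuous), list(continuous))
        else:
            lo, hi = self.mbr
            for d, v in enumerate(continuous):
                lo[d] = min(lo[d], v)
                hi[d] = max(hi[d], v)
        for v in discrete:
            for p in self._positions(v):
                self.bits[p] = True

    def may_contain(self, continuous, discrete):
        if self.mbr is None:
            return False
        lo, hi = self.mbr
        if any(not (lo[d] <= v <= hi[d]) for d, v in enumerate(continuous)):
            return False  # outside the MBR: block cannot match
        return all(all(self.bits[p] for p in self._positions(v))
                   for v in discrete)

idx = BlockIndex()
idx.insert(continuous=[1.5, 20.0], discrete=["alice"])
idx.insert(continuous=[3.0, 25.0], discrete=["bob"])
print(idx.may_contain([2.0, 22.0], ["alice"]))  # True: inside MBR, in filter
print(idx.may_contain([9.0, 22.0], ["alice"]))  # False: outside MBR
```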
arXiv Detail & Related papers (2023-09-27T01:35:11Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
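The round structure summarized above (broadcast, block-generation competition, aggregation from the block) can be sketched as a toy loop. This is an assumption-laden illustration, not the paper's protocol; the mining competition is replaced by a random winner and local training by a random perturbation:

```python
# Toy sketch of one BLADE-FL round: every client broadcasts its model,
# one client "wins" block generation (stand-in for mining), and all
# clients aggregate the models recorded in that block before the next
# round of local training. Models are scalars for illustration only.
import random

def local_train(model, lr=0.1):
    # stand-in for a real local gradient update
    return model + lr * random.uniform(-1, 1)

def blade_fl_round(models):
    broadcast = list(models)                 # clients broadcast their models
    winner = random.randrange(len(models))   # stand-in for the mining race
    block = {"generator": winner, "models": broadcast}
    # FedAvg-style mean over the models recorded in the generated block
    aggregated = sum(block["models"]) / len(block["models"])
    return [local_train(aggregated) for _ in models], block

random.seed(0)
models = [0.0, 1.0, 2.0]
models, block = blade_fl_round(models)
print(block["generator"], models)
```

A lazy client in this picture would skip `local_train` and rebroadcast a copy of another client's model, which is exactly the training deficiency the paper analyzes.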
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL has a good performance in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
It gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noises to conceal their cheating behaviors.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
- Resource Management for Blockchain-enabled Federated Learning: A Deep Reinforcement Learning Approach [54.29213445674221]
Blockchain-enabled Federated Learning (BFL) enables mobile devices to collaboratively train neural network models required by a Machine Learning Model Owner (MLMO).
The issue of BFL is that the mobile devices have energy and CPU constraints that may reduce the system lifetime and training efficiency.
We propose to use Deep Reinforcement Learning (DRL) to derive the optimal decisions for the MLMO.
arXiv Detail & Related papers (2020-04-08T16:29:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.