Resource Management for Blockchain-enabled Federated Learning: A Deep
Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2004.04104v2
- Date: Fri, 1 May 2020 05:51:28 GMT
- Title: Resource Management for Blockchain-enabled Federated Learning: A Deep
Reinforcement Learning Approach
- Authors: Nguyen Quang Hieu, Tran The Anh, Nguyen Cong Luong, Dusit Niyato, Dong
In Kim, Erik Elmroth
- Abstract summary: Blockchain-enabled Federated Learning (BFL) enables mobile devices to collaboratively train neural network models required by a Machine Learning Model Owner (MLMO).
The issue of BFL is that the mobile devices have energy and CPU constraints that may reduce the system lifetime and training efficiency.
We propose to use Deep Reinforcement Learning (DRL) to derive the optimal decisions for the MLMO.
- Score: 54.29213445674221
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blockchain-enabled Federated Learning (BFL) enables mobile devices to
collaboratively train neural network models required by a Machine Learning
Model Owner (MLMO) while keeping data on the mobile devices. Then, the model
updates are stored in the blockchain in a decentralized and reliable manner.
However, the issue of BFL is that the mobile devices have energy and CPU
constraints that may reduce the system lifetime and training efficiency. The
other issue is that the training latency may increase due to the blockchain
mining process. To address these issues, the MLMO needs to (i) decide how much
data and energy the mobile devices use for the training and (ii) determine
the block generation rate to minimize the system latency, energy consumption,
and incentive cost while achieving the target accuracy for the model. Under the
uncertainty of the BFL environment, it is challenging for the MLMO to determine
the optimal decisions. We propose to use Deep Reinforcement Learning (DRL)
to derive the optimal decisions for the MLMO.
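As a rough illustration of the decision problem above, the following sketch casts the MLMO's choices (per-device data and energy budgets plus the block generation rate) as a reinforcement-learning environment. All dynamics, weights, and constants are invented for illustration; in the paper a DRL agent would be trained on such an environment in place of the random policy rolled out here.

```python
# A minimal sketch (not the paper's exact MDP): the BFL resource-management
# problem cast as an RL environment. Device count, dynamics, and all reward
# weights below are illustrative assumptions.
import numpy as np

class BFLEnv:
    """Toy environment: the MLMO picks per-device data/energy and a block rate."""

    def __init__(self, n_devices=3, target_accuracy=0.9, seed=0):
        self.n = n_devices
        self.target = target_accuracy
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.energy = self.rng.uniform(0.5, 1.0, self.n)  # device battery levels
        self.accuracy = 0.0                               # current model accuracy
        return self._state()

    def _state(self):
        return np.concatenate([self.energy, [self.accuracy]])

    def step(self, action):
        # action = (data units per device, energy units per device, block rate)
        data, energy, block_rate = action
        energy = np.minimum(energy, self.energy)          # cannot exceed battery
        self.energy -= energy

        # Illustrative dynamics: more data/energy -> larger accuracy gain,
        # higher block generation rate -> lower mining latency.
        gain = 0.05 * np.sum(data * energy)
        self.accuracy = min(1.0, self.accuracy + gain)
        latency = np.max(data) + 1.0 / max(block_rate, 1e-3)
        incentive = np.sum(data) + np.sum(energy)

        # Reward trades off latency, energy, and incentive cost (weights assumed).
        reward = -(1.0 * latency + 0.5 * np.sum(energy) + 0.2 * incentive)
        done = self.accuracy >= self.target or np.all(self.energy <= 0.05)
        return self._state(), reward, done

# Random-policy rollout; a DRL agent (e.g. a deep Q-network) would instead be
# trained to choose the actions.
env = BFLEnv()
state, total = env.reset(), 0.0
for _ in range(20):
    action = (np.random.uniform(0, 1, env.n),    # data units per device
              np.random.uniform(0, 0.2, env.n),  # energy units per device
              np.random.uniform(0.1, 1.0))       # block generation rate
    state, reward, done = env.step(action)
    total += reward
    if done:
        break
print(f"episode return: {total:.2f}, final accuracy: {env.accuracy:.2f}")
```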
Related papers
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z) - Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z) - MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT [87.4910758026772]
"Bigger the better" has been the predominant trend in recent Large Language Models (LLMs) development.
This paper explores the "less is more" paradigm by addressing the challenge of designing accurate yet efficient Small Language Models (SLMs) for resource constrained devices.
arXiv Detail & Related papers (2024-02-26T18:59:03Z) - The Implications of Decentralization in Blockchained Federated Learning: Evaluating the Impact of Model Staleness and Inconsistencies [2.6391879803618115]
We study the practical implications of outsourcing the orchestration of federated learning to a democratic setting such as in a blockchain.
Using simulation, we evaluate the blockchained FL operation by applying two different ML models on the well-known MNIST and CIFAR-10 datasets.
Our results show the high impact of model inconsistencies on model accuracy (up to a 35% decrease in prediction accuracy).
arXiv Detail & Related papers (2023-10-11T13:18:23Z) - Post Quantum Secure Blockchain-based Federated Learning for Mobile Edge
Computing [21.26290266786857]
We employ Federated Learning (FL) and prominent features of blockchain into Mobile Edge Computing architecture.
FL is advantageous for mobile devices with constrained connectivity since it only requires model updates, rather than raw data, to be delivered to a central point.
We propose a fully asynchronous Federated Learning framework referred to as BFL-MEC, in which the mobile clients evolve independently yet guarantee stability in the global learning process.
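As a generic illustration of asynchronous aggregation (not the BFL-MEC algorithm itself), the sketch below mixes late-arriving client models into a global model with a staleness-decayed weight; the function names and the mixing rule are assumptions.

```python
# A generic asynchronous-aggregation sketch with staleness weighting,
# purely illustrative; not the BFL-MEC update rule.
import numpy as np

def async_update(global_model, client_model, client_round, server_round, eta=0.5):
    """Mix a late-arriving client update into the global model.

    The mixing weight decays with staleness (server_round - client_round),
    so clients that trained on an old global model contribute less.
    """
    staleness = max(server_round - client_round, 0)
    alpha = eta / (1.0 + staleness)
    return (1.0 - alpha) * global_model + alpha * client_model

# Toy usage: three clients report back at different server rounds.
global_model = np.zeros(4)
updates = [(np.ones(4) * 1.0, 0, 0),   # fresh update
           (np.ones(4) * 2.0, 0, 3),   # stale by 3 rounds
           (np.ones(4) * 3.0, 2, 4)]   # stale by 2 rounds
for client_model, trained_at, now in updates:
    global_model = async_update(global_model, client_model, trained_at, now)
print(global_model)
```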
arXiv Detail & Related papers (2023-02-26T08:08:23Z) - How Much Does It Cost to Train a Machine Learning Model over Distributed
Data Sources? [4.222078489059043]
Federated learning allows devices to train a machine learning model without sharing their raw data.
Server-less FL approaches like gossip federated learning (GFL) and blockchain-enabled federated learning (BFL) have been proposed to mitigate the issues of centralized federated learning (CFL).
Compared with the CFL solution, GFL saves 18% of the training time, 68% of the energy, and 51% of the data to be shared, but it does not reach the accuracy level of CFL.
BFL represents a viable solution for implementing decentralized learning with a higher level of security, at the cost of extra energy usage and data sharing.
arXiv Detail & Related papers (2022-09-15T08:13:40Z) - Latency Optimization for Blockchain-Empowered Federated Learning in
Multi-Server Edge Computing [24.505675843652448]
In this paper, we study a new latency optimization problem for blockchain-based federated learning (BFL) in multi-server edge computing.
In this system model, distributed mobile devices (MDs) communicate with a set of edge servers (ESs) to handle both machine learning (ML) model training and block mining simultaneously.
arXiv Detail & Related papers (2022-03-18T00:38:29Z) - Incentive Mechanism Design for Joint Resource Allocation in
Blockchain-based Federated Learning [23.64441447666488]
We propose an incentive mechanism to assign each client appropriate rewards for training and mining.
We transform the Stackelberg game model into two optimization problems, which are sequentially solved to derive the optimal strategies for both the model owner and clients.
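To make the two-stage (leader-follower) solution idea concrete, here is a worked sketch with made-up quadratic client costs and a logarithmic owner valuation; it is not the paper's actual game, only an illustration of solving the follower's best response first and then the leader's problem.

```python
# A backward-induction sketch of a leader-follower (Stackelberg) setup with
# hypothetical utilities; the paper's game and utilities are not reproduced here.
import numpy as np

costs = np.array([1.0, 1.5, 2.0])   # hypothetical per-client effort costs

def best_response(reward_rate, cost):
    """Follower stage: client i maximises reward_rate*x - cost*x**2 over x >= 0."""
    return reward_rate / (2.0 * cost)

def owner_utility(reward_rate):
    """Leader stage: value of total client effort minus total payments."""
    efforts = best_response(reward_rate, costs)
    value = 4.0 * np.log1p(efforts.sum())     # concave value of contributions
    payment = reward_rate * efforts.sum()
    return value - payment

# Solve the leader's problem by a simple grid search over reward rates.
grid = np.linspace(0.01, 5.0, 500)
r_star = grid[np.argmax([owner_utility(r) for r in grid])]
print("optimal reward rate:", round(float(r_star), 3))
print("client efforts:", best_response(r_star, costs).round(3))
```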
arXiv Detail & Related papers (2022-02-18T02:19:26Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
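The round structure described in this summary can be sketched schematically as follows; local training and block mining are replaced by placeholders (noise injection and a random winner), so this illustrates the message flow rather than the BLADE-FL algorithm itself.

```python
# A schematic sketch of one BLADE-FL-style round: broadcast local models, one
# client wins block generation, everyone aggregates the models in that block.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 5, 8
models = [np.zeros(dim) for _ in range(n_clients)]  # initial local models

def local_train(model):
    # Placeholder for SGD on local data: a small random perturbation.
    return model + 0.1 * rng.standard_normal(model.shape)

def one_round(models):
    # 1) Each client trains locally and broadcasts its model.
    broadcast = [local_train(m) for m in models]
    # 2) Clients compete to mine; here the winner is chosen at random.
    winner = int(rng.integers(len(models)))
    block = {"miner": winner, "models": broadcast}
    # 3) Every client aggregates the models stored in the generated block
    #    before starting its next local training round.
    aggregate = np.mean(block["models"], axis=0)
    return [aggregate.copy() for _ in models], winner

for rnd in range(3):
    models, miner = one_round(models)
    print(f"round {rnd}: block mined by client {miner}")
```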
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - Wireless Communications for Collaborative Federated Learning [160.82696473996566]
Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
Google's seminal FL algorithm requires all devices to be directly connected with a central controller.
This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
arXiv Detail & Related papers (2020-06-03T20:00:02Z)