Latency Optimization for Blockchain-Empowered Federated Learning in
Multi-Server Edge Computing
- URL: http://arxiv.org/abs/2203.09670v1
- Date: Fri, 18 Mar 2022 00:38:29 GMT
- Title: Latency Optimization for Blockchain-Empowered Federated Learning in
Multi-Server Edge Computing
- Authors: Dinh C. Nguyen, Seyyedali Hosseinalipour, David J. Love, Pubudu N.
Pathirana, Christopher G. Brinton
- Abstract summary: In this paper, we study a new latency optimization problem for Blockchain-based federated learning (BFL) in multi-server edge computing.
In this system model, distributed mobile devices (MDs) communicate with a set of edge servers (ESs) to handle both machine learning (ML) model training and block mining simultaneously.
- Score: 24.505675843652448
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we study a new latency optimization problem for
Blockchain-based federated learning (BFL) in multi-server edge computing. In
this system model, distributed mobile devices (MDs) communicate with a set of
edge servers (ESs) to handle both machine learning (ML) model training and
block mining simultaneously. To assist the ML model training for
resource-constrained MDs, we develop an offloading strategy that enables MDs to
transmit their data to one of the associated ESs. We then propose a new
decentralized ML model aggregation solution at the edge layer based on a
consensus mechanism to build a global ML model via peer-to-peer (P2P)-based
Blockchain communications. We then formulate latency-aware BFL as an
optimization problem that minimizes the system latency via joint consideration
of the data offloading decisions, MDs' transmit power, channel bandwidth
allocation for MDs' data offloading, MDs' computational resource allocation,
and hash power allocation. To address the mixed action space of discrete
offloading and continuous allocation variables, we propose a novel deep
reinforcement learning scheme with a holistic design of a parameterized
advantage actor-critic (A2C)
algorithm. Additionally, we theoretically characterize the convergence
properties of the proposed BFL system in terms of the aggregation delay,
mini-batch size, and number of P2P communication rounds. Our subsequent
numerical evaluation demonstrates the superior performance of our proposed
scheme over existing approaches in terms of model training efficiency,
convergence rate, and system latency.
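A minimal sketch of the kind of hybrid-action actor-critic this calls for is given below, assuming a PyTorch implementation: one categorical head picks the discrete offloading target while Gaussian heads emit the continuous power, bandwidth, CPU, and hash-power allocations. The state layout, layer sizes, sigmoid squashing, and update rule are illustrative assumptions, not the authors' exact parameterized A2C design.

# Sketch of a hybrid-action actor-critic for the joint offloading and
# allocation decision. Layer sizes and the sigmoid squashing of the
# continuous allocations are illustrative assumptions.
import torch
import torch.nn as nn

class HybridA2C(nn.Module):
    def __init__(self, state_dim, n_servers, n_cont):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU())
        # Discrete head: which edge server an MD offloads its data to.
        self.offload_logits = nn.Linear(128, n_servers)
        # Continuous heads: transmit power, bandwidth share, CPU and hash
        # power allocations, modeled as Gaussians squashed into (0, 1).
        self.mu = nn.Linear(128, n_cont)
        self.log_std = nn.Parameter(torch.zeros(n_cont))
        self.value = nn.Linear(128, 1)  # critic for the advantage term

    def forward(self, state):
        h = self.backbone(state)
        return self.offload_logits(h), self.mu(h), self.value(h)

def act(model, state):
    logits, mu, value = model(state)
    d = torch.distributions.Categorical(logits=logits)
    c = torch.distributions.Normal(mu, model.log_std.exp())
    offload = d.sample()                # discrete offloading decision
    raw = c.sample()
    alloc = torch.sigmoid(raw)          # continuous allocations in (0, 1)
    logp = d.log_prob(offload) + c.log_prob(raw).sum(-1)
    return offload, alloc, logp, value.squeeze(-1)

def a2c_update(opt, logp, value, reward, next_value, gamma=0.99):
    # Advantage = reward (e.g., negative system latency) plus the
    # bootstrapped next-state value, minus the critic's estimate; the
    # critic is regressed onto the same target.
    advantage = reward + gamma * next_value.detach() - value
    loss = (-logp * advantage.detach() + 0.5 * advantage.pow(2)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Example: 20-dim state, 4 candidate ESs, 4 continuous allocation knobs.
model = HybridA2C(state_dim=20, n_servers=4, n_cont=4)
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
offload, alloc, logp, value = act(model, torch.randn(20))
a2c_update(opt, logp, value, reward=torch.tensor(-1.0),
           next_value=torch.tensor(0.0))

A full training loop would roll this policy out against a simulator of the BFL latency model, feeding back the negative end-to-end latency as the reward.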
Related papers
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z)
- A Blockchain-empowered Multi-Aggregator Federated Learning Architecture in Edge Computing with Deep Reinforcement Learning Optimization [8.082460100928358]
Federated learning (FL) is emerging as a sought-after distributed machine learning architecture.
With advancements in network infrastructure, FL has been seamlessly integrated into edge computing.
While blockchain technology promises to bolster security, practical deployment on resource-constrained edge devices remains a challenge.
arXiv Detail & Related papers (2023-10-14T20:47:30Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable problem, in which we provide closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Asynchronous Multi-Model Dynamic Federated Learning over Wireless Networks: Theory, Modeling, and Optimization [20.741776617129208]
Federated learning (FL) has emerged as a key technique for distributed machine learning (ML).
We first formulate rectangular scheduling steps and functions to capture the impact of system parameters on learning performance.
Our analysis sheds light on the joint impact of device training variables and asynchronous scheduling decisions.
arXiv Detail & Related papers (2023-05-22T21:39:38Z)
- Efficient Parallel Split Learning over Resource-constrained Wireless Edge Networks [44.37047471448793]
In this paper, we advocate the integration of the edge computing paradigm and parallel split learning (PSL).
We propose an innovative PSL framework, namely efficient parallel split learning (EPSL), to accelerate model training.
We show that the proposed EPSL framework significantly decreases the training latency needed to achieve a target accuracy.
arXiv Detail & Related papers (2023-03-26T16:09:48Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting; a minimal sketch of such a weighting follows.
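One plausible (hypothetical) form of such a weighting is to discount each client update by its staleness before the periodic average; the exponential decay and blending factor below are assumptions for illustration, not the rule derived in the cited paper.

# Hypothetical "age-aware" weighting for asynchronous FL aggregation:
# each client update is discounted by how many rounds old it is before
# the periodic server-side average. The decay form is an assumption.
import numpy as np

def age_aware_aggregate(global_model, updates, ages, decay=0.5, mix=0.5):
    """updates: per-client parameter vectors; ages: rounds elapsed since
    each update's base model was broadcast (0 = fresh)."""
    w = decay ** np.asarray(ages, dtype=float)
    w /= w.sum()
    client_avg = sum(wi * u for wi, u in zip(w, updates))
    # Blend into the current global model (periodic aggregation) rather
    # than overwriting it, so stale stragglers cannot dominate.
    return (1.0 - mix) * global_model + mix * client_avg

# Example: with decay=0.5 a fresh update gets 4x the weight of one that
# is two rounds stale.
print(age_aware_aggregate(np.zeros(3),
                          [np.ones(3), 2 * np.ones(3)],
                          ages=[0, 2]))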
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Scheduling Policy and Power Allocation for Federated Learning in NOMA Based MEC [21.267954799102874]
Federated learning (FL) is a widely pursued machine learning technique that can train a model centrally while keeping data distributed.
We propose a new scheduling policy and power allocation scheme using non-orthogonal multiple access (NOMA) settings to maximize the weighted sum data rate (sketched below).
Simulation results show that the proposed scheduling and power allocation scheme can help achieve a higher FL testing accuracy in NOMA-based wireless networks.
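For context, the weighted sum data rate in uplink NOMA is usually evaluated under successive interference cancellation (SIC): users are decoded from strongest to weakest channel, so each user sees only the not-yet-decoded users as interference. The sketch below evaluates that objective; the decoding order, unit bandwidth, and noise level are illustrative assumptions and do not reproduce the cited paper's scheduling or power-allocation algorithm.

# Weighted sum data rate for an uplink NOMA cluster under SIC. The
# descending-gain decoding order and parameter values are illustrative
# assumptions; this sketches the objective, not the paper's algorithm.
import numpy as np

def weighted_sum_rate(power, gain, weight, bandwidth=1.0, noise=1e-9):
    """power/gain/weight: per-user arrays; returns the weighted sum rate."""
    order = np.argsort(-gain)              # decode strongest user first
    rate = np.zeros_like(power, dtype=float)
    for i, u in enumerate(order):
        # Users decoded later remain as interference for user u.
        residual = sum(power[v] * gain[v] for v in order[i + 1:])
        sinr = power[u] * gain[u] / (residual + noise)
        rate[u] = bandwidth * np.log2(1.0 + sinr)
    return float(np.dot(weight, rate))

# Example: three users with unequal channel gains and uniform weights.
print(weighted_sum_rate(np.array([0.1, 0.1, 0.1]),
                        np.array([1e-6, 5e-7, 1e-7]),
                        np.array([1.0, 1.0, 1.0])))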
arXiv Detail & Related papers (2020-06-21T23:07:41Z)
- Resource Management for Blockchain-enabled Federated Learning: A Deep Reinforcement Learning Approach [54.29213445674221]
Blockchain-enabled Federated Learning (BFL) enables mobile devices to collaboratively train neural network models required by a Machine Learning Model Owner (MLMO).
The issue of BFL is that the mobile devices have energy and CPU constraints that may reduce the system lifetime and training efficiency.
We propose to use Deep Reinforcement Learning (DRL) to derive the optimal decisions for the MLMO.
arXiv Detail & Related papers (2020-04-08T16:29:19Z)
- Joint Parameter-and-Bandwidth Allocation for Improving the Efficiency of Partitioned Edge Learning [73.82875010696849]
Machine learning algorithms are deployed at the network edge for training artificial intelligence (AI) models.
This paper focuses on the novel joint design of parameter (computation load) allocation and bandwidth allocation.
arXiv Detail & Related papers (2020-03-10T05:52:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.