On Jointly Optimizing Partial Offloading and SFC Mapping: A Cooperative
Dual-agent Deep Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2205.09925v1
- Date: Fri, 20 May 2022 02:00:53 GMT
- Title: On Jointly Optimizing Partial Offloading and SFC Mapping: A Cooperative
Dual-agent Deep Reinforcement Learning Approach
- Authors: Xinhan Wang, Huanlai Xing, Fuhong Song, Shouxi Luo, Penglin Dai, and
Bowen Zhao
- Abstract summary: This paper studies the partial offloading and SFC mapping joint optimization (POSMJO) problem in an NFV-enabled MEC system.
The objective is to minimize the long-term average cost, which is a combination of execution delay, the MD's energy consumption, and the usage charge for edge computing.
We propose a cooperative dual-agent deep reinforcement learning (CDADRL) algorithm, where we design a framework enabling interaction between two agents.
- Score: 8.168647937560504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-access edge computing (MEC) and network function virtualization (NFV)
are promising technologies for supporting emerging IoT applications, especially
computation-intensive ones. In an NFV-enabled MEC environment, a service function
chain (SFC), i.e., a set of ordered virtual network functions (VNFs), can be
mapped on MEC servers. Mobile devices (MDs) can offload computation-intensive
applications, which can be represented by SFCs, fully or partially to MEC
servers for remote execution. This paper studies the partial offloading and SFC
mapping joint optimization (POSMJO) problem in an NFV-enabled MEC system, where
an incoming task can be partitioned into two parts, one for local execution and
the other for remote execution. The objective is to minimize the long-term
average cost, which is a combination of execution delay, the MD's energy
consumption, and the usage charge for edge computing. This problem consists of two
closely related decision-making steps, namely task partition and VNF placement,
making it highly complex and quite challenging. To address this, we propose a
cooperative dual-agent deep reinforcement learning (CDADRL) algorithm, where we
design a framework enabling interaction between two agents. Simulation results
show that the proposed algorithm outperforms three combinations of deep
reinforcement learning algorithms in terms of cumulative and average episodic
rewards, and it surpasses a number of baseline algorithms with respect to
execution delay, energy consumption, and usage charge.
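The abstract describes the joint decision structure but not the exact formulation. Below is a minimal, illustrative Python sketch of that structure under assumed quantities: one agent picks the fraction of a task to offload, a second agent picks a VNF/server placement, and both share a per-task cost that combines execution delay, MD energy consumption, and edge usage charge. The toy cost model, the weights W_DELAY, W_ENERGY, W_CHARGE, the constant NUM_SERVERS, and the random placeholder policies are assumptions for illustration only, not the paper's CDADRL algorithm or its system parameters.

import random

NUM_SERVERS = 3                                  # hypothetical number of MEC servers
W_DELAY, W_ENERGY, W_CHARGE = 1.0, 1.0, 1.0      # assumed cost weights (not from the paper)

def cost(partition_ratio, placement, task_size):
    """Toy cost model: delay, energy, and usage charge as simple functions of
    how much of the task is offloaded and which server hosts the VNFs."""
    local = (1.0 - partition_ratio) * task_size
    remote = partition_ratio * task_size
    delay = max(local / 2.0, remote / (4.0 + placement))  # local and remote parts run in parallel
    energy = 0.5 * local + 0.1 * remote                   # MD energy: local compute + transmission
    charge = 0.2 * remote                                  # usage charge for edge computing
    return W_DELAY * delay + W_ENERGY * energy + W_CHARGE * charge

def partition_agent(task_size):
    """Agent 1: choose the offloaded fraction of the task (placeholder random policy)."""
    return random.random()

def placement_agent(task_size, partition_ratio):
    """Agent 2: choose a VNF placement given the partition decision (placeholder random policy)."""
    return random.randrange(NUM_SERVERS)

def episode(num_tasks=100):
    total = 0.0
    for _ in range(num_tasks):
        task_size = random.uniform(1.0, 10.0)
        ratio = partition_agent(task_size)
        placement = placement_agent(task_size, ratio)
        step_cost = cost(ratio, placement, task_size)
        # In CDADRL both agents would be trained from reward = -step_cost;
        # here we only accumulate the cost to show the shared objective.
        total += step_cost
    return total / num_tasks

if __name__ == "__main__":
    print(f"average cost per task: {episode():.3f}")

In the actual CDADRL framework, each placeholder policy would be a trained deep RL agent, and the two agents would exchange information so that the partition and placement decisions are optimized cooperatively toward the same long-term average cost.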
Related papers
- FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs).
We show that our system and method can achieve 1.45 - 9.39x speedup compared to baseline methods while ensuring convergence.
arXiv Detail & Related papers (2024-10-16T16:13:19Z) - Computation Rate Maximization for Wireless Powered Edge Computing With Multi-User Cooperation [10.268239987867453]
This study considers a wireless-powered mobile edge computing system that includes a hybrid access point equipped with a computing unit and multiple Internet of Things (IoT) devices.
We propose a novel multi-user cooperation scheme to improve computation performance, where collaborative clusters are dynamically formed.
Specifically, we aim to maximize the weighted sum computation rate (WSCR) of all the IoT devices in the network.
arXiv Detail & Related papers (2024-01-22T05:22:19Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Adaptive DNN Surgery for Selfish Inference Acceleration with On-demand
Edge Resource [25.274288063300844]
Deep Neural Networks (DNNs) have significantly improved the accuracy of intelligent applications on mobile devices.
DNN surgery can enable real-time inference despite the computational limitations of mobile devices.
This paper introduces a novel Decentralized DNN Surgery (DDS) framework.
arXiv Detail & Related papers (2023-06-21T11:32:28Z) - Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated
Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z) - Computation Offloading and Resource Allocation in F-RANs: A Federated
Deep Reinforcement Learning Approach [67.06539298956854]
The fog radio access network (F-RAN) is a promising technology in which user mobile devices (MDs) can offload computation tasks to nearby fog access points (F-APs).
arXiv Detail & Related papers (2022-06-13T02:19:20Z) - Collaborative Intelligent Reflecting Surface Networks with Multi-Agent
Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z) - Federated Double Deep Q-learning for Joint Delay and Energy Minimization
in IoT networks [12.599009485247283]
We propose a federated deep reinforcement learning framework to solve a multi-objective optimization problem.
To enhance the learning speed of IoT devices (agents), we incorporate federated learning (FDL) at the end of each episode.
Our numerical results demonstrate the efficacy of our proposed federated DDQN framework in terms of learning speed.
arXiv Detail & Related papers (2021-04-02T18:41:59Z) - Deep Multi-Task Learning for Cooperative NOMA: System Design and
Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon the recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z) - Computation Offloading in Multi-Access Edge Computing Networks: A
Multi-Task Learning Approach [7.203439085947118]
Multi-access edge computing (MEC) has already shown its potential in enabling mobile devices to handle computation-intensive applications by offloading some tasks to a nearby access point (AP) integrated with a MEC server (MES).
However, due to the varying network conditions and the limited computation resources of the MES, the offloading decisions taken by a mobile device and the computational resources allocated by the MES may not achieve the lowest cost.
We propose a dynamic offloading framework for the MEC network, in which uplink non-orthogonal multiple access (NOMA) is used to enable multiple devices to upload their computation tasks simultaneously.
arXiv Detail & Related papers (2020-06-29T15:11:10Z)