Edge Intelligence for Energy-efficient Computation Offloading and
Resource Allocation in 5G Beyond
- URL: http://arxiv.org/abs/2011.08442v2
- Date: Wed, 18 Nov 2020 02:48:08 GMT
- Title: Edge Intelligence for Energy-efficient Computation Offloading and
Resource Allocation in 5G Beyond
- Authors: Yueyue Dai, Ke Zhang, Sabita Maharjan, and Yan Zhang
- Abstract summary: 5G beyond is an end-edge-cloud orchestrated network that can exploit heterogeneous capabilities of the end devices, edge servers, and the cloud.
In multi-user wireless networks, diverse application requirements and the availability of multiple radio access modes for communication among devices make it challenging to design an optimal computation offloading scheme.
Deep Reinforcement Learning (DRL) is an emerging technique for addressing this issue when only limited and imprecise network information is available.
- Score: 7.953533529450216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 5G beyond is an end-edge-cloud orchestrated network that can exploit
heterogeneous capabilities of the end devices, edge servers, and the cloud and
thus has the potential to enable computation-intensive and delay-sensitive
applications via computation offloading. However, in multi-user wireless
networks, diverse application requirements and the availability of multiple
radio access modes for communication among devices make it challenging to
design an optimal computation offloading scheme. In addition, obtaining
complete network information, including the wireless channel state, available
bandwidth, and computation resources, is a major challenge. Deep Reinforcement
Learning (DRL) is an emerging technique for addressing this issue when only
limited and imprecise network information is available. In this paper, we utilize
DRL to design an optimal computation offloading and resource allocation
strategy for minimizing system energy consumption. We first present a
multi-user end-edge-cloud orchestrated network where all devices and base
stations have computation capabilities. Then, we formulate the joint
computation offloading and resource allocation problem as a Markov Decision
Process (MDP) and propose a new DRL algorithm to minimize system energy
consumption. Numerical results based on a real-world dataset demonstrate that
the proposed DRL-based algorithm significantly outperforms the benchmark
policies in terms of system energy consumption. Extensive simulations show that
the learning rate, discount factor, and number of devices have a considerable
influence on the performance of the proposed algorithm.
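To make the formulation concrete, the sketch below casts a joint computation offloading problem as an MDP whose reward is the negative system energy, in the spirit of the abstract. It is a minimal illustration only: the state features, the energy and channel models, and every constant are assumptions for exposition, not parameters taken from the paper.

```python
import numpy as np

# Toy end-edge-cloud offloading environment. Every constant below
# (CPU energy coefficient, transmit power, bandwidth, task sizes) is an
# illustrative assumption, not a value taken from the paper.

LOCAL, EDGE, CLOUD = 0, 1, 2              # discrete offloading actions

class OffloadingEnv:
    def __init__(self, num_devices=5, seed=0):
        self.num_devices = num_devices
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # State: per-device task size, required CPU cycles, and channel gain.
        self.task_bits = self.rng.uniform(0.5e6, 2.0e6, self.num_devices)    # bits
        self.task_cycles = self.rng.uniform(0.5e9, 1.5e9, self.num_devices)  # cycles
        self.channel_gain = self.rng.rayleigh(1.0, self.num_devices)
        return self._state()

    def _state(self):
        return np.concatenate([self.task_bits, self.task_cycles, self.channel_gain])

    def step(self, actions):
        """actions[i] in {LOCAL, EDGE, CLOUD}; reward is negative system energy."""
        energy = 0.0
        for i, a in enumerate(actions):
            if a == LOCAL:
                # Local execution energy: kappa * f^2 * cycles (dynamic CMOS power model).
                energy += 1e-27 * (1e9) ** 2 * self.task_cycles[i]
            else:
                # Offloading energy: transmit power * bits / achievable rate.
                rate = 10e6 * np.log2(1.0 + 10.0 * self.channel_gain[i])     # bit/s
                energy += 0.1 * self.task_bits[i] / rate
                if a == CLOUD:
                    energy += 0.05       # extra backhaul energy for cloud execution
        next_state = self.reset()        # new tasks arrive independently each slot
        return next_state, -energy
```

A DRL agent that maps this state to per-device offloading actions and maximizes the discounted sum of rewards therefore learns an energy-minimizing policy.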
Related papers
- Computation Rate Maximization for Wireless Powered Edge Computing With Multi-User Cooperation [10.268239987867453]
This study considers a wireless-powered mobile edge computing system that includes a hybrid access point equipped with a computing unit and multiple Internet of Things (IoT) devices.
We propose a novel multi-user cooperation scheme to improve computation performance, where collaborative clusters are dynamically formed.
Specifically, we aim to maximize the weighted sum computation rate (WSCR) of all the IoT devices in the network.
arXiv Detail & Related papers (2024-01-22T05:22:19Z)
- Slimmable Encoders for Flexible Split DNNs in Bandwidth and Resource Constrained IoT Systems [12.427821850039448]
We propose a novel split computing approach based on slimmable ensemble encoders.
The key advantage of our design is the ability to adapt computational load and transmitted data size in real-time with minimal overhead and time.
Our model outperforms existing solutions in terms of compression efficacy and execution time, especially in the context of weak mobile devices.
arXiv Detail & Related papers (2023-06-22T06:33:12Z)
- Computation Offloading and Resource Allocation in F-RANs: A Federated Deep Reinforcement Learning Approach [67.06539298956854]
Fog radio access network (F-RAN) is a promising technology in which user mobile devices (MDs) can offload computation tasks to nearby fog access points (F-APs).
arXiv Detail & Related papers (2022-06-13T02:19:20Z)
- Pervasive Machine Learning for Smart Radio Environments Enabled by Reconfigurable Intelligent Surfaces [56.35676570414731]
The emerging technology of Reconfigurable Intelligent Surfaces (RISs) is provisioned as an enabler of smart wireless environments.
RISs offer a highly scalable, low-cost, hardware-efficient, and almost energy-neutral solution for dynamic control of the propagation of electromagnetic signals over the wireless medium.
One of the major challenges with the envisioned dense deployment of RISs in such reconfigurable radio environments is the efficient configuration of multiple metasurfaces.
arXiv Detail & Related papers (2022-05-08T06:21:33Z)
- Computational Intelligence and Deep Learning for Next-Generation Edge-Enabled Industrial IoT [51.68933585002123]
We investigate how to deploy computational intelligence and deep learning (DL) in edge-enabled industrial IoT networks.
In this paper, we propose a novel multi-exit-based federated edge learning (ME-FEEL) framework.
In particular, the proposed ME-FEEL can achieve an accuracy gain of up to 32.7% in industrial IoT networks with severely limited resources.
arXiv Detail & Related papers (2021-10-28T08:14:57Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
The combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Deep Reinforcement Learning Based Mobile Edge Computing for Intelligent Internet of Things [10.157016543999045]
We devise the system by proposing an intelligent offloading strategy based on a deep reinforcement learning algorithm.
A Deep Q-Network is used to automatically learn the offloading decision in order to optimize system performance.
A neural network (NN) is trained to predict the offloading action, where the training data are generated from the environment.
In particular, the system cost in terms of latency and energy consumption can be reduced significantly by the proposed deep-reinforcement-learning-based algorithm (a minimal DQN sketch in this spirit appears after this list).
arXiv Detail & Related papers (2020-08-01T11:45:54Z)
- Learning Centric Power Allocation for Edge Intelligence [84.16832516799289]
Edge intelligence has been proposed, which collects distributed data and performs machine learning at the edge.
This paper proposes a learning centric power allocation (LCPA) method, which allocates radio resources based on an empirical classification error model.
Experimental results show that the proposed LCPA algorithm significantly outperforms other power allocation algorithms.
arXiv Detail & Related papers (2020-07-21T07:02:07Z)
- A Machine Learning Approach for Task and Resource Allocation in Mobile Edge Computing Based Networks [108.57859531628264]
A joint task, spectrum, and transmit power allocation problem is investigated for a wireless network.
The proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1% compared to the standard Q-learning algorithm.
arXiv Detail & Related papers (2020-07-20T13:46:42Z)
- Multi-agent Reinforcement Learning for Resource Allocation in IoT networks with Edge Computing [16.129649374251088]
It is challenging for end users to offload computation due to their massive requirements for spectrum and resources.
In this paper, we investigate offloading mechanism with resource allocation in IoT edge computing networks by formulating it as a game.
arXiv Detail & Related papers (2020-04-05T20:59:20Z)
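Complementing the "Deep Reinforcement Learning Based Mobile Edge Computing for Intelligent Internet of Things" entry above, the sketch below shows how a Deep Q-Network can learn a binary local-vs-offload decision from environment-generated experience. It is a generic, illustrative agent: the gym-like environment interface (reset() and step(action) returning the next state and a reward), the network width, and all hyperparameters are assumptions, not details from that paper.

```python
import random
from collections import deque
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small Q-network mapping a state vector to one Q-value per offloading action."""
    def __init__(self, state_dim, num_actions):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, x):
        return self.layers(x)

def train_dqn(env, state_dim, num_actions=2, steps=5000,
              gamma=0.9, lr=1e-3, epsilon=0.1, batch_size=32):
    qnet = QNet(state_dim, num_actions)
    opt = torch.optim.Adam(qnet.parameters(), lr=lr)
    replay = deque(maxlen=10_000)

    state = torch.as_tensor(env.reset(), dtype=torch.float32)
    for _ in range(steps):
        # Epsilon-greedy choice between local execution and offloading.
        if random.random() < epsilon:
            action = random.randrange(num_actions)
        else:
            with torch.no_grad():
                action = int(qnet(state).argmax())

        next_state, reward = env.step(action)   # reward: e.g. negative energy/latency cost
        next_state = torch.as_tensor(next_state, dtype=torch.float32)
        replay.append((state, action, float(reward), next_state))
        state = next_state

        if len(replay) >= batch_size:
            batch = random.sample(replay, batch_size)
            s = torch.stack([b[0] for b in batch])
            a = torch.tensor([b[1] for b in batch])
            r = torch.tensor([b[2] for b in batch])
            s2 = torch.stack([b[3] for b in batch])

            # One-step TD target: r + gamma * max_a' Q(s', a').
            with torch.no_grad():
                target = r + gamma * qnet(s2).max(dim=1).values
            q_sa = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(q_sa, target)

            opt.zero_grad()
            loss.backward()
            opt.step()
    return qnet
```

Extending the action set to {local, edge, cloud} per device, as in the environment sketched after the abstract above, only changes num_actions and how the chosen actions are passed to the environment.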