Reinforcement Learning-Empowered Mobile Edge Computing for 6G Edge
Intelligence
- URL: http://arxiv.org/abs/2201.11410v1
- Date: Thu, 27 Jan 2022 10:02:54 GMT
- Title: Reinforcement Learning-Empowered Mobile Edge Computing for 6G Edge
Intelligence
- Authors: Peng Wei, Kun Guo, Ye Li, Jue Wang, Wei Feng, Shi Jin, Ning Ge, and
Ying-Chang Liang
- Abstract summary: Mobile edge computing (MEC) is considered a novel paradigm for computation-intensive and delay-sensitive tasks in fifth generation (5G) networks and beyond.
This paper provides a comprehensive research review on RL-enabled MEC and offers insight for development in this area.
- Score: 76.96698721128406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile edge computing (MEC) is considered a novel paradigm for
computation-intensive and delay-sensitive tasks in fifth generation (5G)
networks and beyond. However, its uncertainty, namely the dynamics and
randomness arising from the mobile device, wireless channel, and edge network
sides, results in high-dimensional, nonconvex, nonlinear, and NP-hard
optimization problems. Thanks to reinforcement learning (RL), an agent trained
by iteratively interacting with this dynamic and random environment can
intelligently obtain the optimal policy in MEC. Furthermore, its evolved
versions, such as deep RL (DRL), achieve faster convergence and higher
learning accuracy by parametrically approximating the large-scale state-action
space. This paper provides a comprehensive research review on RL-enabled MEC
and offers insight for development in this area. More importantly, the MEC
challenges arising from free mobility, dynamic channels, and distributed
services that can be solved by different kinds of RL algorithms are
identified, followed by how these RL solutions are applied in diverse mobile
applications. Finally, the open challenges are discussed to provide helpful
guidance for future research on RL training and learning in MEC.
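As a concrete illustration of the DRL idea sketched in the abstract, below is a minimal, hypothetical example (not code from the paper) of a DQN-style agent that learns a binary local-execution/offloading policy, with a small neural network parametrically approximating the Q-function over a continuous state. The state features (task size, channel gain, edge queue occupancy), the reward, and the toy environment dynamics are all assumptions made to keep the sketch self-contained.

```python
# Minimal sketch, not from the paper: a DQN-style agent for binary MEC
# offloading. State features, reward, and environment dynamics are assumed.
import random
import torch
import torch.nn as nn

class ToyOffloadEnv:
    """Each step presents one task; action 0 = run locally, 1 = offload."""
    def __init__(self):
        self.state = self._new_task()

    def _new_task(self):
        # (task size in Mb, normalized channel gain, edge queue occupancy)
        return torch.tensor([random.uniform(0.1, 1.0),
                             random.uniform(0.1, 1.0),
                             random.uniform(0.0, 1.0)])

    def step(self, action):
        task, channel, queue = self.state.tolist()
        local_delay = task / 0.3                 # fixed local CPU rate
        edge_delay = task / channel + queue      # uplink transmission + queueing
        reward = -(local_delay if action == 0 else edge_delay)
        self.state = self._new_task()
        return self.state, reward

env = ToyOffloadEnv()
q_net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.9, 0.1
state = env.state

for step in range(5000):
    # epsilon-greedy action over the approximated Q-values
    action = random.randrange(2) if random.random() < epsilon \
        else int(q_net(state).argmax())
    next_state, reward = env.step(action)
    # one-step TD target; replay buffer and target network omitted for brevity
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = (q_net(state)[action] - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    state = next_state
```

The Q-network here replaces a lookup table, which is what lets the agent cope with the large (here continuous) state space mentioned in the abstract.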
Related papers
- Beyond the Edge: An Advanced Exploration of Reinforcement Learning for Mobile Edge Computing, its Applications, and Future Research Trajectories [13.08054996040995]
Mobile Edge Computing (MEC) broadens the scope of computation and storage beyond the central network.
The advent of applications necessitating real-time, high-quality service presents several challenges, such as low latency, high data rate, reliability, efficiency, and security.
The paper proposes specific RL techniques to mitigate these issues and provides insights into their practical applications.
arXiv Detail & Related papers (2024-04-22T14:47:42Z)
- Towards Scalable Wireless Federated Learning: Challenges and Solutions [40.68297639420033]
Federated learning (FL) emerges as an effective distributed machine learning framework.
We discuss the challenges and solutions of achieving scalable wireless FL from the perspectives of both network design and resource orchestration.
arXiv Detail & Related papers (2023-10-08T08:55:03Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
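For intuition about the shared-backbone/multiple-prediction-heads design mentioned above, here is a rough, hypothetical sketch (not the authors' MEMTL implementation): one shared encoder feeds several heads whose outputs are ensembled by simple averaging. Layer sizes, the head count, and the averaging rule are assumptions.

```python
# Rough sketch, not the authors' MEMTL code: one shared backbone feeds several
# prediction heads whose outputs are ensembled by averaging. Layer sizes, head
# count, and the averaging rule are assumptions.
import torch
import torch.nn as nn

class MultiHeadNet(nn.Module):
    def __init__(self, in_dim=16, hidden=64, out_dim=4, num_heads=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, out_dim) for _ in range(num_heads)])

    def forward(self, x):
        z = self.backbone(x)                       # shared representation
        preds = torch.stack([head(z) for head in self.heads])
        return preds.mean(dim=0)                   # ensemble: average the heads

net = MultiHeadNet()
scores = net(torch.randn(8, 16))                   # e.g., scores per offloading option
```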
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
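As background for the entropy-plus-return objective mentioned above, the snippet below evaluates the standard entropy-regularized return that Soft Actor-Critic style methods maximize; the rewards, action probabilities, and temperature alpha are made-up placeholders, and this is not MARLIN's training code.

```python
# Illustrative only: the entropy-regularized return that Soft Actor-Critic
# style methods maximize, J = E[ sum_t gamma^t * (r_t + alpha * H(pi(.|s_t))) ].
# Rewards, action probabilities, and alpha below are made-up placeholders.
import math

gamma, alpha = 0.99, 0.2
rewards = [1.0, 0.5, 0.8]                              # r_t along one trajectory
action_probs = [[0.7, 0.3], [0.5, 0.5], [0.9, 0.1]]    # pi(.|s_t) at each step

soft_return = 0.0
for t, (r, probs) in enumerate(zip(rewards, action_probs)):
    entropy = -sum(p * math.log(p) for p in probs)     # H(pi(.|s_t))
    soft_return += gamma ** t * (r + alpha * entropy)  # reward plus entropy bonus

print(f"entropy-regularized return: {soft_return:.3f}")
```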
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Evolutionary Deep Reinforcement Learning for Dynamic Slice Management in O-RAN [11.464582983164991]
A new open radio access network (O-RAN), with distinguishing features such as a flexible design, disaggregated virtual and programmable components, and intelligent closed-loop control, has been developed.
O-RAN slicing is being investigated as a critical strategy for ensuring network quality of service (QoS) in the face of changing circumstances.
This paper introduces a novel framework able to manage the network slices through provisioned resources intelligently.
arXiv Detail & Related papers (2022-08-30T17:00:53Z)
- Pervasive Machine Learning for Smart Radio Environments Enabled by Reconfigurable Intelligent Surfaces [56.35676570414731]
The emerging technology of Reconfigurable Intelligent Surfaces (RISs) is provisioned as an enabler of smart wireless environments.
RISs offer a highly scalable, low-cost, hardware-efficient, and almost energy-neutral solution for dynamic control of the propagation of electromagnetic signals over the wireless medium.
One of the major challenges with the envisioned dense deployment of RISs in such reconfigurable radio environments is the efficient configuration of multiple metasurfaces.
arXiv Detail & Related papers (2022-05-08T06:21:33Z)
- Semantic-Aware Collaborative Deep Reinforcement Learning Over Wireless Cellular Networks [82.02891936174221]
Collaborative deep reinforcement learning (CDRL) algorithms, in which multiple agents coordinate over a wireless network, are a promising approach.
In this paper, a novel semantic-aware CDRL method is proposed to enable a group of untrained agents with semantically-linked DRL tasks to collaborate efficiently across a resource-constrained wireless cellular network.
arXiv Detail & Related papers (2021-11-23T18:24:47Z)
- Reconfigurable Intelligent Surface Assisted Mobile Edge Computing with Heterogeneous Learning Tasks [53.1636151439562]
Mobile edge computing (MEC) provides a natural platform for AI applications.
We present an infrastructure to perform machine learning tasks at an MEC with the assistance of a reconfigurable intelligent surface (RIS).
Specifically, we minimize the learning error of all participating users by jointly optimizing transmit power of mobile users, beamforming vectors of the base station, and the phase-shift matrix of the RIS.
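One common way to approach such a joint design is to update one block of variables at a time; the sketch below only illustrates that block-coordinate (alternating) pattern over the three variable groups named above, against a made-up smooth surrogate objective. It is not the optimization method, channel model, or learning-error objective used in the paper.

```python
# Illustrative block-coordinate (alternating) updates over three variable
# groups: user transmit powers p, base-station beamforming weights w, and RIS
# phase shifts theta. The channel H and the surrogate objective are placeholders.
import torch

torch.manual_seed(0)
H = torch.randn(8, 8)                                 # placeholder channel matrix
p = torch.rand(8, requires_grad=True)                 # transmit powers
w = torch.randn(8, requires_grad=True)                # beamforming weights
theta = torch.rand(8, requires_grad=True)             # RIS phase shifts (radians)

def surrogate_error(p, w, theta):
    # Stand-in for the learning error: push the effective received signal
    # toward a unit target. Purely illustrative.
    effective = (H @ (w * torch.cos(theta))) * p
    return ((effective - 1.0) ** 2).mean()

for it in range(200):
    for var in (p, w, theta):                         # update one block at a time
        loss = surrogate_error(p, w, theta)
        grad, = torch.autograd.grad(loss, var)
        with torch.no_grad():
            var -= 0.05 * grad

print(f"final surrogate error: {float(surrogate_error(p, w, theta)):.4f}")
```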
arXiv Detail & Related papers (2020-12-25T07:08:50Z)
- Deep Reinforcement Learning for Adaptive Network Slicing in 5G for Intelligent Vehicular Systems and Smart Cities [19.723551683930776]
We develop a network slicing model based on a cluster of fog nodes (FNs) coordinated with an edge controller (EC).
For each service request in a cluster, the EC decides which FN should execute the task and serve the request locally at the edge, or whether to reject the task and refer it to the cloud.
We propose a deep reinforcement learning (DRL) solution to adaptively learn the optimal slicing policy.
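To illustrate how the EC's per-request decision can be cast for DRL, the hypothetical sketch below enumerates a discrete action space with one action per FN plus a "refer to cloud" action and scores the actions with a small policy network; the state features and network sizes are assumptions, not details from the paper.

```python
# Hypothetical sketch: the EC's per-request decision cast as a discrete RL
# action space (one action per fog node, plus one "refer to cloud" action).
# State features and network sizes are assumptions, not details from the paper.
import torch
import torch.nn as nn

NUM_FNS = 4
NUM_ACTIONS = NUM_FNS + 1                  # serve at FN 0..3, or refer to cloud
STATE_DIM = NUM_FNS + 2                    # per-FN load + request size + priority

policy_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_ACTIONS))

state = torch.rand(STATE_DIM)              # placeholder observation at the EC
action = int(policy_net(state).argmax())   # greedy slicing decision
target = "cloud" if action == NUM_FNS else f"fog node {action}"
print(f"request routed to {target}")
```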
arXiv Detail & Related papers (2020-10-19T23:30:08Z)
- Reconfigurable Intelligent Surface Assisted Multiuser MISO Systems Exploiting Deep Reinforcement Learning [21.770491711632832]
The reconfigurable intelligent surface (RIS) has been speculated as one of the key enabling technologies for future sixth generation (6G) wireless communication systems.
In this paper, we investigate the joint design of the transmit beamforming matrix at the base station and the phase shift matrix at the RIS, by leveraging recent advances in deep reinforcement learning (DRL).
The proposed algorithm is not only able to learn from the environment and gradually improve its behavior, but also achieves performance comparable to two state-of-the-art benchmarks.
arXiv Detail & Related papers (2020-02-24T04:28:44Z)