Evolutionary Deep Reinforcement Learning for Dynamic Slice Management in O-RAN
- URL: http://arxiv.org/abs/2208.14394v1
- Date: Tue, 30 Aug 2022 17:00:53 GMT
- Title: Evolutionary Deep Reinforcement Learning for Dynamic Slice Management in O-RAN
- Authors: Fatemeh Lotfi, Omid Semiari, Fatemeh Afghah
- Abstract summary: A new open radio access network (O-RAN) architecture was developed, with distinguishing features such as a flexible design; disaggregated, virtual, and programmable components; and intelligent closed-loop control.
O-RAN slicing is being investigated as a critical strategy for ensuring network quality of service (QoS) in the face of changing circumstances.
This paper introduces a novel framework able to manage network slices intelligently through provisioned resources.
- Score: 11.464582983164991
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Next-generation wireless networks are required to satisfy a variety of services and criteria concurrently. To address these increasingly strict requirements, the open radio access network (O-RAN) was developed, with distinguishing features such as a flexible design; disaggregated, virtual, and programmable components; and intelligent closed-loop control. O-RAN slicing is being investigated as a critical strategy for ensuring network quality of service (QoS) in the face of changing circumstances. However, distinct network slices must be dynamically controlled to avoid service level agreement (SLA) violations caused by rapid changes in the environment. Therefore, this paper introduces a novel framework able to manage network slices intelligently through provisioned resources. Because wireless environments are diverse and heterogeneous, machine learning approaches require sufficient exploration to handle the harshest situations in the network and to converge quickly. To address this, a solution based on evolutionary deep reinforcement learning (EDRL) is proposed to accelerate and optimize the slice-management learning process in the RAN intelligent controller (RIC) modules. To this end, O-RAN slicing is represented as a Markov decision process (MDP), which is then solved for the resource allocation that meets service demand using the EDRL approach. In terms of meeting service demands, simulation results show that the proposed approach outperforms the DRL baseline by 62.2%.
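The abstract frames slicing as an MDP solved with an evolutionary variant of DRL. As a rough illustration only, the sketch below sets up a toy slicing MDP (per-slice demand as the state, a resource-block split as the action, an SLA-gap penalty as the reward) and runs an evolutionary outer loop with a crude finite-difference refinement standing in for the DRL gradient update; all sizes, the reward shape, and the policy form are hypothetical and not taken from the paper.

```python
# Minimal, hypothetical sketch of an evolutionary + gradient-refined policy
# search for O-RAN slice resource allocation. Not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

N_SLICES = 3          # e.g. eMBB, URLLC, mMTC
N_RBS = 25            # resource blocks to split among slices
POP_SIZE = 8          # evolutionary population of policy parameter vectors
OBS_DIM = N_SLICES    # observed per-slice traffic demand (normalised)

def sample_demand():
    """Hypothetical per-slice demand, as fractions of total capacity."""
    return rng.dirichlet(np.ones(N_SLICES))

def policy_action(theta, obs):
    """Linear-softmax policy: map observed demand to a resource-block split."""
    logits = theta.reshape(N_SLICES, OBS_DIM) @ obs
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return np.floor(probs * N_RBS)        # RBs granted to each slice

def reward(allocation, demand):
    """Negative SLA gap: penalise slices that get fewer RBs than they need."""
    needed = demand * N_RBS
    return -np.sum(np.maximum(needed - allocation, 0.0))

def evaluate(theta, episodes=32):
    return float(np.mean([reward(policy_action(theta, d), d)
                          for d in (sample_demand() for _ in range(episodes))]))

# Evolutionary outer loop: evaluate a population of policies on the slicing
# MDP, keep the elites, refine them with a crude finite-difference estimate
# (standing in for the DRL gradient step), then mutate to keep exploring.
population = [rng.normal(scale=0.5, size=N_SLICES * OBS_DIM) for _ in range(POP_SIZE)]
for generation in range(20):
    scores = np.array([evaluate(th) for th in population])
    elites = [population[i] for i in np.argsort(scores)[-POP_SIZE // 2:]]
    refined = []
    for th in elites:
        eps = rng.normal(scale=0.05, size=th.shape)
        grad_est = (evaluate(th + eps) - evaluate(th - eps)) / 2.0 * eps
        refined.append(th + 0.1 * grad_est)
    population = refined + [th + rng.normal(scale=0.1, size=th.shape) for th in refined]

best = max(population, key=evaluate)
print("best average reward:", round(evaluate(best), 3))
```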
Related papers
- DRL Optimization Trajectory Generation via Wireless Network Intent-Guided Diffusion Models for Optimizing Resource Allocation [58.62766376631344]
We propose a customized wireless network intent-guided (WNI-G) model to address different state variations of wireless communication networks.
Extensive simulations show greater stability in spectral efficiency compared with traditional DRL models in dynamic communication systems.
arXiv Detail & Related papers (2024-10-18T14:04:38Z)
- Meta Reinforcement Learning Approach for Adaptive Resource Optimization in O-RAN [6.326120268549892]
Open Radio Access Network (O-RAN) addresses the variable demands of modern networks with unprecedented efficiency and adaptability.
This paper proposes a novel Meta Deep Reinforcement Learning (Meta-DRL) strategy, inspired by Model-Agnostic Meta-Learning (MAML), to advance resource block and downlink power allocation in O-RAN.
arXiv Detail & Related papers (2024-09-30T23:04:30Z)
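As a side note on the MAML idea this entry builds on, here is a minimal first-order MAML sketch on an invented 1-D regression family; it only illustrates the inner-adaptation/outer-meta-update pattern and is unrelated to the paper's actual Meta-DRL agent or O-RAN setting.

```python
# Toy first-order MAML: meta-learn an initialisation that adapts in one step.
# Task family, model, and step sizes are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
inner_lr, meta_lr = 0.05, 0.01
theta = np.zeros(2)                          # meta-learned initialisation: [w, b]

def sample_task():
    """Hypothetical task family: 1-D linear regression with a random slope."""
    slope = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=16)
    return x, slope * x

def grad(params, x, y):
    """Gradient of mean squared error for y_hat = w*x + b."""
    w, b = params
    err = (w * x + b) - y
    return np.array([np.mean(2 * err * x), np.mean(2 * err)])

for meta_step in range(2000):
    meta_grad = np.zeros_like(theta)
    for _ in range(4):                       # small batch of tasks
        x, y = sample_task()
        adapted = theta - inner_lr * grad(theta, x, y)   # inner adaptation step
        # first-order MAML: gradient at the adapted parameters updates the init
        # (reusing the same data for brevity; MAML normally uses held-out data)
        meta_grad += grad(adapted, x, y)
    theta -= meta_lr * meta_grad / 4

# On a fresh task, a single inner step from the learned init should fit well.
x, y = sample_task()
adapted = theta - inner_lr * grad(theta, x, y)
print("post-adaptation loss:", round(float(np.mean(((adapted[0] * x + adapted[1]) - y) ** 2)), 4))
```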
- Open RAN LSTM Traffic Prediction and Slice Management using Deep Reinforcement Learning [7.473473066047965]
This paper introduces a novel approach to O-RAN slicing using distributed deep reinforcement learning (DDRL).
Simulation results demonstrate significant improvements in network performance, particularly in reducing violations.
This emphasizes the importance of using the prediction rApp and distributed actors' information jointly as part of a dynamic xApp.
arXiv Detail & Related papers (2024-01-12T22:43:07Z)
- Attention-based Open RAN Slice Management using Deep Reinforcement Learning [6.177038245239758]
This paper introduces an innovative attention-based deep RL (ADRL) technique that leverages the O-RAN disaggregated modules and distributed agent cooperation.
Simulation results demonstrate significant improvements in network performance compared to other DRL baseline methods.
arXiv Detail & Related papers (2023-06-15T20:37:19Z)
- Intelligent O-RAN Traffic Steering for URLLC Through Deep Reinforcement Learning [3.59419219139168]
Open RAN (O-RAN) is a promising paradigm for building an intelligent RAN architecture.
This paper presents a Machine Learning (ML)-based Traffic Steering (TS) scheme to predict network congestion and then steer O-RAN traffic to avoid it and reduce the expected delay.
Our solution is evaluated against traditional reactive TS approaches offered as xApps in O-RAN and shows an average decrease of 15.81% in queuing delay across all deployed SFCs.
arXiv Detail & Related papers (2023-03-03T14:34:25Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
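For readers unfamiliar with the maximum-entropy objective that Soft Actor-Critic optimizes, the toy snippet below runs tabular soft Q-iteration on an invented three-state MDP and recovers the entropy-regularized (Boltzmann) policy; it is an illustration of the objective only, not MARLIN's actor-critic or its network environment.

```python
# Tabular illustration of the entropy-regularised objective behind SAC:
# maximise return plus alpha-weighted policy entropy on a tiny invented MDP.
import numpy as np

N_S, N_A = 3, 2                                    # tiny hypothetical MDP
alpha, gamma = 0.5, 0.9                            # entropy weight, discount
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(N_S), size=(N_S, N_A))   # P[s, a, s'] transition probs
R = rng.normal(size=(N_S, N_A))                    # hypothetical rewards

Q = np.zeros((N_S, N_A))
for _ in range(200):
    # soft state value: V(s) = alpha * log sum_a exp(Q(s, a) / alpha)
    V = alpha * np.log(np.exp(Q / alpha).sum(axis=1))
    Q = R + gamma * P @ V                          # soft Bellman backup

pi = np.exp(Q / alpha)
pi /= pi.sum(axis=1, keepdims=True)                # entropy-regularised policy
print("soft-optimal policy (rows = states, cols = actions):")
print(np.round(pi, 3))
```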
- Artificial Intelligence Empowered Multiple Access for Ultra Reliable and Low Latency THz Wireless Networks [76.89730672544216]
Terahertz (THz) wireless networks are expected to catalyze the beyond fifth generation (B5G) era.
To satisfy the ultra-reliability and low-latency demands of several B5G applications, novel mobility management approaches are required.
This article presents a holistic MAC layer approach that enables intelligent user association and resource allocation, as well as flexible and adaptive mobility management.
arXiv Detail & Related papers (2022-08-17T03:00:24Z)
- State-Augmented Learnable Algorithms for Resource Management in Wireless Networks [124.89036526192268]
We propose a state-augmented algorithm for solving resource management problems in wireless networks.
We show that the proposed algorithm leads to feasible and near-optimal radio resource management (RRM) decisions.
arXiv Detail & Related papers (2022-07-05T18:02:54Z)
- Pervasive Machine Learning for Smart Radio Environments Enabled by Reconfigurable Intelligent Surfaces [56.35676570414731]
The emerging technology of Reconfigurable Intelligent Surfaces (RISs) is envisioned as an enabler of smart wireless environments.
RISs offer a highly scalable, low-cost, hardware-efficient, and almost energy-neutral solution for dynamic control of the propagation of electromagnetic signals over the wireless medium.
One of the major challenges with the envisioned dense deployment of RISs in such reconfigurable radio environments is the efficient configuration of multiple metasurfaces.
arXiv Detail & Related papers (2022-05-08T06:21:33Z)
- Semantic-Aware Collaborative Deep Reinforcement Learning Over Wireless Cellular Networks [82.02891936174221]
Collaborative deep reinforcement learning (CDRL), in which multiple agents coordinate over a wireless network, is a promising approach.
In this paper, a novel semantic-aware CDRL method is proposed to enable a group of untrained agents with semantically-linked DRL tasks to collaborate efficiently across a resource-constrained wireless cellular network.
arXiv Detail & Related papers (2021-11-23T18:24:47Z)
- Deep Reinforcement Learning for Adaptive Network Slicing in 5G for Intelligent Vehicular Systems and Smart Cities [19.723551683930776]
We develop a network slicing model based on a cluster of fog nodes (FNs) coordinated with an edge controller (EC).
For each service request in a cluster, the EC decides which FN should execute the task and serve the request locally at the edge, or whether to reject the task and refer it to the cloud.
We propose a deep reinforcement learning (DRL) solution to adaptively learn the optimal slicing policy.
arXiv Detail & Related papers (2020-10-19T23:30:08Z)
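The edge/cloud admission decision described in the entry above can be made concrete with a small stand-in: the snippet below uses tabular Q-learning (in place of the paper's DRL agent) for an edge controller that either assigns a request to one of K fog nodes or refers it to the cloud; the state encoding, load dynamics, and rewards are all invented for illustration.

```python
# Hypothetical, simplified sketch of the EC's per-request decision as a small
# Q-learning problem (a tabular stand-in for the paper's DRL policy).
import numpy as np

rng = np.random.default_rng(1)

K_FN = 3                      # fog nodes in the cluster
ACTIONS = K_FN + 1            # action K_FN means "reject and refer to cloud"
LOAD_LEVELS = 4               # discretised load level per FN
N_STATES = LOAD_LEVELS ** K_FN

def encode(loads):
    """Encode the per-FN load vector into a single discrete state index."""
    s = 0
    for load in loads:
        s = s * LOAD_LEVELS + int(load)
    return s

Q = np.zeros((N_STATES, ACTIONS))
loads = np.zeros(K_FN, dtype=int)
alpha, gamma, eps = 0.1, 0.9, 0.1

for step in range(20000):
    s = encode(loads)
    a = rng.integers(ACTIONS) if rng.random() < eps else int(np.argmax(Q[s]))
    if a < K_FN:                               # serve at the edge on FN a
        reward = 1.0 if loads[a] < LOAD_LEVELS - 1 else -2.0   # overload penalty
        loads[a] = min(loads[a] + 1, LOAD_LEVELS - 1)
    else:                                      # refer to the cloud
        reward = -0.5                          # extra latency cost
    # requests complete over time, freeing edge capacity
    done = rng.random(K_FN) < 0.3
    loads = np.maximum(loads - done.astype(int), 0)
    s_next = encode(loads)
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

print("greedy action in the all-idle state:", int(np.argmax(Q[encode([0, 0, 0])])))
```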