Near-Real-Time Resource Slicing for QoS Optimization in 5G O-RAN using Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2509.14343v1
- Date: Wed, 17 Sep 2025 18:20:04 GMT
- Title: Near-Real-Time Resource Slicing for QoS Optimization in 5G O-RAN using Deep Reinforcement Learning
- Authors: Peihao Yan, Jie Lu, Huacheng Zeng, Y. Thomas Hou
- Abstract summary: This paper presents an xApp called xSlice for the Near-Real-Time (Near-RT) RAN Intelligent Controller (RIC) of 5G O-RANs. xSlice is an online learning algorithm that adaptively adjusts MAC-layer resource allocation in response to dynamic network states. Experimental results show that xSlice can reduce performance regret by 67% compared to state-of-the-art solutions.
- Score: 19.02610605908148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open-Radio Access Network (O-RAN) has become an important paradigm for 5G and beyond radio access networks. This paper presents an xApp called xSlice for the Near-Real-Time (Near-RT) RAN Intelligent Controller (RIC) of 5G O-RANs. xSlice is an online learning algorithm that adaptively adjusts MAC-layer resource allocation in response to dynamic network states, including time-varying wireless channel conditions, user mobility, traffic fluctuations, and changes in user demand. To address these network dynamics, we first formulate the Quality-of-Service (QoS) optimization problem as a regret minimization problem by quantifying the QoS demands of all traffic sessions through weighting their throughput, latency, and reliability. We then develop a deep reinforcement learning (DRL) framework that utilizes an actor-critic model to combine the advantages of both value-based and policy-based updating methods. A graph convolutional network (GCN) is incorporated as a component of the DRL framework for graph embedding of RAN data, enabling xSlice to handle a dynamic number of traffic sessions. We have implemented xSlice on an O-RAN testbed with 10 smartphones and conducted extensive experiments to evaluate its performance in realistic scenarios. Experimental results show that xSlice can reduce performance regret by 67% compared to the state-of-the-art solutions. Source code is available on GitHub [1].
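The abstract's formulation can be sketched minimally: a weighted per-session QoS score, the regret against an oracle allocation, and one GCN propagation step over a session graph so the model handles a variable number of sessions. The weights, features, and normalization below are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def qos_utility(thr, lat, rel, w=(0.5, 0.3, 0.2)):
    # Weighted per-session QoS score; the weights are hypothetical,
    # standing in for the paper's throughput/latency/reliability weighting.
    return w[0] * thr + w[1] * (1.0 / (1.0 + lat)) + w[2] * rel

def regret(achieved, optimal):
    # Performance regret: shortfall of the achieved total QoS
    # versus an oracle resource allocation.
    return max(0.0, optimal - achieved)

def gcn_layer(adj, feats, weight):
    # One graph-convolution step: self-loops, symmetric normalization, ReLU.
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight)

# Three sessions sharing a cell: the same layer works for any session count,
# which is why a GCN embedding suits a dynamic number of traffic sessions.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
feats = rng.random((3, 4))                 # per-session feature vectors
emb = gcn_layer(adj, feats, rng.random((4, 2)))
print(emb.shape)                           # (3, 2): one embedding per session
```

In the paper's framework these embeddings would feed an actor-critic network; here the layer is shown in isolation for clarity.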
Related papers
- Large Language Model (LLM)-enabled Reinforcement Learning for Wireless Network Optimization [79.27012080083603]
Large language models (LLMs) offer promising tools to enhance reinforcement learning in wireless networks. We propose an LLM-assisted state representation and semantic extraction to enhance the multi-agent reinforcement learning framework.
arXiv Detail & Related papers (2026-01-15T01:42:39Z)
- AI/ML Life Cycle Management for Interoperable AI Native RAN [50.61227317567369]
Artificial intelligence (AI) and machine learning (ML) models are rapidly permeating the 5G Radio Access Network (RAN). These developments lay the foundation for AI-native transceivers as a key enabler for 6G.
arXiv Detail & Related papers (2025-07-24T16:04:59Z)
- Open RAN LSTM Traffic Prediction and Slice Management using Deep Reinforcement Learning [7.473473066047965]
This paper introduces a novel approach to ORAN slicing using distributed deep reinforcement learning (DDRL).
Simulation results demonstrate significant improvements in network performance, particularly in reducing violations.
This emphasizes the importance of using the prediction rApp and distributed actors' information jointly as part of a dynamic xApp.
arXiv Detail & Related papers (2024-01-12T22:43:07Z)
- Generalizable Resource Scaling of 5G Slices using Constrained Reinforcement Learning [2.0024258465343268]
Network slicing is a key enabler for 5G to support various applications.
It is imperative that the 5G infrastructure provider (InP) allocates the right amount of resources depending on the slice's traffic.
arXiv Detail & Related papers (2023-06-15T17:16:34Z)
- Programmable and Customized Intelligence for Traffic Steering in 5G Networks Using Open RAN Architectures [16.48682480842328]
5G and beyond mobile networks will support heterogeneous use cases at an unprecedented scale.
Such fine-grained control of the Radio Access Network (RAN) is not possible with the current cellular architecture.
We propose an open architecture with abstractions that enable closed-loop control and provide data-driven, intelligent optimization of the RAN at the user level.
arXiv Detail & Related papers (2022-09-28T15:31:06Z)
- Federated Meta-Learning for Traffic Steering in O-RAN [1.400970992993106]
We propose an algorithm for RAT allocation based on federated meta-learning (FML).
We have designed a simulation environment which contains LTE and 5G NR service technologies.
arXiv Detail & Related papers (2022-09-13T10:39:41Z)
- Artificial Intelligence Empowered Multiple Access for Ultra Reliable and Low Latency THz Wireless Networks [76.89730672544216]
Terahertz (THz) wireless networks are expected to catalyze the beyond fifth generation (B5G) era.
To satisfy the ultra-reliability and low-latency demands of several B5G applications, novel mobility management approaches are required.
This article presents a holistic MAC layer approach that enables intelligent user association and resource allocation, as well as flexible and adaptive mobility management.
arXiv Detail & Related papers (2022-08-17T03:00:24Z)
- Implementing Reinforcement Learning Datacenter Congestion Control in NVIDIA NICs [64.26714148634228]
Congestion control (CC) algorithms have become extremely difficult to design.
It is currently not possible to deploy AI models on network devices due to their limited computational capabilities.
We build a computationally-light solution based on a recent reinforcement learning CC algorithm.
arXiv Detail & Related papers (2022-07-05T20:42:24Z)
- Real-Time GPU-Accelerated Machine Learning Based Multiuser Detection for 5G and Beyond [70.81551587109833]
Nonlinear beamforming filters can significantly outperform linear approaches in stationary scenarios with massive connectivity.
One of the main challenges comes from the real-time implementation of these algorithms.
This paper explores the acceleration of APSM-based algorithms through massive parallelization.
arXiv Detail & Related papers (2022-01-13T15:20:45Z)
- Reinforcement Learning for Datacenter Congestion Control [50.225885814524304]
Successful congestion control algorithms can dramatically improve latency and overall network throughput.
Until today, no such learning-based algorithms have shown practical potential in this domain.
We devise an RL-based algorithm with the aim of generalizing to different configurations of real-world datacenter networks.
We show that this scheme outperforms alternative popular RL approaches, and generalizes to scenarios that were not seen during training.
arXiv Detail & Related papers (2021-02-18T13:49:28Z)
- Deep Reinforcement Learning for Adaptive Network Slicing in 5G for Intelligent Vehicular Systems and Smart Cities [19.723551683930776]
We develop a network slicing model based on a cluster of fog nodes (FNs) coordinated with an edge controller (EC). For each service request in a cluster, the EC decides which FN should execute the task, whether to serve the request locally at the edge, or whether to reject it and refer it to the cloud.
We propose a deep reinforcement learning (DRL) solution to adaptively learn the optimal slicing policy.
arXiv Detail & Related papers (2020-10-19T23:30:08Z)
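The edge controller's per-request decision in the last paper can be illustrated with a minimal tabular Q-learning sketch. The paper's actual solution uses deep reinforcement learning; the states, actions, and reward here are hypothetical toy stand-ins.

```python
import random

# Hypothetical actions for the edge controller (EC) on each service request:
# offload to a fog node, serve locally at the edge, or refer to the cloud.
ACTIONS = ["fog_node_0", "fog_node_1", "serve_at_edge", "refer_to_cloud"]
STATES = ["low_load", "high_load"]

def choose_action(q_row, epsilon=0.1, rng=random):
    # Epsilon-greedy policy: mostly exploit the best-known action, sometimes explore.
    if rng.random() < epsilon:
        return rng.randrange(len(q_row))
    return max(range(len(q_row)), key=lambda a: q_row[a])

def td_update(q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    # One-step Q-learning (temporal-difference) update.
    q[s][a] += alpha * (reward + gamma * max(q[s_next]) - q[s][a])

# Toy training loop: the reward favors edge serving under low load. We sweep
# every state-action pair so convergence does not depend on exploration luck.
q = {s: [0.0] * len(ACTIONS) for s in STATES}
for _ in range(300):
    for s in STATES:
        for a in range(len(ACTIONS)):
            r = 1.0 if (s == "low_load" and ACTIONS[a] == "serve_at_edge") else 0.0
            td_update(q, s, a, r, s)   # self-loop next state for simplicity

best = ACTIONS[max(range(len(ACTIONS)), key=lambda a: q["low_load"][a])]
print(best)   # learned greedy action under low load
```

A deep RL agent replaces the Q-table with a neural network so it can generalize across continuous load and channel states, which is what makes the approach viable at fog-cluster scale.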
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.