Deep reinforcement learning for RAN optimization and control
- URL: http://arxiv.org/abs/2011.04607v2
- Date: Tue, 19 Jan 2021 16:47:01 GMT
- Title: Deep reinforcement learning for RAN optimization and control
- Authors: Yu Chen, Jie Chen, Ganesh Krishnamurthi, Huijing Yang, Huahui Wang,
Wenjie Zhao
- Abstract summary: We aim to build an intelligent controller that requires no strong assumptions or domain knowledge about the RAN.
We first build a closed-loop control RAN testbed in a lab environment with one eNodeB provided by one of the largest wireless vendors.
Next, we build a double Q-network agent trained with live feedback of the key performance indicators from the RAN.
- Score: 6.964699504779571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the high variability of traffic in the radio access network (RAN),
fixed network configurations are not flexible enough to achieve optimal
performance. Our vendors provide several eNodeB settings for optimizing RAN
performance, such as the media access control scheduler and load balancing.
However, the detailed mechanisms behind these eNodeB configurations are usually
very complicated and not disclosed, and the space of key performance indicators
(KPIs) that must be considered is large. Together, these factors make
constructing a simulator, offline tuning, or rule-based solutions difficult. We
aim to build an intelligent controller that requires no strong assumptions or
domain knowledge about the RAN and can run 24/7 without supervision. To achieve
this goal, we first build a closed-loop control RAN testbed in a lab
environment with one eNodeB, provided by one of the largest wireless vendors,
and four smartphones. Next, we build a double Q-network agent trained with live
feedback of the key performance indicators from the RAN. Our work demonstrated
the effectiveness of applying deep reinforcement learning to improve network
performance in a real RAN environment.
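As a rough illustration of the approach summarized above (not the authors' released code), the sketch below shows a double Q-network update over a discrete set of eNodeB configuration presets, with the state given by a vector of live KPIs. The KPI dimension, number of presets, network size, and reward definition are illustrative assumptions, not the paper's actual setup.

```python
# Sketch only: a double Q-network (double DQN) update for closed-loop RAN
# control. State = vector of live KPIs, action = choice of one hypothetical
# eNodeB configuration preset, reward = assumed function of the KPIs.
import random
from collections import deque

import torch
import torch.nn as nn

KPI_DIM = 8      # assumed number of KPIs read per control interval
N_CONFIGS = 4    # assumed number of selectable eNodeB configuration presets
GAMMA = 0.9      # discount factor (illustrative)

def make_qnet() -> nn.Module:
    return nn.Sequential(nn.Linear(KPI_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_CONFIGS))

online_net, target_net = make_qnet(), make_qnet()
target_net.load_state_dict(online_net.state_dict())
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # filled as (kpis, action, reward, next_kpis)

def select_action(kpis, epsilon=0.1):
    """Epsilon-greedy choice of an eNodeB configuration preset."""
    if random.random() < epsilon:
        return random.randrange(N_CONFIGS)
    with torch.no_grad():
        q = online_net(torch.as_tensor(kpis, dtype=torch.float32))
        return int(q.argmax())

def train_step(batch_size=32):
    """One double-DQN update: the online net selects the next action,
    the target net evaluates it, which reduces overestimation bias."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = zip(*random.sample(replay, batch_size))
    s = torch.as_tensor(s, dtype=torch.float32)
    a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
    r = torch.as_tensor(r, dtype=torch.float32)
    s2 = torch.as_tensor(s2, dtype=torch.float32)
    q_sa = online_net(s).gather(1, a).squeeze(1)
    with torch.no_grad():
        next_a = online_net(s2).argmax(dim=1, keepdim=True)   # action selection
        q_next = target_net(s2).gather(1, next_a).squeeze(1)  # action evaluation
        target = r + GAMMA * q_next
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a closed-loop setup of the kind the abstract describes, the agent would read KPIs from the eNodeB each control interval, push the transition into `replay`, call `train_step`, and periodically copy the online weights into the target network.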
Related papers
- FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs).
We show that our system and method can achieve a 1.45-9.39x speedup compared to baseline methods while ensuring convergence.
arXiv Detail & Related papers (2024-10-16T16:13:19Z)
- Intelligent Load Balancing and Resource Allocation in O-RAN: A Multi-Agent Multi-Armed Bandit Approach [4.834203844100679]
We propose a multi-agent multi-armed bandit for load balancing and resource allocation (mmLBRA) scheme.
We also present the mmLBRA-LB and mmLBRA-RA sub-schemes that can operate independently in non-realtime RAN intelligent controller (Non-RT RIC) and near-RT RIC, respectively.
arXiv Detail & Related papers (2023-03-25T04:42:30Z)
- Intelligent O-RAN Traffic Steering for URLLC Through Deep Reinforcement Learning [3.59419219139168]
Open RAN (O-RAN) is a promising paradigm for building an intelligent RAN architecture.
This paper presents a Machine Learning (ML)-based Traffic Steering (TS) scheme to predict network congestion and then steer O-RAN traffic to avoid it and reduce the expected delay.
Our solution is evaluated against traditional reactive TS approaches that are offered as xApps in O-RAN and shows an average of 15.81 percent decrease in queuing delay across all deployed SFCs.
arXiv Detail & Related papers (2023-03-03T14:34:25Z)
- Network-Aided Intelligent Traffic Steering in 6G O-RAN: A Multi-Layer Optimization Framework [47.57576667752444]
We jointly optimize the flow-split distribution, congestion control and scheduling (JFCS) to enable an intelligent steering application in open RAN (O-RAN).
Our main contributions are three-fold: i) we propose the novel JFCS framework to efficiently and adaptively direct traffic to appropriate radio units; ii) we develop low-complexity algorithms based on reinforcement learning, inner approximation, and bisection search to effectively solve the JFCS problem at different time scales; and iii) we provide a rigorous theoretical performance analysis showing that there exists a scaling factor that improves the tradeoff between delay and utility optimization.
arXiv Detail & Related papers (2023-02-06T11:37:06Z)
- Programmable and Customized Intelligence for Traffic Steering in 5G Networks Using Open RAN Architectures [16.48682480842328]
5G and beyond mobile networks will support heterogeneous use cases at an unprecedented scale.
Such fine-grained control of the Radio Access Network (RAN) is not possible with the current cellular architecture.
We propose an open architecture with abstractions that enable closed-loop control and provide data-driven and intelligent optimization of the RAN at the user level.
arXiv Detail & Related papers (2022-09-28T15:31:06Z)
- Pervasive Machine Learning for Smart Radio Environments Enabled by Reconfigurable Intelligent Surfaces [56.35676570414731]
The emerging technology of Reconfigurable Intelligent Surfaces (RISs) is provisioned as an enabler of smart wireless environments.
RISs offer a highly scalable, low-cost, hardware-efficient, and almost energy-neutral solution for dynamic control of the propagation of electromagnetic signals over the wireless medium.
One of the major challenges with the envisioned dense deployment of RISs in such reconfigurable radio environments is the efficient configuration of multiple metasurfaces.
arXiv Detail & Related papers (2022-05-08T06:21:33Z)
- OrchestRAN: Network Automation through Orchestrated Intelligence in the Open RAN [27.197110488665157]
We present and prototype OrchestRAN, a novel orchestration framework for network intelligence.
OrchestRAN has been designed to execute in the non-real-time RAN Intelligent Controller (RIC) and allows Network Operators (NOs) to specify high-level control/inference objectives.
We show that the problem of orchestrating intelligence in Open RAN is NP-hard, and design low-complexity solutions to support real-world applications.
arXiv Detail & Related papers (2022-01-14T19:20:34Z)
- Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)
- Cognitive Radio Network Throughput Maximization with Deep Reinforcement Learning [58.44609538048923]
Radio Frequency powered Cognitive Radio Networks (RF-CRN) are likely to be the eyes and ears of upcoming modern networks such as the Internet of Things (IoT).
To be considered autonomous, the RF-powered network entities need to make decisions locally to maximize the network throughput under the uncertainty of any network environment.
In this paper, deep reinforcement learning is proposed to overcome the shortcomings and allow a wireless gateway to derive an optimal policy to maximize network throughput.
arXiv Detail & Related papers (2020-07-07T01:49:07Z)
- Wireless Power Control via Counterfactual Optimization of Graph Neural Networks [124.89036526192268]
We consider the problem of downlink power control in wireless networks, consisting of multiple transmitter-receiver pairs communicating over a single shared wireless medium.
To mitigate the interference among concurrent transmissions, we leverage the network topology to create a graph neural network architecture.
We then use an unsupervised primal-dual counterfactual optimization approach to learn optimal power allocation decisions.
arXiv Detail & Related papers (2020-02-17T07:54:39Z)
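For the last entry above, a minimal structural sketch of the kind of topology-aware model it describes is given below. It assumes transmitter-receiver pairs whose channel gains form an interference graph; the layer sizes and the single message-passing step are illustrative assumptions, and the unsupervised primal-dual counterfactual training described in that paper is not reproduced here.

```python
# Illustrative sketch (not the paper's architecture): one message-passing step
# over the interference graph, mapping channel gains to per-pair transmit powers.
import torch
import torch.nn as nn

class InterferenceAwarePowerNet(nn.Module):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.direct_mlp = nn.Linear(1, hidden)   # feature of the direct link gain
        self.interf_mlp = nn.Linear(1, hidden)   # feature of aggregated interference
        self.out = nn.Linear(hidden, 1)

    def forward(self, H: torch.Tensor) -> torch.Tensor:
        # H: [n, n] channel-gain matrix; H[i, i] is pair i's direct link,
        # H[i, j] (i != j) is the gain from transmitter j to receiver i.
        direct = torch.diagonal(H).unsqueeze(1)                    # [n, 1]
        off_diag = H - torch.diag_embed(torch.diagonal(H))         # zero the diagonal
        agg_interf = off_diag.sum(dim=1, keepdim=True)             # [n, 1]
        h = torch.relu(self.direct_mlp(direct) + self.interf_mlp(agg_interf))
        return torch.sigmoid(self.out(h)).squeeze(1)               # powers in (0, 1)

# Example: random 5-pair interference graph -> per-pair power fractions.
powers = InterferenceAwarePowerNet()(torch.rand(5, 5))
```

In the referenced work the weights would be learned without labels, by optimizing a network utility subject to constraints through a primal-dual counterfactual procedure; only the graph-structured forward pass is sketched here.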