ColO-RAN: Developing Machine Learning-based xApps for Open RAN
Closed-loop Control on Programmable Experimental Platforms
- URL: http://arxiv.org/abs/2112.09559v1
- Date: Fri, 17 Dec 2021 15:14:22 GMT
- Title: ColO-RAN: Developing Machine Learning-based xApps for Open RAN
Closed-loop Control on Programmable Experimental Platforms
- Authors: Michele Polese, Leonardo Bonati, Salvatore D'Oro, Stefano Basagni,
Tommaso Melodia
- Abstract summary: ColO-RAN is the first publicly-available large-scale O-RAN testing framework with software-defined radios-in-the-loop.
ColO-RAN enables ML research at scale using O-RAN components, programmable base stations, and a "wireless data factory".
Extensive results from our first-of-its-kind large-scale evaluation highlight the benefits and challenges of DRL-based adaptive control.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In spite of the new opportunities brought about by the Open RAN, advances in
ML-based network automation have been slow, mainly because of the
unavailability of large-scale datasets and experimental testing infrastructure.
This slows down the development and widespread adoption of Deep Reinforcement
Learning (DRL) agents on real networks, delaying progress in intelligent and
autonomous RAN control. In this paper, we address these challenges by proposing
practical solutions and software pipelines for the design, training, testing,
and experimental evaluation of DRL-based closed-loop control in the Open RAN.
We introduce ColO-RAN, the first publicly-available large-scale O-RAN testing
framework with software-defined radios-in-the-loop. Building on the scale and
computational capabilities of the Colosseum wireless network emulator, ColO-RAN
enables ML research at scale using O-RAN components, programmable base
stations, and a "wireless data factory". Specifically, we design and develop
three exemplary xApps for DRL-based control of RAN slicing, scheduling and
online model training, and evaluate their performance on a cellular network
with 7 softwarized base stations and 42 users. Finally, we showcase the
portability of ColO-RAN to different platforms by deploying it on Arena, an
indoor programmable testbed. Extensive results from our first-of-its-kind
large-scale evaluation highlight the benefits and challenges of DRL-based
adaptive control. They also provide insights on the development of wireless DRL
pipelines, from data analysis to the design of DRL agents, and on the tradeoffs
associated with training on a live RAN. ColO-RAN and the collected large-scale
dataset will be made publicly available to the research community.
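The closed loop the abstract describes (an xApp observes RAN KPIs, applies a slicing policy, and learns from the outcome) can be sketched as follows. This is a minimal, self-contained illustration under stated assumptions: the `SlicingXapp` class, the KPI encoding, and `toy_ran_step` are hypothetical stand-ins rather than the ColO-RAN API, and a tabular Q-learning agent takes the place of the paper's DRL agents.

```python
import random

# Minimal sketch of DRL-style closed-loop slicing control. All names here
# (SlicingXapp, the KPI encoding, toy_ran_step) are hypothetical stand-ins,
# not the ColO-RAN API; tabular Q-learning substitutes for the paper's DRL
# agents to keep the example self-contained.

ACTIONS = [(2, 5), (3, 4), (4, 3), (5, 2)]  # candidate PRB splits (eMBB, URLLC)

class SlicingXapp:
    def __init__(self, eps=0.1, alpha=0.5, gamma=0.9, seed=0):
        self.q = {}                      # Q-table: (state, action index) -> value
        self.eps, self.alpha, self.gamma = eps, alpha, gamma
        self.rng = random.Random(seed)

    def _state(self, kpis):
        # Discretize the reported per-slice throughput KPIs into a coarse state.
        return tuple(int(v // 10) for v in kpis)

    def choose(self, kpis):
        # Epsilon-greedy selection over the candidate slicing policies.
        s = self._state(kpis)
        if self.rng.random() < self.eps:
            return self.rng.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=lambda a: self.q.get((s, a), 0.0))

    def update(self, kpis, action, reward, next_kpis):
        # Standard one-step Q-learning update.
        s, s2 = self._state(kpis), self._state(next_kpis)
        best_next = max(self.q.get((s2, a), 0.0) for a in range(len(ACTIONS)))
        old = self.q.get((s, action), 0.0)
        self.q[(s, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

def toy_ran_step(action_idx):
    # Stand-in for the RAN: the reward peaks at the balanced (3, 4) split.
    embb, urllc = ACTIONS[action_idx]
    reward = -abs(embb - 3) - abs(urllc - 4)
    kpis = (embb * 10.0, urllc * 8.0)    # fabricated per-slice throughputs
    return reward, kpis

xapp = SlicingXapp()
kpis = (30.0, 32.0)
for _ in range(500):                     # closed loop: observe -> act -> learn
    action = xapp.choose(kpis)
    reward, next_kpis = toy_ran_step(action)
    xapp.update(kpis, action, reward, next_kpis)
    kpis = next_kpis

state = (3, 3)                           # state reached under the best split
best = max(range(len(ACTIONS)), key=lambda a: xapp.q.get((state, a), 0.0))
print(ACTIONS[best])                     # -> (3, 4)
```

In an actual O-RAN deployment, the observe and act steps would go through the near-real-time RIC over the E2 interface instead of a local function call; the loop structure is the same.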
Related papers
- SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning [85.21378553454672]
We develop a library containing a sample efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB board assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates and extreme robustness under perturbations, and exhibit emergent recovery and correction behaviors.
arXiv Detail & Related papers (2024-01-29T10:01:10Z)
- Katakomba: Tools and Benchmarks for Data-Driven NetHack [52.0035089982277]
NetHack is known as the frontier of reinforcement learning research.
We argue that there are three major obstacles for adoption: resource-wise, implementation-wise, and benchmark-wise.
We develop an open-source library that provides workflow fundamentals familiar to the offline reinforcement learning community.
arXiv Detail & Related papers (2023-06-14T22:50:25Z)
- Sparsity-Aware Intelligent Massive Random Access Control in Open RAN: A Reinforcement Learning Based Approach [61.74489383629319]
Massive random access of devices in the emerging Open Radio Access Network (O-RAN) poses great challenges to access control and management.
A reinforcement-learning (RL)-assisted scheme of closed-loop access control is proposed to preserve the sparsity of access requests.
A deep-RL-assisted SAUD is proposed to resolve highly complex environments with continuous and high-dimensional state and action spaces.
arXiv Detail & Related papers (2023-03-05T12:25:49Z)
- Programmable and Customized Intelligence for Traffic Steering in 5G Networks Using Open RAN Architectures [16.48682480842328]
5G and beyond mobile networks will support heterogeneous use cases at an unprecedented scale.
Such fine-grained control of the Radio Access Network (RAN) is not possible with the current cellular architecture.
We propose an open architecture with abstractions that enable closed-loop control and provide data-driven and intelligent optimization of the RAN at the user level.
arXiv Detail & Related papers (2022-09-28T15:31:06Z)
- Intelligent Closed-loop RAN Control with xApps in OpenRAN Gym [28.37831674645226]
We discuss how to design AI/ML solutions for the intelligent closed-loop control of the Open RAN.
We show how to embed these solutions into xApps instantiated on the O-RAN near-real-time RAN Intelligent Controller (RIC) through OpenRAN Gym.
arXiv Detail & Related papers (2022-08-31T14:09:12Z)
- OpenRAN Gym: AI/ML Development, Data Collection, and Testing for O-RAN on PAWR Platforms [28.37831674645226]
OpenRAN Gym is a unified, open, and O-RAN-compliant experimental toolbox for data collection, design, prototyping and testing of end-to-end data-driven control solutions.
OpenRAN Gym and its software components are open-source and publicly-available to the research community.
arXiv Detail & Related papers (2022-07-25T17:22:25Z)
- MR-iNet Gym: Framework for Edge Deployment of Deep Reinforcement Learning on Embedded Software Defined Radio [3.503370263836711]
We design and deploy deep reinforcement learning-based power control agents on GPU-embedded software-defined radios (SDRs).
To prove feasibility, we consider the problem of distributed power control for direct-sequence code-division multiple access (DS-CDMA)-based LPI/D transceivers.
We train the power control DRL agents in this ns3-gym simulation environment in a scenario that replicates our hardware testbed.
arXiv Detail & Related papers (2022-04-09T16:28:43Z)
- RLOps: Development Life-cycle of Reinforcement Learning Aided Open RAN [4.279828770269723]
This article introduces principles for machine learning (ML), in particular, reinforcement learning (RL) relevant for the Open RAN stack.
We provide a taxonomy of the challenges faced by ML/RL models throughout the development life-cycle.
We discuss all fundamental parts of RLOps, which include: model specification, development and distillation, production environment serving, operations monitoring, safety/security and data engineering platform.
arXiv Detail & Related papers (2021-11-12T22:57:09Z)
- DriverGym: Democratising Reinforcement Learning for Autonomous Driving [75.91049219123899]
We propose DriverGym, an open-source environment for developing reinforcement learning algorithms for autonomous driving.
DriverGym provides access to more than 1000 hours of expert logged data and also supports reactive and data-driven agent behavior.
The performance of an RL policy can be easily validated on real-world data using our extensive and flexible closed-loop evaluation protocol.
arXiv Detail & Related papers (2021-11-12T11:47:08Z)
- Reinforcement Learning for Datacenter Congestion Control [50.225885814524304]
Successful congestion control algorithms can dramatically improve latency and overall network throughput.
To date, no such learning-based algorithms have shown practical potential in this domain.
We devise an RL-based algorithm with the aim of generalizing to different configurations of real-world datacenter networks.
We show that this scheme outperforms alternative popular RL approaches, and generalizes to scenarios that were not seen during training.
arXiv Detail & Related papers (2021-02-18T13:49:28Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.