Symbolic Reinforcement Learning for Safe RAN Control
- URL: http://arxiv.org/abs/2103.06602v1
- Date: Thu, 11 Mar 2021 10:56:49 GMT
- Title: Symbolic Reinforcement Learning for Safe RAN Control
- Authors: Alexandros Nikou, Anusha Mujumdar, Marin Orlic, Aneta Vulgarakis Feljan
- Abstract summary: We show a Symbolic Reinforcement Learning (SRL) architecture for safe control in Radio Access Network (RAN) applications.
In our tool, a user can select high-level safety specifications expressed in Linear Temporal Logic (LTL) to shield an RL agent running in a given cellular network.
We demonstrate the user interface (UI) that helps the user set intent specifications for the architecture and inspect the difference between allowed and blocked actions.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we demonstrate a Symbolic Reinforcement Learning (SRL) architecture for safe control in Radio Access Network (RAN) applications. In our automated tool, a user can select high-level safety specifications expressed in Linear Temporal Logic (LTL) to shield an RL agent running in a given cellular network with the aim of optimizing network performance, as measured through certain Key Performance Indicators (KPIs). In the proposed architecture, network safety shielding is ensured through model-checking techniques over combined discrete system models (automata) that are abstracted through reinforcement learning. We demonstrate the user interface (UI) that helps the user set intent specifications for the architecture and inspect the difference between allowed and blocked actions.
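To make the shielding loop concrete, here is a minimal sketch in Python (the KPI name, the 5 Mbps invariant, and the one-step predictor are illustrative assumptions, not the authors' implementation): the agent proposes actions, and a monitor derived from the LTL specification blocks any action whose predicted next KPIs would violate it.

```python
# Minimal sketch of LTL-style shielding for a RAN RL agent.
# The KPI names, thresholds, and predictor are illustrative assumptions.

BLOCKED = "violation"  # absorbing "bad" state of the safety monitor


def safety_monitor(kpis: dict) -> str:
    """Toy monitor for an LTL invariant such as G(throughput >= 5 Mbps)."""
    return "safe" if kpis["throughput_mbps"] >= 5.0 else BLOCKED


def shield(state, candidate_actions, predict_next_kpis):
    """Split candidate actions into allowed and blocked sets.

    `predict_next_kpis(state, action)` stands in for the discrete system
    model (automaton) abstracted from network data via reinforcement learning.
    """
    allowed, blocked = [], []
    for action in candidate_actions:
        next_kpis = predict_next_kpis(state, action)
        (allowed if safety_monitor(next_kpis) != BLOCKED else blocked).append(action)
    return allowed, blocked


# Usage with a fake one-step predictor and three antenna-tilt adjustments.
def fake_predictor(state, action):
    return {"throughput_mbps": state["throughput_mbps"] + action}


allowed, blocked = shield({"throughput_mbps": 6.0}, [-2.0, 0.0, 1.0], fake_predictor)
print("allowed:", allowed, "blocked:", blocked)  # allowed: [0.0, 1.0], blocked: [-2.0]
```

In the actual architecture the predictor role is played by discrete automata abstracted from network data, and the safety check is a model-checking query over the combined models rather than a one-step threshold test.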
Related papers
- Approximate Model-Based Shielding for Safe Reinforcement Learning [83.55437924143615]
We propose a principled look-ahead shielding algorithm for verifying the performance of learned RL policies.
Our algorithm differs from other shielding approaches in that it does not require prior knowledge of the safety-relevant dynamics of the system.
We demonstrate superior performance to other safety-aware approaches on a set of Atari games with state-dependent safety labels.
arXiv Detail & Related papers (2023-07-27T15:19:45Z)
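As a rough picture of the look-ahead idea (a hedged sketch with assumed names such as `learned_model` and `is_unsafe`; not the paper's algorithm): simulate a few steps of a learned model starting from the candidate action and fall back to a backup action if any predicted state is unsafe.

```python
# Hedged sketch of look-ahead shielding with a learned model; all names are
# illustrative assumptions rather than the paper's actual components.

def lookahead_safe(state, action, learned_model, policy, is_unsafe, horizon=5):
    """Simulate `horizon` steps with the learned model; reject the action if
    any predicted state is flagged unsafe."""
    s = learned_model(state, action)
    for _ in range(horizon):
        if is_unsafe(s):
            return False
        s = learned_model(s, policy(s))
    return not is_unsafe(s)


def shielded_action(state, agent_action, backup_action, **kw):
    """Fall back to a known-safe backup action when the look-ahead check fails."""
    return agent_action if lookahead_safe(state, agent_action, **kw) else backup_action


# Toy usage: 1-D state, unsafe if |x| > 10, learned model drifts by the action.
model = lambda s, a: s + a
policy = lambda s: 1.0
unsafe = lambda s: abs(s) > 10
print(shielded_action(0.0, agent_action=3.0, backup_action=0.0,
                      learned_model=model, policy=policy, is_unsafe=unsafe))
# prints 3.0: the agent's action passes the look-ahead check
```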
- Value Functions are Control Barrier Functions: Verification of Safe Policies using Control Theory [46.85103495283037]
We propose a new approach to apply verification methods from control theory to learned value functions.
We formalize original theorems that establish links between value functions and control barrier functions.
Our work marks a significant step towards a formal framework for the general, scalable, and verifiable design of RL-based control systems.
arXiv Detail & Related papers (2023-06-06T21:41:31Z)
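For intuition about the link (a minimal sketch; `V`, `c`, and `alpha` are assumptions rather than the paper's constructions): a learned value function can be tested against a discrete-time barrier-style condition whose satisfaction on the safe set implies forward invariance.

```python
# Sketch of checking a discrete-time barrier-style condition on a learned
# value function V (hypothetical); c and alpha are assumed parameters.

def barrier(V, s, c):
    """Candidate barrier h(s) = V(s) - c; the safe set is {s : h(s) >= 0}."""
    return V(s) - c

def satisfies_cbf_condition(V, s, s_next, c, alpha=0.5):
    """Discrete-time condition h(s') - h(s) >= -alpha * h(s): if it holds
    everywhere on {h >= 0}, that set is forward invariant."""
    h, h_next = barrier(V, s, c), barrier(V, s_next, c)
    return h_next - h >= -alpha * h

# Toy usage: V decays with distance from the origin; c sets the safe level.
V = lambda s: 10.0 - abs(s)
print(satisfies_cbf_condition(V, s=2.0, s_next=2.5, c=5.0))  # True:  2.5 >= (1 - 0.5) * 3
print(satisfies_cbf_condition(V, s=2.0, s_next=4.5, c=5.0))  # False: 0.5 <  (1 - 0.5) * 3
```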
- Sparsity-Aware Intelligent Massive Random Access Control in Open RAN: A Reinforcement Learning Based Approach [61.74489383629319]
Massive random access of devices in the emerging Open Radio Access Network (O-RAN) brings great challenges to access control and management.
A reinforcement-learning (RL)-assisted scheme for closed-loop access control is proposed to preserve the sparsity of access requests.
A deep-RL-assisted SAUD scheme is proposed to handle highly complex environments with continuous and high-dimensional state and action spaces.
arXiv Detail & Related papers (2023-03-05T12:25:49Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
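A toy version of the safety-projection ingredient might look as follows (assumed names and a scalar action; not USL's actual update rule): the proposed action is corrected until a learned cost estimate fits within the safety budget.

```python
# Rough sketch of a safety-projection step; the cost model and budget are
# illustrative assumptions, not the method's learned components.

def project_to_safe(action, cost_fn, budget, step=0.1, iters=50, eps=1e-3):
    """Gradient-style correction of a scalar action using finite differences."""
    a = action
    for _ in range(iters):
        if cost_fn(a) <= budget:
            break
        grad = (cost_fn(a + eps) - cost_fn(a - eps)) / (2 * eps)
        a -= step * grad  # move against the predicted-cost gradient
    return a

# Toy usage: predicted cost grows quadratically with the action magnitude.
cost = lambda a: a ** 2
print(round(project_to_safe(2.0, cost, budget=1.0), 2))  # ~0.82, cost now within budget
```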
- Safe-Critical Modular Deep Reinforcement Learning with Temporal Logic through Gaussian Processes and Control Barrier Functions [3.5897534810405403]
Reinforcement learning (RL) is a promising approach but has seen limited success in real-world applications.
In this paper, we propose a learning-based control framework consisting of several components.
We show that such an ECBF-based modular deep RL algorithm achieves near-perfect success rates and guards safety with high probability.
arXiv Detail & Related papers (2021-09-07T00:51:12Z)
- Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user set intent specifications for the system and inspect the difference in agent-proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z)
- Model-Based Safe Policy Search from Signal Temporal Logic Specifications Using Recurrent Neural Networks [1.005130974691351]
We propose a policy search approach to learn controllers from specifications given as Signal Temporal Logic (STL) formulae.
The system model is unknown, and it is learned together with the control policy.
The results show that our approach can satisfy the given specification within very few system runs, and therefore it has the potential to be used for on-line control.
arXiv Detail & Related papers (2021-03-29T20:21:55Z)
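STL formulae admit a quantitative robustness score that such a policy search can maximize over sampled system runs; the sketch below uses one assumed toy formula and bounds, not the paper's specifications or its recurrent model.

```python
# Sketch of STL quantitative semantics for a simple assumed spec:
# phi = G(x < x_max) AND F(x > x_goal); positive robustness means phi holds.

def robustness(trace, x_max=5.0, x_goal=3.0):
    always_below = min(x_max - x for x in trace)       # G(x < x_max)
    eventually_above = max(x - x_goal for x in trace)  # F(x > x_goal)
    return min(always_below, eventually_above)

# A policy-search loop would maximize this score over sampled system runs.
print(robustness([0.0, 1.5, 3.5, 4.0]))  # 1.0  -> specification satisfied
print(robustness([0.0, 1.0, 2.0, 2.5]))  # -0.5 -> F(x > 3) violated
```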
- Certified Reinforcement Learning with Logic Guidance [78.2286146954051]
We propose a model-free RL algorithm that enables the use of Linear Temporal Logic (LTL) to formulate a goal for unknown continuous-state/action Markov Decision Processes (MDPs).
The algorithm is guaranteed to synthesise a control policy whose traces satisfy the specification with maximal probability.
arXiv Detail & Related papers (2019-02-02T20:09:32Z)
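The flavor of automaton-guided rewards can be sketched with a toy monitor for one assumed formula (the paper's construction for general LTL over continuous-state/action MDPs is considerably more involved).

```python
# Minimal sketch of guiding RL with an LTL goal via an automaton monitor.
# The formula and labels are toy assumptions: phi = (!bad U goal).

ACC, REJ = "accept", "reject"

def monitor_step(q, label):
    """Advance the toy automaton for phi on the label of the visited state."""
    if q in (ACC, REJ):
        return q  # absorbing
    if label == "bad":
        return REJ
    return ACC if label == "goal" else q

def shaped_reward(q, q_next):
    """Reward the agent only when the run first satisfies the LTL objective."""
    return 1.0 if (q != ACC and q_next == ACC) else 0.0

# Usage inside an RL loop: the agent learns on product states (s, q).
q = "init"
for label in ["safe", "safe", "goal"]:  # labels of visited MDP states
    q_next = monitor_step(q, label)
    print(shaped_reward(q, q_next), q_next)  # 0.0 init / 0.0 init / 1.0 accept
    q = q_next
```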
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.