Multi-agent reinforcement learning for intent-based service assurance in
cellular networks
- URL: http://arxiv.org/abs/2208.03740v1
- Date: Sun, 7 Aug 2022 14:42:58 GMT
- Title: Multi-agent reinforcement learning for intent-based service assurance in
cellular networks
- Authors: Satheesh K. Perepu, Jean P. Martins, Ricardo Souza S, Kaushik Dey
- Abstract summary: Multi-agent reinforcement learning (MARL) techniques have shown significant promise in many areas in which traditional closed-loop control falls short.
In this work, we propose a MARL-based method that achieves intent-based management without requiring a model of the underlying system.
- Score: 1.8352113484137629
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, intent-based management has received considerable attention in
telecom networks owing to the stringent performance requirements of many use cases.
Several approaches in the literature employ traditional telecom-domain methods to
fulfill intents on key performance indicators (KPIs), each of which can be framed as
a closed loop. However, these methods treat every closed loop independently of the
others, which degrades the combined closed-loop performance; they also do not scale
easily when many closed loops are needed. Multi-agent reinforcement learning (MARL)
techniques have shown significant promise in many areas where traditional
closed-loop control falls short, typically those requiring complex coordination and
conflict management among loops. In this work, we propose a MARL-based method that
achieves intent-based management without requiring a model of the underlying system.
Moreover, when intents conflict, the MARL agents can implicitly incentivize the
loops to cooperate, without human intervention, by prioritizing the most important
KPIs. Experiments on a network emulator, optimizing KPIs for three services, show
that the proposed system performs well: it fulfills all intents when resources are
sufficient and prioritizes the important KPIs when resources are scarce.
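The paper does not include code here, but the core idea described in the abstract (one RL agent per service-level closed loop, trained against a shared reward that weights each intent by its priority so that important KPIs win under scarcity) can be illustrated with a small, hypothetical sketch. Everything below, the budget, demands, targets, priorities, and the linear KPI model, is invented for illustration and is not taken from the paper; the agents are plain independent stateless Q-learners rather than the MARL algorithm used in the experiments.

```python
import numpy as np

# Toy setup: three services share a fixed resource budget. Each agent controls
# the resource request of one service; an intent is "KPI >= target". Priorities
# in the shared reward make agents implicitly favour the most important KPIs
# when the budget is too small for all intents.
rng = np.random.default_rng(0)
N_AGENTS = 3                          # one closed loop / agent per service (assumption)
BUDGET = 10.0                         # total resource units available (assumption)
DEMAND = np.array([6.0, 5.0, 4.0])    # resource needed for KPI = 1.0 (assumption)
TARGET = np.array([0.9, 0.9, 0.9])    # intent: KPI target per service (assumption)
PRIORITY = np.array([3.0, 2.0, 1.0])  # relative importance of each intent (assumption)
ACTIONS = np.linspace(0.0, 6.0, 7)    # discrete resource requests per agent

def step(requests):
    """Scale requests to the budget, compute KPIs and the shared reward."""
    total = requests.sum()
    granted = requests if total <= BUDGET else requests * BUDGET / total
    kpi = np.minimum(1.0, granted / DEMAND)
    fulfilled = kpi >= TARGET
    # Priority-weighted team reward: fulfilling important intents pays more,
    # missing them costs more.
    reward = float((PRIORITY * np.where(fulfilled, 1.0, kpi - 1.0)).sum())
    return kpi, fulfilled, reward

# Independent (stateless) Q-learning, one Q-table per agent, shared reward.
Q = np.zeros((N_AGENTS, len(ACTIONS)))
alpha, eps = 0.1, 0.2
for _ in range(5000):
    acts = np.array([
        rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[i]))
        for i in range(N_AGENTS)
    ])
    _, _, reward = step(ACTIONS[acts])
    Q[np.arange(N_AGENTS), acts] += alpha * (reward - Q[np.arange(N_AGENTS), acts])

greedy = ACTIONS[np.argmax(Q, axis=1)]
kpi, fulfilled, _ = step(greedy)
print("learned requests:", greedy, "KPIs:", kpi.round(2), "fulfilled:", fulfilled)
```

With these made-up numbers the combined demand at the 0.9 targets (13.5 units) exceeds the 10-unit budget, so the agents tend to learn requests that satisfy the two higher-priority intents and sacrifice the third; raising BUDGET above the combined demand lets them fulfill every intent, mirroring the behaviour the abstract describes.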
Related papers
- What If We Had Used a Different App? Reliable Counterfactual KPI Analysis in Wireless Systems [52.499838151272016]
This paper addresses the problem of estimating the values of traffic that would have been obtained if a different app had been implemented by the RAN.
We propose a conformal-prediction-based counterfactual analysis method for wireless systems.
arXiv Detail & Related papers (2024-09-30T18:47:26Z) - Causality-Driven Reinforcement Learning for Joint Communication and Sensing [4.165335263540595]
We propose a causally-aware RL agent which can intervene and discover causal relationships for mMIMO-based JCAS environments.
We use a state dependent action dimension selection strategy to realize causal discovery for RL-based JCAS.
arXiv Detail & Related papers (2024-09-07T07:15:57Z) - Design Optimization of NOMA Aided Multi-STAR-RIS for Indoor Environments: A Convex Approximation Imitated Reinforcement Learning Approach [51.63921041249406]
Non-orthogonal multiple access (NOMA) enables multiple users to share the same frequency band, and simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) extend coverage to both sides of the surface; however, deploying STAR-RISs indoors presents challenges in interference mitigation, power consumption, and real-time configuration.
A novel network architecture utilizing multiple access points (APs), STAR-RISs, and NOMA is proposed for indoor communication.
arXiv Detail & Related papers (2024-06-19T07:17:04Z) - Attention-based Open RAN Slice Management using Deep Reinforcement
Learning [6.177038245239758]
This paper introduces an innovative attention-based deep RL (ADRL) technique that leverages the O-RAN disaggregated modules and distributed agent cooperation.
Simulation results demonstrate significant improvements in network performance compared to other DRL baseline methods.
arXiv Detail & Related papers (2023-06-15T20:37:19Z) - Distributed-Training-and-Execution Multi-Agent Reinforcement Learning
for Power Control in HetNet [48.96004919910818]
We propose a multi-agent deep reinforcement learning (MADRL) based power control scheme for the HetNet.
To promote cooperation among agents, we develop a penalty-based Q-learning (PQL) algorithm for MADRL systems.
In this way, an agent's policy can be learned by other agents more easily, resulting in a more efficient collaboration process (a generic, hypothetical sketch of penalty-shaped rewards for cooperative power control appears after this list).
arXiv Detail & Related papers (2022-12-15T17:01:56Z) - CLARA: A Constrained Reinforcement Learning Based Resource Allocation
Framework for Network Slicing [19.990451009223573]
Network slicing is proposed as a promising solution for resource utilization in 5G and future networks.
We formulate the problem as a Constrained Markov Decision Process (CMDP) without requiring knowledge of the underlying models and hidden structures.
We propose to solve the problem using CLARA, a Constrained reinforcement LeArning based Resource Allocation algorithm.
arXiv Detail & Related papers (2021-11-16T11:54:09Z) - Deep Reinforcement Learning for Joint Spectrum and Power Allocation in
Cellular Networks [9.339885875216387]
Two separate deep reinforcement learning algorithms are designed to be executed and trained simultaneously to maximize a joint objective.
Results show that the proposed scheme outperforms both the state-of-the-art fractional programming algorithm and a previous solution based on deep reinforcement learning.
arXiv Detail & Related papers (2020-12-19T13:14:44Z) - Toward Multiple Federated Learning Services Resource Sharing in Mobile
Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z) - Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion-based MAML or Dif-MAML.
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z) - F2A2: Flexible Fully-decentralized Approximate Actor-critic for
Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
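Several of the entries above (the PQL power-control scheme, F2A2, Dif-MAML) revolve around getting independently learning agents to cooperate. As a generic, hypothetical illustration of one common device, shaping each agent's reward with a penalty for the harm it causes its neighbours, the sketch below trains two stateless Q-learners on a toy two-link interference channel. This is not the PQL algorithm of the cited paper; the power levels, channel gains, noise, and penalty weight are invented for illustration.

```python
import numpy as np

# Toy two-link interference channel: each agent picks a transmit power level.
# The shaped reward is the agent's own spectral efficiency minus a penalty
# proportional to the rate loss it inflicts on the other link, nudging
# independently learning agents towards cooperative power control.
rng = np.random.default_rng(1)
POWERS = np.array([0.0, 0.5, 1.0])        # candidate transmit powers (assumption)
G = np.array([[1.0, 0.3],                 # G[i, j]: gain from transmitter j
              [0.25, 1.0]])               # to receiver i (assumption)
NOISE, LAMBDA = 0.1, 1.0                  # noise power, penalty weight (assumptions)

def rates(p):
    """Per-link spectral efficiency for the power vector p."""
    signal = np.diag(G) * p
    interference = G @ p - signal
    return np.log2(1.0 + signal / (NOISE + interference))

def shaped_reward(i, p):
    """Own rate of link i minus the rate loss it causes the other link."""
    r = rates(p)
    p_off = p.copy()
    p_off[i] = 0.0                        # counterfactual: agent i stays silent
    harm = rates(p_off)[1 - i] - r[1 - i]
    return r[i] - LAMBDA * harm

# Independent stateless Q-learning over the discrete power levels.
Q = np.zeros((2, len(POWERS)))
alpha, eps = 0.1, 0.2
for _ in range(3000):
    acts = [rng.integers(len(POWERS)) if rng.random() < eps else int(np.argmax(Q[i]))
            for i in range(2)]
    p = POWERS[np.array(acts)]
    for i in range(2):
        Q[i, acts[i]] += alpha * (shaped_reward(i, p) - Q[i, acts[i]])

print("greedy power levels:", POWERS[np.argmax(Q, axis=1)])
```

Setting LAMBDA to zero recovers purely selfish learners; increasing LAMBDA or the cross-gains in G makes backing off more attractive, which is the cooperative effect these penalty-style schemes aim for.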
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.