Reinforcement learning for Admission Control in 5G Wireless Networks
- URL: http://arxiv.org/abs/2104.10761v1
- Date: Tue, 13 Apr 2021 06:37:18 GMT
- Title: Reinforcement learning for Admission Control in 5G Wireless Networks
- Authors: Youri Raaijmakers and Silvio Mandelli and Mark Doll
- Abstract summary: The key challenge in admission control in wireless networks is to strike an optimal trade-off between the blocking probability for new requests and the dropping probability of ongoing requests.
We consider two approaches for solving the admission control problem: i) the typically adopted threshold policy and ii) our proposed policy relying on reinforcement learning with neural networks.
- Score: 3.2345600015792564
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The key challenge in admission control in wireless networks is to
strike an optimal trade-off between the blocking probability for new requests
and the dropping probability of ongoing requests. We consider two
approaches for solving the admission control problem: i) the typically adopted
threshold policy and ii) our proposed policy relying on reinforcement learning
with neural networks. Extensive simulation experiments are conducted to analyze
the performance of both policies. The results show that the reinforcement
learning policy outperforms the threshold-based policies in the scenario with
heterogeneous time-varying arrival rates and multiple user equipment types,
proving its applicability in realistic wireless network scenarios.
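The threshold policy compared in the abstract is commonly realized as a guard-channel rule: ongoing (handover) requests may occupy any free channel, while new requests are admitted only when occupancy is below a threshold strictly smaller than capacity. A minimal event-driven sketch of that rule is below; it is illustrative only (not the authors' simulator), and all parameter values and function names are assumptions.

```python
import random

def simulate_guard_channel(capacity=10, threshold=8,
                           lam_new=5.0, lam_handover=2.0,
                           mu=1.0, horizon=50_000, seed=0):
    """Event-driven simulation of a guard-channel admission policy.

    New requests are admitted only while occupancy < threshold;
    handover (ongoing) requests are admitted while occupancy < capacity.
    Returns (blocking probability of new requests,
             dropping probability of handover requests).
    """
    rng = random.Random(seed)
    t = 0.0
    departures = []                    # pending service-completion times
    blocked = dropped = new_tot = ho_tot = 0
    while t < horizon:
        rate = lam_new + lam_handover
        t += rng.expovariate(rate)     # next arrival (new or handover)
        # Drop completions that happened before this arrival.
        departures = [d for d in departures if d > t]
        busy = len(departures)
        if rng.random() < lam_new / rate:   # arrival is a new request
            new_tot += 1
            if busy < threshold:
                departures.append(t + rng.expovariate(mu))
            else:
                blocked += 1
        else:                               # arrival is a handover request
            ho_tot += 1
            if busy < capacity:
                departures.append(t + rng.expovariate(mu))
            else:
                dropped += 1
    return blocked / max(new_tot, 1), dropped / max(ho_tot, 1)
```

Because the threshold reserves `capacity - threshold` channels for ongoing requests, the dropping probability comes out below the blocking probability, which is exactly the trade-off the abstract describes.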
Related papers
- Differentiable Discrete Event Simulation for Queuing Network Control [7.965453961211742]
Queueing network control poses distinct challenges, including high stochasticity, large state and action spaces, and lack of stability.
We propose a scalable framework for policy optimization based on differentiable discrete event simulation.
Our methods can flexibly handle realistic scenarios, including systems operating in non-stationary environments.
arXiv Detail & Related papers (2024-09-05T17:53:54Z)
- Intervention-Assisted Policy Gradient Methods for Online Stochastic Queuing Network Optimization: Technical Report [1.4201040196058878]
This work proposes Online Deep Reinforcement Learning-based Controls (ODRLC) as an alternative to traditional Deep Reinforcement Learning (DRL) methods.
ODRLC uses online interactions to learn optimal control policies for stochastic queueing networks (SQNs).
We introduce a method to design these intervention-assisted policies to ensure strong stability of the network.
arXiv Detail & Related papers (2024-04-05T14:02:04Z)
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
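The age-of-information objective in the entry above has a standard definition: at time t, the AoI is t minus the generation time of the freshest update received so far, so it grows linearly between receptions. A small sketch of the time-average AoI for a sequence of updates (my own notation, not the paper's code):

```python
def time_average_aoi(updates, horizon):
    """Time-average Age of Information over [first reception, horizon].

    updates: time-ordered list of (generation_time, reception_time) pairs.
    Between receptions the AoI grows linearly, so the integral is a sum
    of trapezoid areas: from (r - g) at reception r up to (end - g) at
    the next reception (or the horizon).
    """
    area = 0.0
    for i, (g, r) in enumerate(updates):
        end = updates[i + 1][1] if i + 1 < len(updates) else horizon
        area += ((r - g) + (end - g)) / 2.0 * (end - r)
    return area / (horizon - updates[0][1])
```

For example, two updates generated at t=0 and t=2 and received at t=1 and t=3 give a time-average AoI of 5.5/3 over the window ending at t=4.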
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
- Closed-form congestion control via deep symbolic regression [1.5961908901525192]
Reinforcement Learning (RL) algorithms can handle challenges in ultra-low-latency and high throughput scenarios.
The adoption of neural network models in real deployments still poses some challenges regarding real-time inference and interpretability.
This paper proposes a methodology to deal with such challenges while maintaining the performance and generalization capabilities.
arXiv Detail & Related papers (2024-03-28T14:31:37Z)
- Probabilistic Reach-Avoid for Bayesian Neural Networks [71.67052234622781]
We show that an optimal synthesis algorithm can provide more than a four-fold increase in the number of certifiable states.
The algorithm is able to provide more than a three-fold increase in the average guaranteed reach-avoid probability.
arXiv Detail & Related papers (2023-10-03T10:52:21Z)
- Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning [53.97273491846883]
We propose DPE: an RL algorithm that blends offline sequence modeling and offline reinforcement learning with Double Policy Estimation.
We validate our method in multiple tasks of OpenAI Gym with D4RL benchmarks.
arXiv Detail & Related papers (2023-08-28T20:46:07Z)
- A State-Augmented Approach for Learning Optimal Resource Management Decisions in Wireless Networks [58.720142291102135]
We consider a radio resource management (RRM) problem in a multi-user wireless network.
The goal is to optimize a network-wide utility function subject to constraints on the ergodic average performance of users.
We propose a state-augmented parameterization for the RRM policy, where alongside the instantaneous network states, the RRM policy takes as input the set of dual variables corresponding to the constraints.
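The dual variables fed to the state-augmented RRM policy are typically driven by a dual-ascent recursion: each constraint's multiplier grows while its ergodic-average constraint is violated and decays toward zero otherwise. A one-line sketch of that update under standard primal-dual assumptions (names and step size are my own, not the paper's):

```python
def dual_ascent_step(lmbda, ergodic_rate, c_min, eta=0.1):
    """One projected dual-variable update for a per-user minimum-rate
    constraint (ergodic_rate >= c_min).

    The multiplier increases by eta * (c_min - ergodic_rate) while the
    constraint is violated, shrinks when it is satisfied, and is
    projected onto [0, inf) to stay dual-feasible.
    """
    return max(0.0, lmbda + eta * (c_min - ergodic_rate))
```

The state-augmented policy then conditions on the current multipliers, so it can trade off the network-wide utility against whichever per-user constraints are currently slack or violated.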
arXiv Detail & Related papers (2022-10-28T21:24:13Z)
- Mitigating Off-Policy Bias in Actor-Critic Methods with One-Step Q-learning: A Novel Correction Approach [0.0]
We introduce a novel policy similarity measure to mitigate the effects of such discrepancy in continuous control.
Our method offers an adequate single-step off-policy correction that is applicable to deterministic policy networks.
arXiv Detail & Related papers (2022-08-01T11:33:12Z)
- Learning Resilient Radio Resource Management Policies with Graph Neural Networks [124.89036526192268]
We formulate a resilient radio resource management problem with per-user minimum-capacity constraints.
We show that we can parameterize the user selection and power control policies using a finite set of parameters.
Thanks to such adaptation, our proposed method achieves a superior tradeoff between the average rate and the 5th percentile rate.
arXiv Detail & Related papers (2022-03-07T19:40:39Z)
- Offline Contextual Bandits for Wireless Network Optimization [107.24086150482843]
In this paper, we investigate how to learn policies that can automatically adjust the configuration parameters of every cell in the network in response to the changes in the user demand.
Our solution combines existent methods for offline learning and adapts them in a principled way to overcome crucial challenges arising in this context.
arXiv Detail & Related papers (2021-11-11T11:31:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.