Reinforcement Learning for Dynamic Resource Optimization in 5G Radio
Access Network Slicing
- URL: http://arxiv.org/abs/2009.06579v1
- Date: Mon, 14 Sep 2020 17:10:17 GMT
- Title: Reinforcement Learning for Dynamic Resource Optimization in 5G Radio
Access Network Slicing
- Authors: Yi Shi, Yalin E. Sagduyu, Tugba Erpek
- Abstract summary: The paper presents a reinforcement learning solution to dynamic resource allocation for 5G radio access network slicing.
Results show that reinforcement learning provides major improvements in the 5G network utility relative to myopic, random, and first come first served solutions.
- Score: 3.509171590450989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper presents a reinforcement learning solution to dynamic resource
allocation for 5G radio access network slicing. Available communication
resources (frequency-time blocks and transmit powers) and computational
resources (processor usage) are allocated to stochastic arrivals of network
slice requests. Each request arrives with priority (weight), throughput,
computational resource, and latency (deadline) requirements, and if feasible,
it is served with available communication and computational resources allocated
over its requested duration. As each decision of resource allocation makes some
of the resources temporarily unavailable for future requests, a myopic
solution that optimizes only the current resource allocation becomes
ineffective for
network slicing. Therefore, a Q-learning solution is presented to maximize the
network utility in terms of the total weight of granted network slicing
requests over a time horizon subject to communication and computational
constraints. Results show that reinforcement learning provides major
improvements in the 5G network utility relative to myopic, random, and first
come first served solutions. While reinforcement learning sustains scalable
performance as the number of served users increases, it can also be effectively
used to assign resources to network slices when 5G needs to share the spectrum
with incumbent users that may dynamically occupy some of the frequency-time
blocks.
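
As a concrete illustration of the approach described above, here is a minimal, hypothetical sketch of tabular Q-learning for slice admission control: requests arrive with a weight (priority), a resource demand, and a duration; the agent accepts or rejects each one, and the reward is the weight of a granted request. All model parameters (capacity, request distributions, learning rate, discount factor) are illustrative assumptions, not values from the paper.

```python
import random
from collections import defaultdict

# Hypothetical, simplified model of the paper's setting: slice requests
# arrive with a weight (priority), a resource demand, and a duration.
# Accepted requests hold resources until they expire, so accepting now
# can block higher-weight requests later -- the effect that makes the
# myopic baseline ineffective. All constants below are assumptions.
CAPACITY = 10          # total resource blocks (assumed)
EPISODES = 2000
HORIZON = 50           # admission decisions per episode
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> value

def new_request():
    # (weight, demand in blocks, duration in steps), uniformly sampled
    return (random.randint(1, 5), random.randint(1, 4), random.randint(1, 5))

def run_episode(learn=True):
    active, total = [], 0.0    # active = [(demand, steps remaining)]
    req = new_request()
    for _ in range(HORIZON):
        # release resources held by expired requests
        active = [(d, left - 1) for d, left in active if left > 1]
        free = CAPACITY - sum(d for d, _ in active)
        state = (free, req[0], req[1])
        # epsilon-greedy over actions {0: reject, 1: accept}
        if learn and random.random() < EPS:
            action = random.randint(0, 1)
        else:
            action = max((0, 1), key=lambda a: Q[(state, a)])
        weight, demand, dur = req
        if action == 1 and demand <= free:
            active.append((demand, dur))
            reward = weight        # utility = weight of granted request
        else:
            reward = 0.0           # rejected or infeasible
        total += reward
        req = new_request()
        next_free = CAPACITY - sum(d for d, _ in active)
        next_state = (next_free, req[0], req[1])
        if learn:                  # standard tabular Q-learning update
            best_next = max(Q[(next_state, a)] for a in (0, 1))
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                           - Q[(state, action)])
    return total

for _ in range(EPISODES):
    run_episode()
print("greedy-policy utility per episode:", run_episode(learn=False))
```

Under this toy model, the learned policy tends to reject low-weight, resource-hungry requests when capacity is scarce, which illustrates the look-ahead behavior the abstract contrasts with the myopic baseline.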
Related papers
- Adaptive Digital Twin and Communication-Efficient Federated Learning Network Slicing for 5G-enabled Internet of Things [8.11509914300497]
Network slicing enables industrial Internet of Things (IIoT) networks with multiservice and differentiated resource requirements to meet increasing demands through efficient use and management of network resources.
The new generation of Industry 4.0 has introduced digital twins to map physical systems to digital models for accurate decision-making.
In our approach, we first use graph-attention networks to build a digital twin environment for network slices, enabling real-time traffic analysis, monitoring, and demand forecasting.
arXiv Detail & Related papers (2024-06-22T15:33:35Z) - ORIENT: A Priority-Aware Energy-Efficient Approach for Latency-Sensitive
Applications in 6G [15.753216159980434]
Growing energy consumption in computing and networking, together with the expected surge in connected devices and resource-demanding applications, presents unprecedented challenges for energy resources.
We investigate the joint problem of service instance placement and assignment, path selection, and request prioritization, dubbed PIRA.
arXiv Detail & Related papers (2024-02-10T12:05:52Z) - Generative AI-enabled Quantum Computing Networks and Intelligent
Resource Allocation [80.78352800340032]
Quantum computing networks execute large-scale generative AI computation tasks and advanced quantum algorithms.
However, efficient resource allocation in quantum computing networks is a critical challenge due to qubit variability and network complexity.
We introduce state-of-the-art reinforcement learning (RL) algorithms, from generative learning to quantum machine learning for optimal quantum resource allocation.
arXiv Detail & Related papers (2024-01-13T17:16:38Z) - CLARA: A Constrained Reinforcement Learning Based Resource Allocation
Framework for Network Slicing [19.990451009223573]
Network slicing is proposed as a promising solution for resource utilization in 5G and future networks.
We formulate the problem as a Constrained Markov Decision Process (CMDP) without knowing models and hidden structures.
We propose to solve the problem using CLARA, a Constrained reinforcement LeArning based Resource Allocation algorithm.
arXiv Detail & Related papers (2021-11-16T11:54:09Z) - Federated Learning over Wireless IoT Networks with Optimized
Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z) - Deep Reinforcement Learning Based Multidimensional Resource Management
for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
Combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z) - Edge Intelligence for Energy-efficient Computation Offloading and
Resource Allocation in 5G Beyond [7.953533529450216]
5G beyond is an end-edge-cloud orchestrated network that can exploit heterogeneous capabilities of the end devices, edge servers, and the cloud.
In multi-user wireless networks, diverse application requirements and the possibility of various radio access modes for communication among devices make it challenging to design an optimal computation offloading scheme.
Deep Reinforcement Learning (DRL) is an emerging technique to address such an issue with limited and less accurate network information.
arXiv Detail & Related papers (2020-11-17T05:51:03Z) - When Deep Reinforcement Learning Meets Federated Learning: Intelligent
Multi-Timescale Resource Management for Multi-access Edge Computing in 5G
Ultra Dense Network [31.274279003934268]
We first propose an intelligent ultra-dense edge computing (I-UDEC) framework, which integrates blockchain and AI into 5G edge computing networks.
In order to achieve real-time and low-overhead computation offloading decisions and resource allocation strategies, we design a novel two-timescale deep reinforcement learning (2Ts-DRL) approach.
Our proposed algorithm can reduce task execution time by up to 31.87%.
arXiv Detail & Related papers (2020-09-22T15:08:00Z) - Caching Placement and Resource Allocation for Cache-Enabling UAV NOMA
Networks [87.6031308969681]
This article investigates cache-enabling unmanned aerial vehicle (UAV) cellular networks with massive access capability supported by non-orthogonal multiple access (NOMA).
We formulate the long-term caching placement and resource allocation optimization problem for content delivery delay minimization as a Markov decision process (MDP).
We propose a Q-learning based caching placement and resource allocation algorithm, where the UAV learns and selects actions with a soft $\varepsilon$-greedy strategy to search for the optimal match between actions and states (see the sketch after this list).
arXiv Detail & Related papers (2020-08-12T08:33:51Z) - A Machine Learning Approach for Task and Resource Allocation in Mobile
Edge Computing Based Networks [108.57859531628264]
A joint task, spectrum, and transmit power allocation problem is investigated for a wireless network.
The proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1% compared to the standard Q-learning algorithm.
arXiv Detail & Related papers (2020-07-20T13:46:42Z) - Deep Learning for Radio Resource Allocation with Diverse
Quality-of-Service Requirements in 5G [53.23237216769839]
We develop a deep learning framework to approximate the optimal resource allocation policy for base stations.
We find that a fully-connected neural network (NN) cannot fully guarantee the requirements due to the approximation errors and quantization errors of the numbers of subcarriers.
Considering that the distribution of wireless channels and the types of services in the wireless networks are non-stationary, we apply deep transfer learning to update NNs in non-stationary wireless networks.
arXiv Detail & Related papers (2020-03-29T04:48:22Z)
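
The UAV caching entry above mentions a soft $\varepsilon$-greedy action-selection rule. One plausible reading, sketched below, explores by sampling from a softmax over Q-values instead of uniformly at random; the temperature parameter and this exact formulation are assumptions, as the listing does not define the strategy.

```python
import math
import random

def soft_epsilon_greedy(q_values, epsilon=0.1, temperature=1.0):
    """Soft epsilon-greedy: exploit with probability 1 - epsilon;
    otherwise explore by sampling from a softmax over Q-values,
    which favors near-optimal actions over uniform random picks.
    (Hypothetical reading; the cited paper may define it differently.)"""
    if random.random() >= epsilon:
        return max(range(len(q_values)), key=lambda a: q_values[a])
    weights = [math.exp(q / temperature) for q in q_values]
    return random.choices(range(len(q_values)), weights=weights)[0]

# Example: action 1 is chosen most often; exploration still
# occasionally picks actions 0 and 2, weighted by their Q-values.
print(soft_epsilon_greedy([0.2, 1.5, 0.7]))
```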
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.