Learning based E2E Energy Efficient in Joint Radio and NFV Resource
Allocation for 5G and Beyond Networks
- URL: http://arxiv.org/abs/2107.05991v1
- Date: Tue, 13 Jul 2021 11:19:48 GMT
- Title: Learning based E2E Energy Efficient in Joint Radio and NFV Resource
Allocation for 5G and Beyond Networks
- Authors: Narges Gholipoor, Ali Nouruzi, Shima Salarhosseini, Mohammad Reza
Javan, Nader Mokari, and Eduard A. Jorswieck
- Abstract summary: We formulate an optimization problem in which power and spectrum resources are allocated in the radio part.
In the core part, the chaining, placement, and scheduling of functions are performed to ensure the QoS of all users.
A soft actor-critic deep reinforcement learning (SAC-DRL) algorithm based on the maximum entropy framework is then used to solve this MDP.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a joint radio and core resource allocation
framework for NFV-enabled networks. In the proposed system model, the goal is
to maximize energy efficiency (EE), by guaranteeing end-to-end (E2E) quality of
service (QoS) for different service types. To this end, we formulate an
optimization problem in which power and spectrum resources are allocated in the
radio part. In the core part, the chaining, placement, and scheduling of
functions are performed to ensure the QoS of all users. This joint optimization
problem is modeled as a Markov decision process (MDP), considering time-varying
characteristics of the available resources and wireless channels. A soft
actor-critic deep reinforcement learning (SAC-DRL) algorithm based on the
maximum entropy framework is subsequently utilized to solve the above MDP.
Numerical results reveal that the proposed joint approach based on the SAC-DRL
algorithm could significantly reduce energy consumption compared to the case in
which R-RA and NFV-RA problems are optimized separately.
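The shape of the MDP objective can be illustrated with a minimal sketch. The reward below is one plausible reading of the paper's goal: energy efficiency (total achievable rate divided by total consumed power) minus a penalty for each user whose minimum-rate QoS is violated. The function names, the orthogonal-subchannel rate model, the penalty form, and all numeric values are illustrative assumptions, not the paper's exact formulation.

```python
import math

def ee_reward(gains, powers, bandwidth_hz, noise_w, min_rates, penalty=1e7):
    """Energy-efficiency reward (bit/Joule) minus QoS-violation penalties.

    Assumes each user gets an interference-free subchannel; the penalty
    weight is a hypothetical tuning parameter, not from the paper.
    """
    total_rate = 0.0
    violations = 0
    for g, p, r_min in zip(gains, powers, min_rates):
        # Shannon rate on an orthogonal subchannel.
        rate = bandwidth_hz * math.log2(1.0 + g * p / noise_w)
        total_rate += rate
        if rate < r_min:
            violations += 1
    return total_rate / sum(powers) - penalty * violations

# Example: two users, 1-MHz subchannels, both QoS targets satisfied.
r = ee_reward(gains=[1e-6, 5e-7], powers=[0.5, 1.0],
              bandwidth_hz=1e6, noise_w=1e-9, min_rates=[1e6, 1e6])
```

A DRL agent such as SAC would pick the `powers` (and, in the full problem, spectrum and NFV placement decisions) as its action and receive this kind of scalar as feedback.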
Related papers
- Multiagent Reinforcement Learning with an Attention Mechanism for
Improving Energy Efficiency in LoRa Networks [52.96907334080273]
As the network scale increases, the energy efficiency of LoRa networks decreases sharply due to severe packet collisions.
We propose a transmission parameter allocation algorithm based on multiagent reinforcement learning (MALoRa).
Simulation results demonstrate that MALoRa significantly improves the system EE compared with baseline algorithms.
arXiv Detail & Related papers (2023-09-16T11:37:23Z)
- A State-Augmented Approach for Learning Optimal Resource Management
Decisions in Wireless Networks [58.720142291102135]
We consider a radio resource management (RRM) problem in a multi-user wireless network.
The goal is to optimize a network-wide utility function subject to constraints on the ergodic average performance of users.
We propose a state-augmented parameterization for the RRM policy, where alongside the instantaneous network states, the RRM policy takes as input the set of dual variables corresponding to the constraints.
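The state-augmentation idea described above can be sketched in a few lines. In this illustrative reading (names and step size are assumptions, not the paper's code), each constraint carries a dual variable that is updated by projected gradient ascent on its average violation, and the policy's input is the instantaneous network state concatenated with the current duals.

```python
def update_duals(duals, avg_perf, targets, step=0.1):
    """One projected dual-ascent step: lam <- max(0, lam + step*(target - avg)).

    The dual grows while the ergodic performance falls short of its target
    and shrinks toward zero once the constraint is satisfied.
    """
    return [max(0.0, lam + step * (tgt - perf))
            for lam, perf, tgt in zip(duals, avg_perf, targets)]

def augmented_state(net_state, duals):
    """Policy input: instantaneous network state concatenated with duals."""
    return list(net_state) + list(duals)
```

The appeal of this parameterization is that one trained policy adapts its behavior as the duals drift, instead of needing retraining for every constraint configuration.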
arXiv Detail & Related papers (2022-10-28T21:24:13Z)
- Federated Learning for Energy-limited Wireless Networks: A Partial Model
Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks for federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA).
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets.
arXiv Detail & Related papers (2022-04-20T19:09:52Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management
for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
Combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Optimal Power Allocation for Rate Splitting Communications with Deep
Reinforcement Learning [61.91604046990993]
This letter introduces a novel framework to optimize the power allocation for users in a Rate Splitting Multiple Access network.
In the network, messages intended for users are split into a single common part and respective private parts.
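The decoding order behind rate splitting can be made concrete with a small sketch. Under standard (assumed, not taken from the letter) rate-splitting decoding, every receiver first decodes the common part while treating all private parts as noise, cancels it, then decodes its own private part; the common rate is limited by the weakest receiver. The channel model and parameter names below are illustrative.

```python
import math

def rsma_rates(gains, p_common, p_private, noise_w):
    """Achievable rates (bit/s/Hz) under rate splitting with SIC."""
    # Common rate: min over users; private signals act as interference.
    common = min(
        math.log2(1.0 + g * p_common / (noise_w + g * sum(p_private)))
        for g in gains
    )
    # Private rates after cancelling the common part; other users'
    # private parts remain as interference at each receiver.
    private = []
    for i, g in enumerate(gains):
        interf = g * (sum(p_private) - p_private[i])
        private.append(math.log2(1.0 + g * p_private[i] / (noise_w + interf)))
    return common, private
```

A power-allocation policy (learned with DRL in the letter) would choose `p_common` and `p_private` to trade the shared common rate against the users' individual private rates.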
arXiv Detail & Related papers (2021-07-01T06:32:49Z)
- Energy Efficient Edge Computing: When Lyapunov Meets Distributed
Reinforcement Learning [12.845204986571053]
In this work, we study the problem of energy-efficient offloading enabled by edge computing.
In the considered scenario, multiple users simultaneously compete for radio and edge computing resources.
The proposed solution also increases the network's energy efficiency compared to a benchmark approach.
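The Lyapunov machinery such offloading schemes typically rest on can be sketched briefly. In this hedged reading (notation and the `V` trade-off parameter are generic Lyapunov-optimization conventions, not this paper's exact design), each user keeps a task queue, and per-slot actions minimize a drift-plus-penalty term that weighs energy cost against backlog relief.

```python
def queue_step(q, served, arrived):
    """Standard Lyapunov queue dynamics: Q(t+1) = max(Q(t) - served, 0) + arrived."""
    return max(q - served, 0.0) + arrived

def drift_plus_penalty(q, energy, served, v_param):
    """Per-slot objective: V weights energy cost against queue-backlog relief.

    Larger V favors energy savings; smaller V keeps queues (and hence
    delay) small.
    """
    return v_param * energy - q * served
```

Each user picking the offloading action that minimizes this expression is what lets the scheme run distributedly without a central scheduler.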
arXiv Detail & Related papers (2021-03-31T11:02:29Z)
- Resource Allocation via Model-Free Deep Learning in Free Space Optical
Communications [119.81868223344173]
The paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.
Under this framework, we propose two algorithms that solve FSO resource allocation problems.
arXiv Detail & Related papers (2020-07-27T17:38:51Z)
- Deep Reinforcement Learning for QoS-Constrained Resource Allocation in
Multiservice Networks [0.3324986723090368]
This article focuses on a non-convex optimization problem whose main aim is to maximize the spectral efficiency subject to satisfaction guarantees in multiservice wireless systems.
We propose a solution based on a Reinforcement Learning (RL) framework, where each agent makes its decisions to find a policy by interacting with the local environment.
We show near-optimal performance of the proposed solution in terms of throughput and outage rate.
arXiv Detail & Related papers (2020-03-03T19:32:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.