Deep-Reinforcement-Learning-Based Scheduling with Contiguous Resource Allocation for Next-Generation Cellular Systems
 - URL: http://arxiv.org/abs/2010.11269v2
 - Date: Thu, 26 Nov 2020 23:23:22 GMT
 - Title: Deep-Reinforcement-Learning-Based Scheduling with Contiguous Resource Allocation for Next-Generation Cellular Systems
 - Authors: Shu Sun, Xiaofeng Li
 - Abstract summary: We propose a novel scheduling algorithm with contiguous frequency-domain resource allocation (FDRA) based on deep reinforcement learning (DRL).
The proposed DRL-based scheduling algorithm outperforms other representative baseline schemes while having lower online computational complexity.
 - Score: 4.227387975627387
 - License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Abstract:   Scheduling plays a pivotal role in multi-user wireless communications, since
the quality of service of various users largely depends upon the allocated
radio resources. In this paper, we propose a novel scheduling algorithm with
contiguous frequency-domain resource allocation (FDRA) based on deep
reinforcement learning (DRL) that jointly selects users and allocates resource
blocks (RBs). The scheduling problem is modeled as a Markov decision process,
and a DRL agent determines which user and how many consecutive RBs for that
user should be scheduled at each RB allocation step. The state space, action
space, and reward function are carefully designed to train the DRL network.
More specifically, the quasi-continuous action space inherent to contiguous
FDRA is refined into a finite, discrete action space to strike a trade-off
between inference latency and system performance.
Simulation results show that the proposed DRL-based scheduling algorithm
outperforms other representative baseline schemes while having lower online
computational complexity.
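
To make the action-space design concrete, here is a minimal sketch, in Python, of how a contiguous-FDRA action space can be discretized and applied one RB-allocation step at a time. It is not the authors' implementation: the number of users, the RB grid size, the RB-count choices, and the greedy left-to-right placement are illustrative assumptions, and the paper's state and reward design is omitted.

```python
from itertools import product

# Hypothetical parameters -- not taken from the paper.
NUM_USERS = 4                      # candidate users in one scheduling interval
NUM_RBS = 25                       # frequency-domain resource blocks
RB_COUNT_CHOICES = (1, 2, 4, 8)    # assumed discretization of "how many contiguous RBs"

# The quasi-continuous choice of contiguous-RB length is refined into a finite,
# discrete action set: one action per (user, RB-count) pair.
ACTIONS = list(product(range(NUM_USERS), RB_COUNT_CHOICES))

def step(next_free_rb: int, action_id: int):
    """One RB-allocation step: give `count` contiguous RBs, starting at the
    first unallocated RB, to the chosen user."""
    user, count = ACTIONS[action_id]
    count = min(count, NUM_RBS - next_free_rb)   # clip at the band edge
    allocation = (user, next_free_rb, count)     # (user, start RB, length)
    next_free_rb += count
    done = next_free_rb >= NUM_RBS
    return allocation, next_free_rb, done

# Example rollout with random actions until every RB is assigned; a trained
# DRL agent would pick action_id from its policy instead.
if __name__ == "__main__":
    import random
    rb, schedule = 0, []
    while True:
        allocation, rb, done = step(rb, random.randrange(len(ACTIONS)))
        schedule.append(allocation)
        if done:
            break
    print(schedule)
```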
 
       
      
Related papers
- StreamRL: Scalable, Heterogeneous, and Elastic RL for LLMs with Disaggregated Stream Generation [55.75008325187133]
Reinforcement learning (RL) has become the core post-training technique for large language models (LLMs).
StreamRL is designed with disaggregation from first principles to address two types of performance bottlenecks.
 Experiments show that StreamRL improves throughput by up to 2.66x compared to existing state-of-the-art systems.
arXiv Detail & Related papers (2025-04-22T14:19:06Z)
- Intelligent Hybrid Resource Allocation in MEC-assisted RAN Slicing Network [72.2456220035229]
We aim to maximize the SSR for heterogeneous service demands in the cooperative MEC-assisted RAN slicing system.
We propose a recurrent graph reinforcement learning (RGRL) algorithm to intelligently learn the optimal hybrid RA policy.
arXiv Detail & Related papers (2024-05-02T01:36:13Z)
- Optimization of Image Transmission in a Cooperative Semantic Communication Networks [68.2233384648671]
A semantic communication framework for image transmission is developed.
Servers cooperatively transmit images to a set of users utilizing semantic communication techniques.
A multimodal metric is proposed to measure the correlation between the extracted semantic information and the original image.
arXiv Detail & Related papers (2023-01-01T15:59:13Z)
- Decentralized Federated Reinforcement Learning for User-Centric Dynamic TFDD Control [37.54493447920386]
We propose a learning-based dynamic time-frequency division duplexing (D-TFDD) scheme to meet asymmetric and heterogeneous traffic demands.
We formulate the problem as a decentralized partially observable Markov decision process (Dec-POMDP).
In order to jointly optimize the global resources in a decentralized manner, we propose a federated reinforcement learning (RL) algorithm named Wolpertinger deep deterministic policy gradient (FWDDPG).
arXiv Detail & Related papers (2022-11-04T07:39:21Z)
- Effective Multi-User Delay-Constrained Scheduling with Deep Recurrent Reinforcement Learning [28.35473469490186]
Multi-user delay constrained scheduling is important in many real-world applications including wireless communication, live streaming, and cloud computing.
We propose a deep reinforcement learning (DRL) algorithm, named Recurrent Softmax Delayed Deep Double Deterministic Policy Gradient ($\mathtt{RSD4}$).
$\mathtt{RSD4}$ guarantees resource and delay constraints by Lagrangian dual and delay-sensitive queues, respectively (a generic sketch of the Lagrangian-dual idea appears after this list).
It also efficiently tackles partial observability with a memory mechanism enabled by the recurrent neural network (RNN) and introduces user-level decomposition and node-level ...
arXiv Detail & Related papers (2022-08-30T08:44:15Z)
- State-Augmented Learnable Algorithms for Resource Management in Wireless Networks [124.89036526192268]
We propose a state-augmented algorithm for solving resource management problems in wireless networks.
We show that the proposed algorithm leads to feasible and near-optimal radio resource management (RRM) decisions.
arXiv Detail & Related papers (2022-07-05T18:02:54Z)
- Computation Offloading and Resource Allocation in F-RANs: A Federated Deep Reinforcement Learning Approach [67.06539298956854]
The fog radio access network (F-RAN) is a promising technology in which user mobile devices (MDs) can offload computation tasks to nearby fog access points (F-APs).
arXiv Detail & Related papers (2022-06-13T02:19:20Z)
- Semantic-Aware Collaborative Deep Reinforcement Learning Over Wireless Cellular Networks [82.02891936174221]
Collaborative deep reinforcement learning (CDRL) algorithms, in which multiple agents can coordinate over a wireless network, are a promising approach.
In this paper, a novel semantic-aware CDRL method is proposed to enable a group of untrained agents with semantically-linked DRL tasks to collaborate efficiently across a resource-constrained wireless cellular network.
arXiv Detail & Related papers (2021-11-23T18:24:47Z)
- Smart Scheduling based on Deep Reinforcement Learning for Cellular Networks [18.04856086228028]
We propose a smart scheduling scheme based on deep reinforcement learning (DRL).
We provide implementation-friendly designs, i.e., a scalable neural network design for the agent and a virtual environment training framework.
We show that the DRL-based smart scheduling outperforms the conventional scheduling method and can be adopted in practical systems.
arXiv Detail & Related papers (2021-03-22T02:09:16Z)
- Deep Reinforcement Learning for Resource Constrained Multiclass Scheduling in Wireless Networks [0.0]
In our setup, the available limited bandwidth resources are allocated in order to serve randomly arriving service demands.
We propose a distributional Deep Deterministic Policy Gradient (DDPG) algorithm combined with Deep Sets to tackle the problem (a minimal Deep Sets encoder sketch appears after this list).
Our proposed algorithm is tested on both synthetic and real data, showing consistent gains against state-of-the-art conventional methods.
arXiv Detail & Related papers (2020-11-27T09:49:38Z)
- Critic Regularized Regression [70.8487887738354]
We propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR) (a sketch of the CRR-style weighted regression loss appears after this list).
We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces.
arXiv Detail & Related papers (2020-06-26T17:50:26Z)
- Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks [44.40722828581203]
An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all the Internet of things (IoT) users.
A deep reinforcement learning (DRL) based solution is proposed, which includes the following components.
A preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL agent in training the policy network and finding the optimal offloading policy.
arXiv Detail & Related papers (2020-01-24T23:01:15Z)
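
The $\mathtt{RSD4}$ entry above notes that resource constraints are enforced through a Lagrangian dual. As a generic illustration of that idea (not the RSD4 algorithm itself), a constrained objective of the form "maximize expected reward subject to expected resource cost <= budget" is commonly handled by training the policy on a penalized reward and performing dual ascent on the multiplier; the function names, learning rate, and training-loop structure below are assumptions.

```python
# Generic primal-dual update for a constrained RL objective
#   maximize E[reward]  subject to  E[resource_cost] <= budget.
# Illustrative sketch of the Lagrangian-dual idea, not the RSD4 algorithm.

def lagrangian_reward(reward: float, resource_cost: float, lam: float) -> float:
    """Scalarized reward the policy is trained on: r - lambda * cost."""
    return reward - lam * resource_cost

def dual_update(lam: float, avg_cost: float, budget: float, lr: float = 1e-3) -> float:
    """Dual ascent: raise lambda when the constraint is violated, lower it
    otherwise, keeping lambda non-negative."""
    return max(0.0, lam + lr * (avg_cost - budget))

# Usage inside a training loop (schematic):
#   lam = 0.0
#   for each batch of rollouts:
#       train the policy on lagrangian_reward(r_t, c_t, lam)
#       lam = dual_update(lam, mean cost over the batch, budget)
```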
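The multiclass-scheduling entry combines a distributional DDPG agent with Deep Sets so that the policy is permutation-invariant over the set of pending service demands. Below is a minimal Deep Sets encoder in PyTorch; the feature dimensions and layer sizes are illustrative assumptions, not the cited paper's network.

```python
import torch
import torch.nn as nn

class DeepSetEncoder(nn.Module):
    """Permutation-invariant encoder: phi is applied to each element of the
    set, the results are summed, and rho maps the pooled vector to an
    embedding. Dimensions are illustrative, not taken from the cited paper."""

    def __init__(self, elem_dim: int = 6, hidden: int = 64, out_dim: int = 32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(elem_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden, out_dim), nn.ReLU())

    def forward(self, demands: torch.Tensor) -> torch.Tensor:
        # demands: (batch, num_demands, elem_dim); the output is unchanged
        # under any permutation of the num_demands axis.
        pooled = self.phi(demands).sum(dim=1)
        return self.rho(pooled)

# Example: a batch of 8 scheduling states, each with 10 pending demands.
enc = DeepSetEncoder()
state_embedding = enc(torch.randn(8, 10, 6))   # -> shape (8, 32)
```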
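Critic Regularized Regression learns a policy by weighted behavior cloning: dataset actions are imitated in proportion to how favorably the learned critic scores them. The sketch below shows the exponential-advantage variant of the CRR policy loss in simplified form; the temperature, clipping value, and the way the state value is estimated from sampled actions are assumptions rather than the paper's exact settings.

```python
import torch

def crr_policy_loss(log_prob_data_action: torch.Tensor,
                    q_data_action: torch.Tensor,
                    q_sampled_actions: torch.Tensor,
                    beta: float = 1.0,
                    clip: float = 20.0) -> torch.Tensor:
    """Exponential-advantage variant of the CRR policy loss (simplified sketch).

    log_prob_data_action: log pi(a|s) for (s, a) pairs from the offline dataset.
    q_data_action:        critic value Q(s, a) for those dataset actions.
    q_sampled_actions:    Q(s, a') for actions sampled from the current policy,
                          shape (batch, num_samples); their mean estimates V(s).
    """
    advantage = q_data_action - q_sampled_actions.mean(dim=1)
    weight = torch.exp(advantage / beta).clamp(max=clip)   # keep weights bounded
    # Weighted behavior cloning: imitate dataset actions the critic rates well.
    return -(weight.detach() * log_prob_data_action).mean()
```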
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.