Digital Twin Assisted Deep Reinforcement Learning for Online Admission Control in Sliced Network
- URL: http://arxiv.org/abs/2310.09299v3
- Date: Tue, 21 Nov 2023 07:34:26 GMT
- Title: Digital Twin Assisted Deep Reinforcement Learning for Online Admission Control in Sliced Network
- Authors: Zhenyu Tao, Wei Xu, Xiaohu You
- Abstract summary: We propose a digital twin (DT) accelerated DRL solution to address this issue.
A neural network-based DT is established with a customized output layer for queuing systems, trained through supervised learning, and then employed to assist the training phase of the DRL model.
Extensive simulations show that the DT-accelerated DRL improves resource utilization by over 40% compared to the directly trained state-of-the-art dueling deep Q-learning model.
- Score: 19.152875040151976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of diverse wireless services in 5G and beyond has led to
the emergence of network slicing technologies. Among these, admission control
plays a crucial role in achieving service-oriented optimization goals through
the selective acceptance of service requests. Although deep reinforcement
learning (DRL) forms the foundation of many admission control approaches thanks
to its effectiveness and flexibility, the initial instability and excessive
convergence delay of DRL models hinder their deployment in real-world
networks. We propose a digital twin (DT) accelerated DRL solution to address
this issue. Specifically, we first formulate the admission decision-making
process as a semi-Markov decision process, which is subsequently simplified
into an equivalent discrete-time Markov decision process to facilitate the
implementation of DRL methods. A neural network-based DT is established with a
customized output layer for queuing systems, trained through supervised
learning, and then employed to assist the training phase of the DRL model.
Extensive simulations show that the DT-accelerated DRL improves resource
utilization by over 40% compared to the directly trained state-of-the-art
dueling deep Q-learning model. This improvement is achieved while preserving
the model's capability to optimize the long-term rewards of the admission
process.
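To make the pipeline concrete, below is a minimal, hypothetical sketch of the two components the abstract describes: a dueling Q-network and a supervised digital twin of the queuing system that generates imagined transitions for the agent's early training. All class and function names (DuelingQNet, QueueDigitalTwin, dt_assisted_loss) are illustrative assumptions, not the authors' implementation, and the clamp merely stands in for the paper's customized output layer.
```python
# Hypothetical sketch (PyTorch) of DT-assisted dueling DQN training; the class
# and function names are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        adv = self.advantage(h)
        return self.value(h) + adv - adv.mean(dim=-1, keepdim=True)

class QueueDigitalTwin(nn.Module):
    """Supervised model of the queuing system: (state, action) -> (s', r).

    The clamp below stands in for the paper's customized output layer, whose
    actual design for queuing systems is not reproduced here."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.state_dim = state_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim + 1),  # predicted next state + reward
        )

    def forward(self, state, action_onehot):
        out = self.net(torch.cat([state, action_onehot], dim=-1))
        next_state = out[..., :self.state_dim].clamp(min=0.0)  # queues >= 0
        return next_state, out[..., -1]

def dt_assisted_loss(agent, target_net, twin, states, actions, n_actions,
                     gamma: float = 0.99) -> torch.Tensor:
    """TD loss on transitions imagined by the twin, so the agent's unstable
    early training phase never has to touch the real network."""
    onehot = F.one_hot(actions, n_actions).float()
    with torch.no_grad():
        next_states, rewards = twin(states, onehot)
        target_q = rewards + gamma * target_net(next_states).max(dim=-1).values
    q = agent(states).gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    return F.mse_loss(q, target_q)
```
Because the agent spends its unstable early episodes against the twin rather than the live network, only the later, more stable phase of training needs real traffic, which is the mechanism behind the reduced convergence delay.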
Related papers
- DRL Optimization Trajectory Generation via Wireless Network Intent-Guided Diffusion Models for Optimizing Resource Allocation [58.62766376631344]
We propose a customized wireless network intent-guided (WNI-G) model to address different state variations of wireless communication networks.
Extensive simulations show greater stability in spectral efficiency than traditional DRL models in dynamic communication systems.
arXiv Detail & Related papers (2024-10-18T14:04:38Z)
- Multiobjective Vehicle Routing Optimization with Time Windows: A Hybrid Approach Using Deep Reinforcement Learning and NSGA-II [52.083337333478674]
This paper proposes a weight-aware deep reinforcement learning (WADRL) approach designed to address the multiobjective vehicle routing problem with time windows (MOVRPTW).
The non-dominated sorting genetic algorithm II (NSGA-II) is then employed to optimize the outcomes produced by the WADRL.
arXiv Detail & Related papers (2024-07-18T02:46:06Z)
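As a toy illustration of the hybrid step in the paper above, the snippet below applies NSGA-II-style non-dominated sorting to objective vectors of candidate routes, as a DRL policy might propose them; the objective pairs and function names are invented for the example.
```python
# Illustrative only: NSGA-II's first-front selection over two MOVRPTW-style
# objectives (e.g., total distance, total tardiness), both to be minimized.
from typing import List, Tuple

def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions: List[Tuple[float, float]]) -> List[int]:
    """Indices of non-dominated solutions, i.e., NSGA-II's first front."""
    return [i for i, a in enumerate(solutions)
            if not any(dominates(b, a) for j, b in enumerate(solutions) if j != i)]

# Hypothetical (distance, tardiness) scores for four DRL-proposed routes:
scores = [(120.0, 5.0), (100.0, 9.0), (130.0, 4.0), (110.0, 10.0)]
print(pareto_front(scores))  # -> [0, 1, 2]; route 3 is dominated by route 1
```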
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Fair and Efficient Distributed Edge Learning with Hybrid Multipath TCP [62.81300791178381]
The bottleneck of distributed edge learning (DEL) over wireless networks has shifted from computing to communication.
Existing TCP-based data networking schemes for DEL are application-agnostic and fail to deliver adjustments according to application-layer requirements.
We develop a hybrid multipath TCP (MPTCP) scheme for DEL by combining model-based and deep reinforcement learning (DRL) based MPTCP.
arXiv Detail & Related papers (2022-11-03T09:08:30Z)
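One plausible reading of "combining model-based and DRL-based MPTCP" is a confidence-gated hybrid: a classical rule handles scheduling unless the learned policy is sufficiently sure of itself. The sketch below is an assumption about the design, not the paper's actual mechanism.
```python
# Speculative sketch of a hybrid multipath scheduler; all names are hypothetical.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Subflow:
    path_id: int
    rtt_ms: float    # smoothed round-trip time on this path
    cwnd_pkts: int   # remaining congestion-window space, in packets

def min_rtt_scheduler(subflows: List[Subflow]) -> int:
    """Model-based baseline: send on the lowest-RTT path with window space."""
    usable = [s for s in subflows if s.cwnd_pkts > 0]
    if not usable:
        raise RuntimeError("no subflow has congestion-window space")
    return min(usable, key=lambda s: s.rtt_ms).path_id

def hybrid_schedule(subflows: List[Subflow],
                    drl_policy: Callable[[List[Subflow]], Tuple[int, float]],
                    confidence_threshold: float = 0.8) -> int:
    """Defer to the DRL policy only when it is confident; otherwise fall back
    to the model-based rule, keeping behavior predictable during training."""
    path_id, confidence = drl_policy(subflows)
    if confidence >= confidence_threshold:
        return path_id
    return min_rtt_scheduler(subflows)
```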
- DL-DRL: A double-level deep reinforcement learning approach for large-scale task scheduling of multi-UAV [65.07776277630228]
We propose a double-level deep reinforcement learning (DL-DRL) approach based on a divide-and-conquer framework (DCF).
Particularly, we design an encoder-decoder structured policy network in our upper-level DRL model to allocate the tasks to different UAVs.
We also exploit another attention-based policy network in our lower-level DRL model to construct the route for each UAV, with the objective of maximizing the number of executed tasks.
arXiv Detail & Related papers (2022-08-04T04:35:53Z)
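A compact sketch of the double-level structure described in the DL-DRL entry above: an encoder-decoder upper level that produces task-to-UAV assignment logits, and an attention-based lower level that scores the next task for one UAV. Layer sizes, shapes, and names are illustrative assumptions.
```python
# Hypothetical sketch of the two policy levels; not the authors' architecture.
import torch
import torch.nn as nn

class UpperLevelAllocator(nn.Module):
    """Encoder-decoder style allocator: task features -> per-UAV logits."""
    def __init__(self, task_dim: int, n_uavs: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(task_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_uavs)

    def forward(self, tasks: torch.Tensor) -> torch.Tensor:
        # tasks: (n_tasks, task_dim) -> (n_tasks, n_uavs) assignment logits
        return self.decoder(self.encoder(tasks))

class LowerLevelRouter(nn.Module):
    """Attention-based scorer for the next task a single UAV should visit."""
    def __init__(self, task_dim: int, hidden: int = 64):
        super().__init__()
        self.embed = nn.Linear(task_dim, hidden)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, uav_tasks: torch.Tensor) -> torch.Tensor:
        # uav_tasks: (1, n_tasks, task_dim) -> (n_tasks,) next-task scores
        h = self.embed(uav_tasks)
        h, _ = self.attn(h, h, h)   # tasks attend to each other
        return self.score(h).squeeze(-1).squeeze(0)
```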
- Reinforcement Learning-Empowered Mobile Edge Computing for 6G Edge Intelligence [76.96698721128406]
Mobile edge computing (MEC) is considered a novel paradigm for computation-intensive and delay-sensitive tasks in fifth generation (5G) networks and beyond.
This paper provides a comprehensive research review on RL-enabled MEC and offers insight for development.
arXiv Detail & Related papers (2022-01-27T10:02:54Z)
- Federated Deep Reinforcement Learning for the Distributed Control of NextG Wireless Networks [16.12495409295754]
Next Generation (NextG) networks are expected to support demanding tactile internet applications such as augmented reality and connected autonomous vehicles.
Data-driven approaches can improve the ability of the network to adapt to the current operating conditions.
Deep RL (DRL) has been shown to achieve good performance even in complex environments.
arXiv Detail & Related papers (2021-12-07T03:13:20Z)
- DRL-based Slice Placement under Realistic Network Load Conditions [0.8459686722437155]
We propose a network slice placement optimization solution based on deep reinforcement learning (DRL).
The solution is adapted to large-scale networks under non-stationary traffic conditions (namely, the network load).
We demonstrate the applicability of the proposed solution and its higher and more stable performance compared with a non-controlled DRL-based solution.
arXiv Detail & Related papers (2021-09-27T07:58:45Z)
- Boosting the Convergence of Reinforcement Learning-based Auto-pruning Using Historical Data [35.36703623383735]
Reinforcement learning (RL)-based auto-pruning has been proposed to automate the pruning process to avoid expensive hand-crafted work.
However, the RL-based pruner involves a time-consuming training process, and the high expense of each sample further exacerbates this problem.
We propose an efficient auto-pruning framework that solves this problem by taking advantage of the historical data from the previous auto-pruning process.
arXiv Detail & Related papers (2021-07-16T07:17:26Z)
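The "historical data" idea above could be as simple as the following sketch: warm-starting the new pruning agent from a previous run's network weights and replay transitions. The function and its arguments are hypothetical.
```python
# Assumed warm-start routine for an RL-based pruner; names are illustrative.
import copy
import random

def warm_start(agent, replay_buffer, past_transitions, past_weights=None):
    """Bootstrap a new auto-pruning run from a previous session.

    agent            -- the new run's policy/Q network (nn.Module-like)
    replay_buffer    -- a plain list used as replay memory
    past_transitions -- (state, action, reward, next_state) tuples from history
    past_weights     -- optional state_dict saved by an earlier run
    """
    if past_weights is not None:
        agent.load_state_dict(copy.deepcopy(past_weights))  # transfer old policy
    replay_buffer.extend(past_transitions)  # pre-fill so early updates are informed
    random.shuffle(replay_buffer)           # remove ordering bias from the old run
```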
- Smart Scheduling based on Deep Reinforcement Learning for Cellular Networks [18.04856086228028]
We propose a smart scheduling scheme based on deep reinforcement learning (DRL).
We provide implementation-friendly designs, i.e., a scalable neural network design for the agent and a virtual environment training framework.
We show that the DRL-based smart scheduling outperforms the conventional scheduling method and can be adopted in practical systems.
arXiv Detail & Related papers (2021-03-22T02:09:16Z)
- Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks [44.40722828581203]
An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all the Internet of Things (IoT) users.
A deep reinforcement learning (DRL) based solution is proposed, which includes the following components.
A preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL in training the policy network and finding the optimal offloading policy.
arXiv Detail & Related papers (2020-01-24T23:01:15Z)
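The abstract above only names "2p-ER"; the sketch below is one speculative reading of "preserved and prioritized": a slice of the buffer is protected from eviction, and sampling is weighted by per-transition priority. The class and its behavior are assumptions, not the paper's design.
```python
# Speculative structure for a preserved-and-prioritized replay buffer.
import random
from collections import deque

class PreservedPrioritizedReplay:
    def __init__(self, capacity: int, preserve_fraction: float = 0.1):
        self.preserve_slots = int(capacity * preserve_fraction)
        rolling = capacity - self.preserve_slots
        self.preserved = []                       # never evicted
        self.buffer = deque(maxlen=rolling)       # evicts oldest when full
        self.priorities = deque(maxlen=rolling)   # stays aligned with buffer

    def add(self, transition, priority: float, preserve: bool = False):
        if preserve and len(self.preserved) < self.preserve_slots:
            self.preserved.append((transition, priority))
        else:
            self.buffer.append(transition)
            self.priorities.append(priority)

    def sample(self, batch_size: int):
        pool = list(self.buffer) + [t for t, _ in self.preserved]
        weights = list(self.priorities) + [p for _, p in self.preserved]
        return random.choices(pool, weights=weights, k=batch_size)
```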
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.