Deep Reinforcement Learning for Stochastic Computation Offloading in
Digital Twin Networks
- URL: http://arxiv.org/abs/2011.08430v2
- Date: Wed, 18 Nov 2020 02:42:44 GMT
- Title: Deep Reinforcement Learning for Stochastic Computation Offloading in
Digital Twin Networks
- Authors: Yueyue Dai (Member, IEEE), Ke Zhang, Sabita Maharjan (Senior Member,
IEEE), and Yan Zhang (Fellow, IEEE)
- Abstract summary: Digital Twin is a promising technology to empower the digital transformation of the Industrial Internet of Things (IIoT).
We first propose a new paradigm, Digital Twin Networks (DTN), to build the network topology and the task arrival model in IIoT systems.
Then, we formulate the computation offloading and resource allocation problem to minimize the long-term energy efficiency.
- Score: 1.0509026467663467
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid development of Industrial Internet of Things (IIoT) requires
industrial production towards digitalization to improve network efficiency.
Digital Twin is a promising technology to empower the digital transformation of
IIoT by creating virtual models of physical objects. However, the provision of
network efficiency in IIoT is very challenging due to resource-constrained
devices, stochastic tasks, and resource heterogeneity. Distributed resources
in IIoT networks can be efficiently exploited through computation offloading to
reduce energy consumption while enhancing data processing efficiency. In this
paper, we first propose a new paradigm, Digital Twin Networks (DTN), to build
network topology and the stochastic task arrival model in IIoT systems. Then,
we formulate the stochastic computation offloading and resource allocation
problem to minimize the long-term energy efficiency. As the formulated problem
is a stochastic programming problem, we leverage Lyapunov optimization
technique to transform the original problem into a deterministic per-time slot
problem. Finally, we present Asynchronous Actor-Critic (AAC) algorithm to find
the optimal stochastic computation offloading policy. Illustrative results
demonstrate that our proposed scheme significantly outperforms the
benchmarks.
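As a concrete illustration of the Lyapunov step above, the following sketch evaluates a per-slot drift-plus-penalty objective for a candidate offloading decision. The queue backlogs Q_i, the arrival/service terms, and the trade-off weight V follow standard Lyapunov-optimization convention and are assumptions made here for illustration, not the paper's exact formulation; the negated value of such an objective could serve as the per-slot reward for the actor-critic agent.

```python
from typing import Sequence

def drift_plus_penalty(queues: Sequence[float],
                       arrivals: Sequence[float],
                       services: Sequence[float],
                       energy_cost: float,
                       V: float = 50.0) -> float:
    """Per-slot drift-plus-penalty value of a candidate offloading decision.

    Assumed (standard Lyapunov) notation, not the paper's exact symbols:
      queues[i]   -- backlog Q_i(t) of task queue i at the start of slot t
      arrivals[i] -- workload arriving at queue i during slot t
      services[i] -- workload served (locally or offloaded) from queue i in slot t
      energy_cost -- the per-slot penalty term, e.g. energy consumed in slot t
      V           -- weight trading off queue stability against the penalty
    """
    drift = sum(q * (a - s) for q, a, s in zip(queues, arrivals, services))
    return V * energy_cost + drift

# Usage: pick the candidate decision that minimises the per-slot bound.
queues, arrivals = [10.0, 6.0], [3.0, 2.0]
candidates = [
    {"services": [4.0, 1.0], "energy_cost": 0.8},  # offload more to the edge
    {"services": [2.0, 2.5], "energy_cost": 0.5},  # compute more locally
]
best = min(candidates,
           key=lambda c: drift_plus_penalty(queues, arrivals,
                                            c["services"], c["energy_cost"]))
print(best)
```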
Related papers
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z)
- Resource Efficient Asynchronous Federated Learning for Digital Twin Empowered IoT Network [29.895766751146155]
Digital twin (DT) can provide real-time status and dynamic topology mapping for Internet of Things (IoT) devices.
We develop a dynamic resource scheduling algorithm tailored for the asynchronous federated learning (FL)-based lightweight DT empowered IoT network.
Specifically, our approach aims to minimize a multi-objective function that encompasses both energy consumption and latency.
arXiv Detail & Related papers (2024-08-26T14:28:51Z)
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z)
- Queue-aware Network Control Algorithm with a High Quantum Computing Readiness-Evaluated in Discrete-time Flow Simulator for Fat-Pipe Networks [0.0]
We introduce a resource reoccupation algorithm for traffic engineering in wide-area networks.
The proposed optimization algorithm changes traffic steering and resource allocation when transceivers are overloaded.
We show that our newly introduced network simulator enables analyses of short-time effects like buffering within fat-pipe networks.
arXiv Detail & Related papers (2024-04-05T13:13:02Z)
- Generative AI-enabled Quantum Computing Networks and Intelligent Resource Allocation [80.78352800340032]
Quantum computing networks execute large-scale generative AI computation tasks and advanced quantum algorithms.
Efficient resource allocation in quantum computing networks is a critical challenge due to qubit variability and network complexity.
We introduce state-of-the-art reinforcement learning (RL) algorithms, from generative learning to quantum machine learning for optimal quantum resource allocation.
arXiv Detail & Related papers (2024-01-13T17:16:38Z)
- Digital Twin-Enhanced Deep Reinforcement Learning for Resource Management in Networks Slicing [46.65030115953947]
We propose a framework consisting of a digital twin and reinforcement learning agents.
Specifically, we propose to use historical data and neural networks to build a digital twin model that simulates the state variation law of the real environment.
We also extend the framework to offline reinforcement learning, where intelligent decisions can be obtained based solely on historical data.
arXiv Detail & Related papers (2023-11-28T15:25:14Z)
- Multiagent Reinforcement Learning with an Attention Mechanism for Improving Energy Efficiency in LoRa Networks [52.96907334080273]
As the network scale increases, the energy efficiency of LoRa networks decreases sharply due to severe packet collisions.
We propose a transmission parameter allocation algorithm based on multiagent reinforcement learning (MALoRa).
Simulation results demonstrate that MALoRa significantly improves the system energy efficiency (EE) compared with baseline algorithms.
arXiv Detail & Related papers (2023-09-16T11:37:23Z)
- Energy Efficient Hardware Acceleration of Neural Networks with Power-of-Two Quantisation [0.0]
We show that a hardware neural network accelerator with PoT weights implemented on the Zynq UltraScale+ MPSoC ZCU104 FPGA can be at least 1.4x more energy efficient than the uniform quantisation version.
arXiv Detail & Related papers (2022-09-30T06:33:40Z)
- Resource Allocation via Model-Free Deep Learning in Free Space Optical Communications [119.81868223344173]
The paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.
Under this framework, we propose two algorithms that solve FSO resource allocation problems.
arXiv Detail & Related papers (2020-07-27T17:38:51Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks (a minimal shared-network actor-critic sketch follows this list).
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
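Both the AAC policy in the paper above and the A3C-based scheduler in the last related entry rely on an actor and a critic that share one network. Below is a minimal, single-worker advantage actor-critic sketch in PyTorch, offered only as an assumed illustration: the layer sizes, discount factor, single-transition update, and the toy offloading state are choices made here, not the authors' implementations.

```python
import torch
import torch.nn as nn

class SharedActorCritic(nn.Module):
    """Actor (policy) and critic (value) heads on a shared trunk."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.policy_head = nn.Linear(hidden, n_actions)  # action logits
        self.value_head = nn.Linear(hidden, 1)           # state-value estimate

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        return self.policy_head(h), self.value_head(h)

def actor_critic_step(model, optimizer, obs, action, reward, next_obs, done,
                      gamma: float = 0.99):
    """One advantage actor-critic update on a single transition (illustrative)."""
    logits, value = model(obs)
    with torch.no_grad():
        _, next_value = model(next_obs)
        target = reward + gamma * (1.0 - done) * next_value.squeeze()
    advantage = target - value.squeeze()
    log_prob = torch.log_softmax(logits, dim=-1)[action]
    policy_loss = -log_prob * advantage.detach()   # actor: reinforce good actions
    value_loss = advantage.pow(2)                  # critic: regress to the target
    loss = policy_loss + 0.5 * value_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy usage: state = [queue backlog 1, queue backlog 2, channel gain];
# action 0 = compute locally, action 1 = offload; reward = negative per-slot cost.
model = SharedActorCritic(obs_dim=3, n_actions=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
obs = torch.tensor([10.0, 6.0, 0.7])
next_obs = torch.tensor([8.0, 5.0, 0.6])
actor_critic_step(model, optimizer, obs, action=1, reward=-0.8,
                  next_obs=next_obs, done=0.0)
```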