Digital Twin Vehicular Edge Computing Network: Task Offloading and Resource Allocation
- URL: http://arxiv.org/abs/2407.11310v1
- Date: Tue, 16 Jul 2024 01:51:32 GMT
- Title: Digital Twin Vehicular Edge Computing Network: Task Offloading and Resource Allocation
- Authors: Yu Xie, Qiong Wu, Pingyi Fan,
- Abstract summary: We propose a multi-agent reinforcement learning method for task offloading and resource allocation.
Numerous experiments demonstrate that our method is effective compared to other benchmark algorithms.
- Score: 14.436364625881183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing demand for multiple applications on the Internet of Vehicles, vehicles are required to carry out multiple computing tasks in real time. However, due to the insufficient computing capability of vehicles themselves, offloading tasks to vehicular edge computing (VEC) servers and allocating computing resources among tasks becomes a challenge. In this paper, a multi-task digital twin (DT) VEC network is established. By using DT to develop offloading and resource allocation strategies for the multiple tasks of each vehicle in a single slot, an optimization problem is constructed. To solve it, we propose a multi-agent reinforcement learning method for task offloading and resource allocation. Numerous experiments demonstrate that our method is effective compared to other benchmark algorithms.
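The abstract describes per-vehicle agents that learn offloading decisions from observed task delays. A minimal sketch of that decision loop follows; the tabular Q-learning agents, the two-state server-load model, the delay numbers, and the negative-delay reward are all illustrative assumptions, not the authors' implementation:

```python
import random

random.seed(0)

# Toy sketch: each vehicle agent independently learns an offloading action
# (0 = compute locally, 1 = offload to the VEC server) with tabular Q-learning.

class VehicleAgent:
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, eps=0.2):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, s):
        # epsilon-greedy action selection
        if random.random() < self.eps:
            return random.randrange(len(self.q[s]))
        return max(range(len(self.q[s])), key=lambda a: self.q[s][a])

    def update(self, s, a, r, s2):
        # standard Q-learning temporal-difference update
        best_next = max(self.q[s2])
        self.q[s][a] += self.alpha * (r + self.gamma * best_next - self.q[s][a])

def step(state, action):
    """Hypothetical delay model: offloading helps only when the server is idle."""
    server_load = state  # state 0 = idle VEC server, 1 = busy VEC server
    delay = 1.0 if action == 0 else (0.3 if server_load == 0 else 1.5)
    next_state = random.randrange(2)
    return next_state, -delay  # reward = negative task completion delay

agents = [VehicleAgent(n_states=2, n_actions=2) for _ in range(3)]
states = [0] * len(agents)
for _ in range(2000):
    for i, ag in enumerate(agents):
        a = ag.act(states[i])
        s2, r = step(states[i], a)
        ag.update(states[i], a, r, s2)
        states[i] = s2
```

Under this toy delay model the agents learn to offload when the server is idle and compute locally when it is busy; the paper's method additionally allocates computing resources per task, which this sketch omits.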
Related papers
- Resource Allocation for Twin Maintenance and Computing Task Processing in Digital Twin Vehicular Edge Computing Network [48.15151800771779]
Vehicular edge computing (VEC) can provide computing and caching services by deploying VEC servers near vehicles.
However, VEC networks still face challenges such as high vehicle mobility.
This study examines two types of delays caused by twin processing within the network.
arXiv Detail & Related papers (2024-07-10T12:08:39Z) - DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z) - Knowledge-Driven Multi-Agent Reinforcement Learning for Computation Offloading in Cybertwin-Enabled Internet of Vehicles [24.29177900273616]
We propose a knowledge-driven multi-agent reinforcement learning (KMARL) approach to reduce the latency of task offloading in cybertwin-enabled IoV.
Specifically, in the considered scenario, the cybertwin serves as a communication agent for each vehicle to exchange information and make offloading decisions in the virtual space.
arXiv Detail & Related papers (2023-08-04T09:11:37Z) - Visual Exemplar Driven Task-Prompting for Unified Perception in Autonomous Driving [100.3848723827869]
We present an effective multi-task framework, VE-Prompt, which introduces visual exemplars via task-specific prompting.
Specifically, we generate visual exemplars based on bounding boxes and color-based markers, which provide accurate visual appearances of target categories.
We bridge transformer-based encoders and convolutional layers for efficient and accurate unified perception in autonomous driving.
arXiv Detail & Related papers (2023-03-03T08:54:06Z) - DL-DRL: A double-level deep reinforcement learning approach for large-scale task scheduling of multi-UAV [65.07776277630228]
We propose a double-level deep reinforcement learning (DL-DRL) approach based on a divide and conquer framework (DCF)
Particularly, we design an encoder-decoder structured policy network in our upper-level DRL model to allocate the tasks to different UAVs.
We also exploit another attention based policy network in our lower-level DRL model to construct the route for each UAV, with the objective to maximize the number of executed tasks.
arXiv Detail & Related papers (2022-08-04T04:35:53Z) - Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study on resource task sampling by leveraging the techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance.
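The summary above describes an iterative loop: estimate each source task's relevance to the target, then sample from source tasks in proportion to that estimate. A minimal sketch of that loop follows; the exponential-moving-average relevance update and the fixed per-task gains are stand-in assumptions, not the paper's estimator:

```python
import random

# Sketch of relevance-weighted source-task sampling: draw the next batch of
# source tasks in proportion to their current estimated relevance, then
# refresh the estimates from the observed transfer gain of each drawn task.

def sample_sources(relevance, n_draws, rng):
    total = sum(relevance)
    probs = [r / total for r in relevance]
    return rng.choices(range(len(relevance)), weights=probs, k=n_draws)

rng = random.Random(0)
relevance = [1.0, 1.0, 1.0]  # start uniform over 3 hypothetical source tasks
for _ in range(10):
    drawn = sample_sources(relevance, n_draws=30, rng=rng)
    # stand-in relevance signal: pretend source task 2 transfers best
    observed_gain = {0: 0.1, 1: 0.2, 2: 0.9}
    for t in drawn:
        # exponential moving average toward the observed gain
        relevance[t] = 0.9 * relevance[t] + 0.1 * observed_gain[t]
```

After a few rounds the sampling distribution concentrates on the most relevant source task, which is the qualitative behavior the summary describes.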
arXiv Detail & Related papers (2022-02-02T08:23:24Z) - Learning Based Task Offloading in Digital Twin Empowered Internet of Vehicles [22.088412340577896]
We propose a Digital Twin (DT) empowered task offloading framework for Internet of Vehicles.
As a software agent residing in the cloud, a DT can obtain global network information through communications among DTs.
We show that our algorithm can effectively find the optimal offloading strategy, as well as achieve the fast convergence speed and high performance.
arXiv Detail & Related papers (2021-12-28T08:24:56Z) - A Machine Learning Approach for Task and Resource Allocation in Mobile Edge Computing Based Networks [108.57859531628264]
A joint task, spectrum, and transmit power allocation problem is investigated for a wireless network.
The proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1% compared to the standard Q-learning algorithm.
arXiv Detail & Related papers (2020-07-20T13:46:42Z) - Dynamic Task Weighting Methods for Multi-task Networks in Autonomous Driving Systems [10.625400639764734]
Deep multi-task networks are of particular interest for autonomous driving systems.
We propose a novel method combining evolutionary meta-learning and task-based selective backpropagation.
Our method outperforms state-of-the-art methods by a significant margin on a two-task application.
arXiv Detail & Related papers (2020-01-07T18:54:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.