Revenue and Energy Efficiency-Driven Delay Constrained Computing Task
Offloading and Resource Allocation in a Vehicular Edge Computing Network: A
Deep Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2010.08119v1
- Date: Fri, 16 Oct 2020 02:45:05 GMT
- Title: Revenue and Energy Efficiency-Driven Delay Constrained Computing Task
Offloading and Resource Allocation in a Vehicular Edge Computing Network: A
Deep Reinforcement Learning Approach
- Authors: Xinyu Huang, Lijun He, Xing Chen, Liejun Wang, Fan Li
- Abstract summary: The joint impact of task type and vehicle speed on the task delay constraint has not been studied.
We propose a joint task type and vehicle speed-aware task offloading and resource allocation strategy.
Our algorithm can achieve superior performance in task completion delay, vehicles' energy cost and processing revenue.
- Score: 13.400466824558915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For in-vehicle applications, task type and vehicle state information, i.e.,
vehicle speed, have a significant impact on the task delay requirement.
However, the joint impact of task type and vehicle speed on the task delay
constraint has not been studied, and this gap may cause a mismatch between
the task delay requirement and the allocated computation and
wireless resources. In this paper, we propose a joint task type and vehicle
speed-aware task offloading and resource allocation strategy to decrease the
vehicle's energy cost for executing tasks and increase the revenue of the
vehicle for processing tasks within the delay constraint. First, we establish
the joint task type and vehicle speed-aware delay constraint model. Then, the
delay, energy cost and revenue for task execution in the vehicular edge
computing (VEC) server, local terminal and terminals of other vehicles are
calculated. Based on the energy cost and revenue from task execution, the
utility function of the vehicle is obtained. Next, we formulate a joint
optimization of task offloading and resource allocation to maximize the utility
level of the vehicles subject to the constraints of task delay, computation
resources and wireless resources. To obtain a near-optimal solution of the
formulated problem, a joint offloading and resource allocation algorithm based
on the multi-agent deep deterministic policy gradient (JORA-MADDPG) is
proposed to maximize the utility level of vehicles. Simulation results show
that our algorithm can achieve superior performance in task completion delay,
vehicles' energy cost and processing revenue.
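
As a rough illustration of the utility model described in the abstract, the sketch below computes a per-task utility as processing revenue minus energy cost and rejects allocations whose completion delay violates a task-type and vehicle-speed dependent deadline; this is the kind of utility the JORA-MADDPG agents would be trained to maximize. Every parameter name and number here (revenue_per_bit, kappa, the linear speed penalty, and so on) is an illustrative assumption, not a value or model taken from the paper.

```python
# Minimal, illustrative sketch of a revenue-minus-energy utility with a joint
# task-type / vehicle-speed delay constraint. All models and constants below
# are assumptions for illustration; the paper defines its own formulations.

from dataclasses import dataclass

@dataclass
class Task:
    data_bits: float        # input data size (bits)
    cycles_per_bit: float   # CPU cycles required per bit
    base_deadline_s: float  # nominal delay constraint for this task type (s)

def delay_constraint(task: Task, vehicle_speed_mps: float,
                     speed_penalty: float = 0.005) -> float:
    """Assumed joint model: faster vehicles get a tighter deadline
    (linear shrink; the paper's actual model may differ)."""
    return task.base_deadline_s / (1.0 + speed_penalty * vehicle_speed_mps)

def utility(task: Task, cpu_hz: float, tx_rate_bps: float, tx_power_w: float,
            vehicle_speed_mps: float,
            revenue_per_bit: float = 1e-6, kappa: float = 1e-27) -> float:
    """Utility = processing revenue - energy cost, with a penalty if the
    completion delay exceeds the deadline."""
    t_tx = task.data_bits / tx_rate_bps                      # transmission delay
    t_cpu = task.data_bits * task.cycles_per_bit / cpu_hz    # computing delay
    total_delay = t_tx + t_cpu

    e_tx = tx_power_w * t_tx                                 # radio energy
    e_cpu = kappa * (cpu_hz ** 2) * task.data_bits * task.cycles_per_bit

    if total_delay > delay_constraint(task, vehicle_speed_mps):
        return -1.0                                          # infeasible allocation
    return revenue_per_bit * task.data_bits - (e_tx + e_cpu)

if __name__ == "__main__":
    t = Task(data_bits=2e6, cycles_per_bit=100, base_deadline_s=0.5)
    print(utility(t, cpu_hz=2e9, tx_rate_bps=10e6, tx_power_w=0.2,
                  vehicle_speed_mps=20.0))   # positive -> feasible and profitable
```

In a MADDPG setup, each vehicle agent's action would be its offloading target plus the requested computation and wireless resource shares, with this kind of utility (including the infeasibility penalty) used as the per-step reward.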
Related papers
- Diffusion-based Auction Mechanism for Efficient Resource Management in 6G-enabled Vehicular Metaverses [57.010829427434516]
In 6G-enabled Vehicular Metaverses, vehicles are represented by Vehicle Twins (VTs), which serve as digital replicas of physical vehicles.
VT tasks are resource-intensive and need to be offloaded to ground Base Stations (BSs) for fast processing.
We propose a learning-based Modified Second-Bid (MSB) auction mechanism to optimize resource allocation between ground BSs and UAVs.
arXiv Detail & Related papers (2024-11-01T04:34:54Z)
- Computation Pre-Offloading for MEC-Enabled Vehicular Networks via Trajectory Prediction [38.493882483362135]
We present a Trajectory Prediction-based Pre-offloading Decision (TPPD) algorithm for analyzing the historical trajectories of vehicles.
We devise a dynamic resource allocation algorithm using a Double Deep Q-Network (DDQN) that enables the edge server to minimize task processing delay (a generic DDQN target-update sketch is given after this list).
arXiv Detail & Related papers (2024-09-26T09:46:43Z)
- DRL-Based Federated Self-Supervised Learning for Task Offloading and Resource Allocation in ISAC-Enabled Vehicle Edge Computing [28.47670676456068]
Vehicle Edge Computing (VEC) addresses this by offloading tasks to a Road Side Unit (RSU).
Our improved algorithm offloads partial tasks to the RSU and optimizes energy consumption by adjusting transmission power, CPU frequency, and task assignment ratios.
Simulation results show that the enhanced algorithm reduces energy consumption, improves offloading efficiency and the accuracy of Federated SSL.
arXiv Detail & Related papers (2024-08-27T07:28:05Z)
- Digital Twin Vehicular Edge Computing Network: Task Offloading and Resource Allocation [14.436364625881183]
We propose a multi-agent reinforcement learning method for task offloading and resource allocation.
Extensive experiments demonstrate that our method is effective compared with other benchmark algorithms.
arXiv Detail & Related papers (2024-07-16T01:51:32Z)
- Resource Allocation for Twin Maintenance and Computing Task Processing in Digital Twin Vehicular Edge Computing Network [48.15151800771779]
Vehicle edge computing (VEC) can provide computing and caching services by deploying VEC servers near vehicles.
However, VEC networks still face challenges such as high vehicle mobility.
This study examines two types of delays caused by twin processing within the network.
arXiv Detail & Related papers (2024-07-10T12:08:39Z)
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z)
- Roulette-Wheel Selection-Based PSO Algorithm for Solving the Vehicle Routing Problem with Time Windows [58.891409372784516]
This paper presents a novel form of the PSO methodology that uses the Roulette Wheel Method (RWPSO); a generic roulette-wheel selection sketch is given after this list.
Experiments on the Solomon VRPTW benchmark datasets demonstrate that RWPSO is competitive with other state-of-the-art algorithms from the literature.
arXiv Detail & Related papers (2023-06-04T09:18:02Z)
- Computation Offloading and Resource Allocation in F-RANs: A Federated Deep Reinforcement Learning Approach [67.06539298956854]
Fog radio access network (F-RAN) is a promising technology in which user mobile devices (MDs) can offload computation tasks to nearby fog access points (F-APs).
arXiv Detail & Related papers (2022-06-13T02:19:20Z)
- Learning Based Task Offloading in Digital Twin Empowered Internet of Vehicles [22.088412340577896]
We propose a Digital Twin (DT) empowered task offloading framework for Internet of Vehicles.
As a software agent residing in the cloud, a DT can obtain global network information by using communications among DTs.
We show that our algorithm can effectively find the optimal offloading strategy, as well as achieve fast convergence and high performance.
arXiv Detail & Related papers (2021-12-28T08:24:56Z)
- Deep Reinforcement Learning for Delay-Oriented IoT Task Scheduling in Space-Air-Ground Integrated Network [24.022108191145527]
We investigate a computing task scheduling problem in space-air-ground integrated network (SAGIN) for delay-oriented Internet of Things (IoT) services.
In the considered scenario, an unmanned aerial vehicle (UAV) collects computing tasks from IoT devices and then makes online offloading decisions.
Our objective is to design a task scheduling policy that minimizes offloading and computing delay of all tasks given the UAV energy capacity constraint.
arXiv Detail & Related papers (2020-10-04T02:58:03Z)
- A Machine Learning Approach for Task and Resource Allocation in Mobile Edge Computing Based Networks [108.57859531628264]
A joint task, spectrum, and transmit power allocation problem is investigated for a wireless network.
The proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1%, respectively, compared to the standard Q-learning algorithm.
arXiv Detail & Related papers (2020-07-20T13:46:42Z)
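
For the DDQN-based allocation step mentioned in the TPPD entry above, the following is a minimal, generic Double DQN target-update sketch in PyTorch. The state encoding, action meaning, and reward used here are placeholder assumptions, not that paper's formulation; only the action-selection/action-evaluation split that defines Double DQN is the point.

```python
# Generic Double DQN (DDQN) target computation: the online network selects the
# next action, the target network evaluates it. State/action semantics below
# (queue-state features, CPU-share actions) are assumptions for illustration.

import torch
import torch.nn as nn

class QNet(nn.Module):
    """Tiny Q-network: state -> one Q-value per discrete allocation action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def ddqn_target(online: QNet, target: QNet, r: torch.Tensor,
                s_next: torch.Tensor, done: torch.Tensor,
                gamma: float = 0.99) -> torch.Tensor:
    with torch.no_grad():
        a_star = online(s_next).argmax(dim=1, keepdim=True)   # select with online net
        q_next = target(s_next).gather(1, a_star).squeeze(1)  # evaluate with target net
        return r + gamma * (1.0 - done) * q_next

if __name__ == "__main__":
    online, target = QNet(8, 4), QNet(8, 4)
    target.load_state_dict(online.state_dict())
    s, s_next = torch.randn(32, 8), torch.randn(32, 8)   # assumed queue-state features
    a = torch.randint(0, 4, (32, 1))                      # chosen resource-share actions
    r, done = torch.rand(32), torch.zeros(32)             # placeholder rewards
    q_sa = online(s).gather(1, a).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, ddqn_target(online, target, r, s_next, done))
    loss.backward()                                       # one TD-learning step
```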
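
Likewise, for the RWPSO entry above, this is a minimal sketch of roulette-wheel (fitness-proportionate) selection, the operator that RWPSO builds into PSO; how the selected index is then coupled with the particle updates for the VRPTW is specific to that paper and not shown here. The example fitness values are made up.

```python
# Roulette-wheel (fitness-proportionate) selection: the probability of picking
# index i is fitness[i] / sum(fitness).

import random
from typing import Sequence

def roulette_wheel_select(fitness: Sequence[float], rng: random.Random) -> int:
    """Return an index drawn with probability proportional to its fitness."""
    total = sum(fitness)
    if total <= 0:
        return rng.randrange(len(fitness))   # degenerate case: fall back to uniform
    r = rng.uniform(0.0, total)
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if r <= acc:
            return i
    return len(fitness) - 1                  # guard against floating-point drift

if __name__ == "__main__":
    rng = random.Random(0)
    fitness = [0.5, 0.2, 0.3]                # e.g. 1 / route-cost of candidates (assumed)
    counts = [0, 0, 0]
    for _ in range(10_000):
        counts[roulette_wheel_select(fitness, rng)] += 1
    print(counts)                            # roughly proportional to 5:2:3
```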