Reinforcement Learning-based Dynamic Service Placement in Vehicular
Networks
- URL: http://arxiv.org/abs/2105.15022v2
- Date: Tue, 1 Jun 2021 13:38:15 GMT
- Title: Reinforcement Learning-based Dynamic Service Placement in Vehicular
Networks
- Authors: Anum Talpur and Mohan Gurusamy
- Abstract summary: The growing complexity of traffic mobility patterns and the dynamics of requests for different types of services have made service placement a challenging task.
A typical static placement solution is not effective as it does not consider the traffic mobility and service dynamics.
We propose a reinforcement learning-based dynamic (RL-Dynamic) service placement framework to find the optimal placement of services at the edge servers.
- Score: 4.010371060637208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of technologies such as 5G and mobile edge computing has
enabled provisioning of different types of services with different resource and
service requirements to the vehicles in a vehicular network. The growing
complexity of traffic mobility patterns and the dynamics of requests for
different types of services have made service placement a challenging task. A
typical static placement solution is not effective as it does not consider the
traffic mobility and service dynamics. In this paper, we propose a
reinforcement learning-based dynamic (RL-Dynamic) service placement framework
to find the optimal placement of services at the edge servers while considering
the vehicle's mobility and dynamics in the requests for different types of
services. We use SUMO and MATLAB to carry out simulation experiments. In our
learning framework, for the decision module, we consider two alternative
objective functions: minimizing delay and minimizing edge server utilization. We
develop an ILP-based problem formulation for the two objective functions. The
experimental results show that 1) compared to static service placement,
RL-based dynamic service placement achieves fair utilization of edge server
resources and low service delay, and 2) compared to delay-optimized placement,
server utilization optimized placement utilizes resources more effectively,
achieving higher fairness with lower edge-server utilization.
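The abstract describes an RL agent that learns where to place services as vehicle traffic and request patterns shift, but it does not specify the learning algorithm. As a minimal sketch of the idea, the toy below uses tabular Q-learning to learn a delay-minimizing server choice per vehicle zone; the zones, servers, and delay values are hypothetical and not taken from the paper:

```python
import random

random.seed(0)

# Hypothetical toy setup: 2 vehicle zones, 3 edge servers. Each zone sees a
# different delay per server (illustrative values only, not from the paper).
DELAYS = {
    0: [5.0, 2.0, 8.0],   # zone 0: server 1 is closest
    1: [3.0, 9.0, 1.0],   # zone 1: server 2 is closest
}
N_SERVERS = 3

def train(episodes=2000, alpha=0.1, epsilon=0.2):
    """Tabular Q-learning: learn which server minimizes delay per zone."""
    q = {zone: [0.0] * N_SERVERS for zone in DELAYS}   # Q[zone][server]
    for _ in range(episodes):
        zone = random.choice(list(DELAYS))             # a request arrives
        if random.random() < epsilon:                  # explore
            server = random.randrange(N_SERVERS)
        else:                                          # exploit current estimate
            server = max(range(N_SERVERS), key=lambda s: q[zone][s])
        reward = -DELAYS[zone][server]                 # lower delay, higher reward
        # Single-step (bandit-style) update; no next-state bootstrap here.
        q[zone][server] += alpha * (reward - q[zone][server])
    return q

q = train()
policy = {zone: max(range(N_SERVERS), key=lambda s: q[zone][s]) for zone in DELAYS}
print(policy)  # learned mapping: zone -> chosen edge server
```

A full treatment would also encode the ILP objectives (delay vs. edge-server utilization) in the reward and track vehicle mobility in the state; this sketch only shows the learning loop.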
Related papers
- Intelligent Mobile AI-Generated Content Services via Interactive Prompt Engineering and Dynamic Service Provisioning [55.641299901038316]
AI-generated content (AIGC) services can be provided by collaborative Mobile AIGC Service Providers (MASPs) at network edges, offering ubiquitous and customized content to resource-constrained users.
Such a paradigm faces two significant challenges: 1) raw prompts often lead to poor generation quality due to users' lack of experience with specific AIGC models, and 2) static service provisioning fails to efficiently utilize computational and communication resources.
We develop an interactive prompt engineering mechanism that leverages a Large Language Model (LLM) to generate customized prompt corpora and employs Inverse Reinforcement Learning (IRL) for policy imitation.
arXiv Detail & Related papers (2025-02-17T03:05:20Z) - STaleX: A Spatiotemporal-Aware Adaptive Auto-scaling Framework for Microservices [3.0846824529023382]
This paper presents a combination of control theory, machine learning, and spatiotemporal features to address these challenges.
We propose an adaptive auto-scaling framework, STaleX, that integrates these features, enabling real-time resource adjustments.
Our framework accounts for features including service specifications and dependencies among services, as well as temporal variations in workload.
arXiv Detail & Related papers (2025-01-30T20:19:13Z) - Reinforcement Learning Controlled Adaptive PSO for Task Offloading in IIoT Edge Computing [0.0]
Industrial Internet of Things (IIoT) applications demand efficient task offloading to handle heavy data loads with minimal latency.
Mobile Edge Computing (MEC) brings computation closer to devices to reduce latency and server load.
We propose a novel solution combining Adaptive Particle Swarm Optimization (APSO) with Reinforcement Learning, specifically Soft Actor-Critic (SAC).
arXiv Detail & Related papers (2025-01-25T13:01:54Z) - Resource Allocation for Twin Maintenance and Computing Task Processing in Digital Twin Vehicular Edge Computing Network [48.15151800771779]
Vehicular edge computing (VEC) can provide computing and caching services by deploying VEC servers near vehicles.
However, VEC networks still face challenges such as high vehicle mobility.
This study examines two types of delays caused by twin processing within the network.
arXiv Detail & Related papers (2024-07-10T12:08:39Z) - DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z) - A Learning-based Incentive Mechanism for Mobile AIGC Service in Decentralized Internet of Vehicles [49.86094523878003]
We propose a decentralized incentive mechanism for mobile AIGC service allocation.
We employ multi-agent deep reinforcement learning to find the balance between the supply of AIGC services on RSUs and user demand for services within the IoV context.
arXiv Detail & Related papers (2024-03-29T12:46:07Z) - Edge computing service deployment and task offloading based on
multi-task high-dimensional multi-objective optimization [5.64850919046892]
This study investigates service deployment and task offloading challenges in a multi-user environment.
To ensure stable service provisioning, beyond considering latency, energy consumption, and cost, network reliability is also incorporated.
To promote equitable usage of edge servers, load balancing is introduced as a fourth task offloading objective.
arXiv Detail & Related papers (2023-12-07T07:30:47Z) - Real-time Control of Electric Autonomous Mobility-on-Demand Systems via Graph Reinforcement Learning [14.073588678179865]
Electric Autonomous Mobility-on-Demand (E-AMoD) fleets need to make several real-time decisions.
We present the E-AMoD control problem through the lens of reinforcement learning.
We propose a graph network-based framework to achieve drastically improved scalability and superior performance.
arXiv Detail & Related papers (2023-11-09T22:57:21Z) - Adaptive Resource Allocation for Virtualized Base Stations in O-RAN with Online Learning [55.08287089554127]
Open Radio Access Network (O-RAN) systems, with their virtualized base stations (vBSs), offer operators the benefits of increased flexibility, reduced costs, vendor diversity, and interoperability.
We propose an online learning algorithm that balances the effective throughput and vBS energy consumption, even under unforeseeable and "challenging" environments.
We prove the proposed solutions achieve sub-linear regret, providing zero average optimality gap even in challenging environments.
arXiv Detail & Related papers (2023-09-04T17:30:21Z) - Dynamic Resource Allocation for Metaverse Applications with Deep
Reinforcement Learning [64.75603723249837]
This work proposes a novel framework to dynamically manage and allocate different types of resources for Metaverse applications.
We first propose an effective solution to divide applications into groups, namely MetaInstances, where common functions can be shared among applications.
Then, to capture the real-time, dynamic, and uncertain characteristics of request arrival and application departure processes, we develop a semi-Markov decision process-based framework.
arXiv Detail & Related papers (2023-02-27T00:30:01Z) - DRLD-SP: A Deep Reinforcement Learning-based Dynamic Service Placement
in Edge-Enabled Internet of Vehicles [4.010371060637208]
5G and edge computing have enabled the emergence of the Internet of Vehicles (IoV).
Limited resources at the edge, high vehicle mobility, increasing demand, and dynamicity in service request types have made service placement a challenging task.
A typical static placement solution is not effective as it does not consider the traffic mobility and service dynamics.
We propose a Deep Reinforcement Learning-based Dynamic Service Placement framework with the objective of minimizing the maximum edge resource usage and service delay.
arXiv Detail & Related papers (2021-06-11T10:17:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.