Experimental Validation of User Experience-focused Dynamic Onboard Service Orchestration for Software Defined Vehicles
- URL: http://arxiv.org/abs/2410.11847v1
- Date: Mon, 30 Sep 2024 06:50:51 GMT
- Title: Experimental Validation of User Experience-focused Dynamic Onboard Service Orchestration for Software Defined Vehicles
- Authors: Pierre Laclau, Stéphane Bonnet, Bertrand Ducourthial, Trista Lin, Xiaoting Li,
- Abstract summary: Software Defined Vehicles (SDVs) have emerged as a promising solution to the growing need for dynamic software features in automobiles.
They integrate dynamic onboard service management to handle the large variety of user-requested services during vehicle operation.
Allocating onboard resources efficiently in this setting is a challenging task, as it requires a balance between maximizing user experience and guaranteeing mixed-criticality Quality-of-Service (QoS) network requirements.
- Score: 28.56609990409653
- Abstract: In response to the growing need for dynamic software features in automobiles, Software Defined Vehicles (SDVs) have emerged as a promising solution. They integrate dynamic onboard service management to handle the large variety of user-requested services during vehicle operation. Allocating onboard resources efficiently in this setting is a challenging task, as it requires a balance between maximizing user experience and guaranteeing mixed-criticality Quality-of-Service (QoS) network requirements. Our previous research introduced a dynamic resource-based onboard service orchestration algorithm. This algorithm considers real-time in-vehicle and V2X network health, along with onboard resource constraints, to globally select degraded modes for onboard applications. It maximizes the overall user experience at all times while being embeddable onboard for on-the-fly decision-making. A key enabler of this approach is the introduction of the Automotive eXperience Integrity Level (AXIL), a metric expressing runtime priority for non-safety-critical applications. While initial simulation results demonstrated the algorithm's effectiveness, a comprehensive performance assessment would greatly contribute to validating its industrial feasibility. In this work, we present experimental results obtained from a dedicated test bench. These results illustrate, validate, and assess the practicality of our proposed solution, providing a solid foundation for the continued advancement of dynamic onboard service orchestration in SDVs.
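The selection step described in the abstract can be pictured as a multiple-choice knapsack: pick one (possibly degraded) mode per onboard application so that the AXIL-weighted experience is maximized without exceeding the available onboard resources. The sketch below is a minimal illustration of that idea under simplified assumptions (a single CPU-like resource, exhaustive search, and illustrative `App`/`Mode` structures); it is not the authors' implementation, which also accounts for in-vehicle and V2X network health and mixed-criticality QoS constraints.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Mode:
    name: str          # e.g. "full", "degraded", "off"
    cpu: float         # resource demand of this mode (single illustrative resource)
    experience: float  # user-experience contribution in [0, 1]

@dataclass
class App:
    name: str
    axil: int          # AXIL: runtime priority of a non-safety-critical application
    modes: list[Mode]  # candidate (possibly degraded) operating modes

def orchestrate(apps: list[App], cpu_budget: float) -> dict[str, Mode]:
    """Pick one mode per app, maximizing AXIL-weighted experience
    under a global resource budget (exhaustive search for clarity)."""
    best_score, best_choice = float("-inf"), None
    for combo in product(*(app.modes for app in apps)):
        cpu = sum(m.cpu for m in combo)
        if cpu > cpu_budget:
            continue  # infeasible allocation, skip
        score = sum(app.axil * m.experience for app, m in zip(apps, combo))
        if score > best_score:
            best_score, best_choice = score, combo
    if best_choice is None:
        raise RuntimeError("no feasible mode assignment under the budget")
    return {app.name: mode for app, mode in zip(apps, best_choice)}

if __name__ == "__main__":
    apps = [
        App("navigation", axil=3,
            modes=[Mode("full", 2.0, 1.0), Mode("degraded", 1.0, 0.6)]),
        App("media", axil=1,
            modes=[Mode("full", 1.5, 1.0), Mode("degraded", 0.5, 0.4), Mode("off", 0.0, 0.0)]),
    ]
    print(orchestrate(apps, cpu_budget=2.5))
```

An embedded orchestrator would replace the exhaustive search with a solver fast enough for on-the-fly decision-making, but the objective structure (AXIL-weighted experience under resource constraints) is the same.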
Related papers
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z) - A Learning-based Incentive Mechanism for Mobile AIGC Service in Decentralized Internet of Vehicles [49.86094523878003]
We propose a decentralized incentive mechanism for mobile AIGC service allocation.
We employ multi-agent deep reinforcement learning to find the balance between the supply of AIGC services on RSUs and user demand for services within the IoV context.
arXiv Detail & Related papers (2024-03-29T12:46:07Z) - Real-time Control of Electric Autonomous Mobility-on-Demand Systems via Graph Reinforcement Learning [14.073588678179865]
Electric Autonomous Mobility-on-Demand (E-AMoD) fleets need to make several real-time decisions.
We present the E-AMoD control problem through the lens of reinforcement learning.
We propose a graph network-based framework to achieve drastically improved scalability and superior performance over optimization-based approaches.
arXiv Detail & Related papers (2023-11-09T22:57:21Z) - Adaptive Resource Allocation for Virtualized Base Stations in O-RAN with Online Learning [60.17407932691429]
Open Radio Access Network systems, with their virtualized base stations (vBSs), offer operators the benefits of increased flexibility, reduced costs, vendor diversity, and interoperability.
We propose an online learning algorithm that balances the effective throughput and vBS energy consumption, even under unforeseeable and "challenging" environments.
We prove the proposed solutions achieve sub-linear regret, providing zero average optimality gap even in challenging environments.
arXiv Detail & Related papers (2023-09-04T17:30:21Z) - Optimistic Active Exploration of Dynamical Systems [52.91573056896633]
We develop an algorithm for active exploration called OPAX.
We show how OPAX can be reduced to an optimal control problem that can be solved at each episode.
Our experiments show that OPAX is not only theoretically sound but also performs well for zero-shot planning on novel downstream tasks.
arXiv Detail & Related papers (2023-06-21T16:26:59Z) - A multi-functional simulation platform for on-demand ride service operations [15.991607428235257]
We propose a novel multi-functional and open-sourced simulation platform for ride-sourcing systems.
It can simulate the behaviors and movements of various agents on a real transportation network.
It provides a few accessible portals for users to train and test various optimization algorithms.
arXiv Detail & Related papers (2023-03-22T06:25:19Z) - Scalable Vehicle Re-Identification via Self-Supervision [66.2562538902156]
Vehicle Re-Identification is one of the key elements in city-scale vehicle analytics systems.
Many state-of-the-art solutions for vehicle re-id mostly focus on improving the accuracy on existing re-id benchmarks and often ignore computational complexity.
We propose a simple yet effective hybrid solution empowered by self-supervised training which only uses a single network during inference time.
arXiv Detail & Related papers (2022-05-16T12:14:42Z) - DRLD-SP: A Deep Reinforcement Learning-based Dynamic Service Placement in Edge-Enabled Internet of Vehicles [4.010371060637208]
5G and edge computing have enabled the emergence of the Internet of Vehicles (IoV).
Limited resources at the edge, high vehicle mobility, increasing demand, and dynamic service request types have made service placement a challenging task.
A typical static placement solution is not effective as it does not consider the traffic mobility and service dynamics.
We propose a Deep Reinforcement Learning-based Dynamic Service Placement framework with the objective of minimizing the maximum edge resource usage and service delay.
arXiv Detail & Related papers (2021-06-11T10:17:27Z) - Reinforcement Learning-based Dynamic Service Placement in Vehicular Networks [4.010371060637208]
The complexity of traffic mobility patterns and the dynamics of requests for different types of services have made service placement a challenging task.
A typical static placement solution is not effective as it does not consider the traffic mobility and service dynamics.
We propose a reinforcement learning-based dynamic (RL-Dynamic) service placement framework to find the optimal placement of services at the edge servers.
arXiv Detail & Related papers (2021-05-31T15:01:35Z) - Real-world Ride-hailing Vehicle Repositioning using Deep Reinforcement Learning [52.2663102239029]
We present a new practical framework based on deep reinforcement learning and decision-time planning for real-world vehicle repositioning on ride-hailing platforms.
Our approach learns a ride-based state-value function using a batch training algorithm with deep value networks.
We benchmark our algorithm with baselines in a ride-hailing simulation environment to demonstrate its superiority in improving income efficiency.
arXiv Detail & Related papers (2021-03-08T05:34:05Z) - Fast Approximate Solutions using Reinforcement Learning for Dynamic Capacitated Vehicle Routing with Time Windows [3.5232085374661284]
This paper develops an inherently parallelised, fast, approximate learning-based solution to the generic class of Capacitated Vehicle Routing with Time Windows and Dynamic Routing (CVRP-TWDR).
Considering vehicles in a fleet as decentralised agents, we postulate that using reinforcement learning (RL) based adaptation is a key enabler for real-time route formation in a dynamic environment.
arXiv Detail & Related papers (2021-02-24T06:30:16Z)