Multi-Agent Deep Reinforcement Learning for Safe Autonomous Driving with RICS-Assisted MEC
- URL: http://arxiv.org/abs/2503.19418v1
- Date: Tue, 25 Mar 2025 07:53:50 GMT
- Title: Multi-Agent Deep Reinforcement Learning for Safe Autonomous Driving with RICS-Assisted MEC
- Authors: Xueyao Zhang, Bo Yang, Xuelin Cao, Zhiwen Yu, George C. Alexandropoulos, Yan Zhang, Merouane Debbah, Chau Yuen,
- Abstract summary: Environment sensing and fusion via onboard sensors are envisioned to be widely applied in future autonomous driving networks. To improve spectrum utilization, the V2V links may reuse the same frequency spectrum as the V2I links, which may cause severe interference. To tackle this issue, we leverage reconfigurable intelligent computational surfaces (RICSs) to jointly enable V2I reflective links and mitigate interference appearing at the V2V links.
- Score: 36.36591743123764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Environment sensing and fusion via onboard sensors are envisioned to be widely applied in future autonomous driving networks. This paper considers a vehicular system with multiple self-driving vehicles that is assisted by multi-access edge computing (MEC), where image data collected by the sensors is offloaded from cellular vehicles to the MEC server using vehicle-to-infrastructure (V2I) links. Sensory data can also be shared among surrounding vehicles via vehicle-to-vehicle (V2V) communication links. To improve spectrum utilization, the V2V links may reuse the same frequency spectrum as the V2I links, which may cause severe interference. To tackle this issue, we leverage reconfigurable intelligent computational surfaces (RICSs) to jointly enable V2I reflective links and mitigate the interference appearing at the V2V links. Traditional algorithms struggle with this problem because they typically assume quasi-static channel state information, which restricts their ability to adapt to dynamic environmental changes and leads to poor performance under frequently varying channel conditions. We therefore formulate the problem at hand as a Markov game. Our novel formulation is applied to time-varying channels subject to multi-user interference and introduces a collaborative learning mechanism among users. The considered optimization problem is solved via a driving safety-enabled multi-agent deep reinforcement learning (DS-MADRL) approach that capitalizes on the RICS presence. Our extensive numerical investigations showcase that the proposed reinforcement learning approach achieves faster convergence and significant enhancements in both data rate and driving safety, as compared to various state-of-the-art benchmarks.
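To make the Markov-game framing concrete, the sketch below shows independent Q-learning agents (one per V2V pair) choosing spectrum sub-bands in a toy time-varying environment, with a reward mixing a rate proxy and a crude driving-safety bonus. This is only an illustrative sketch under assumed toy parameters, not the authors' DS-MADRL algorithm, which operates on the actual V2I/V2V channel model, includes the RICS configuration in the decision space, and uses deep networks with a collaborative learning mechanism.

```python
# Minimal sketch (NOT the paper's DS-MADRL implementation): independent Q-learning
# agents for a toy Markov game where each V2V pair picks a sub-band to reuse,
# trading its own rate against interference to others. All parameters below
# (agent/band counts, reward weights) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
NUM_AGENTS, NUM_BANDS, NUM_STATES = 3, 4, 5    # toy sizes (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1              # learning rate, discount, exploration
SAFETY_WEIGHT = 0.5                             # weight of a crude "driving safety" bonus (assumed)

# One Q-table per agent: Q[state, action]; states are coarse channel-quality bins.
Q = [np.zeros((NUM_STATES, NUM_BANDS)) for _ in range(NUM_AGENTS)]

def step(states, actions):
    """Toy environment: reward = rate proxy - interference penalty + safety bonus."""
    rewards, next_states = [], []
    for i, (s, a) in enumerate(zip(states, actions)):
        collisions = sum(1 for j, b in enumerate(actions) if j != i and b == a)
        rate = (s + 1) / NUM_STATES - 0.3 * collisions           # spectrum-reuse interference
        safety = SAFETY_WEIGHT * (1.0 if collisions == 0 else 0.0)
        rewards.append(rate + safety)
        next_states.append(int(rng.integers(NUM_STATES)))        # time-varying channel (random toy model)
    return rewards, next_states

states = [int(rng.integers(NUM_STATES)) for _ in range(NUM_AGENTS)]
for episode in range(2000):
    # epsilon-greedy action selection per agent
    actions = [int(rng.integers(NUM_BANDS)) if rng.random() < EPS else int(np.argmax(Q[i][s]))
               for i, s in enumerate(states)]
    rewards, next_states = step(states, actions)
    # independent Q-learning update per agent
    for i in range(NUM_AGENTS):
        s, a, r, s2 = states[i], actions[i], rewards[i], next_states[i]
        Q[i][s, a] += ALPHA * (r + GAMMA * Q[i][s2].max() - Q[i][s, a])
    states = next_states

print("Greedy band per agent in channel state 0:",
      [int(np.argmax(Q[i][0])) for i in range(NUM_AGENTS)])
```

In the paper's setting, the action space would presumably also cover RICS configuration and offloading decisions, and agents would exchange information during training; the sketch above only captures an independent-learner baseline.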
Related papers
- Joint Adaptive OFDM and Reinforcement Learning Design for Autonomous Vehicles: Leveraging Age of Updates [2.607046313483251]
Millimeter wave (mmWave)-based orthogonal frequency-division multiplexing (OFDM) stands out as a suitable alternative for high-resolution sensing and high-speed data transmission. In this work, we consider an autonomous vehicle network where an AV utilizes its queue state information (QSI) and channel state information (CSI) in conjunction with reinforcement learning techniques to manage communication and sensing.
arXiv Detail & Related papers (2024-12-24T15:32:58Z) - Hybrid-Generative Diffusion Models for Attack-Oriented Twin Migration in Vehicular Metaverses [58.264499654343226]
Vehicle Twins (VTs) are digital twins that provide immersive virtual services for Vehicular Metaverse Users (VMUs).
High mobility of vehicles, uneven deployment of edge servers, and potential security threats pose challenges to achieving efficient and reliable VT migrations.
We propose a secure and reliable VT migration framework in vehicular metaverses.
arXiv Detail & Related papers (2024-07-05T11:11:33Z) - Deep-Reinforcement-Learning-Based AoI-Aware Resource Allocation for RIS-Aided IoV Networks [43.443526528832145]
We propose a RIS-assisted internet of vehicles (IoV) network, considering the vehicle-to-everything (V2X) communication method.
In order to improve the timeliness of vehicle-to-infrastructure (V2I) links and the stability of vehicle-to-vehicle (V2V) links, we introduce the age of information (AoI) model and the payload transmission probability model (a minimal AoI recursion is sketched after this list).
arXiv Detail & Related papers (2024-06-17T06:16:07Z) - Enhancing Track Management Systems with Vehicle-To-Vehicle Enabled Sensor Fusion [0.0]
This paper proposes a novel Vehicle-to-Vehicle (V2V) enabled track management system.
The core innovation lies in the creation of independent priority track lists, consisting of fused detections validated through V2V communication.
The proposed system also accounts for possible falsification of V2X signals, which is combated through an initial vehicle identification process using detections from the perception sensors.
arXiv Detail & Related papers (2024-04-26T20:54:44Z) - Deep Reinforcement Learning Algorithms for Hybrid V2X Communication: A Benchmarking Study [39.214784277182304]
This paper addresses the vertical handover problem in V2X using Deep Reinforcement Learning (DRL) algorithms.
The benchmarked algorithms outperform the current state-of-the-art approaches in terms of redundancy and usage rate of V-VLC headlights.
arXiv Detail & Related papers (2023-10-04T12:32:14Z) - Reinforcement Learning for Joint V2I Network Selection and Autonomous Driving Policies [14.518558523319518]
Vehicle-to-Infrastructure (V2I) communication is becoming critical for the enhanced reliability of autonomous vehicles (AVs).
It is critical to simultaneously optimize the AVs' network selection and driving policies in order to minimize road collisions.
We develop a reinforcement learning framework to characterize efficient network selection and autonomous driving policies.
arXiv Detail & Related papers (2022-08-03T04:33:02Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - Federated Deep Learning Meets Autonomous Vehicle Perception: Design and Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicles (FLCAV) have been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
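As a side note on the Age-of-Information concept referenced in the AoI-aware resource allocation entry above, the following is a minimal, hypothetical sketch of the standard AoI recursion (age grows by one each slot and resets when a status update is delivered); the success probability and slot count are assumptions, not values from that paper.

```python
# Toy AoI trace, illustrative only: age += 1 each slot, reset to 1 on successful delivery.
import random

def simulate_aoi(num_slots=20, p_success=0.6, seed=1):
    random.seed(seed)
    age, trace = 0, []
    for _ in range(num_slots):
        if random.random() < p_success:   # status update delivered (e.g., over a V2I link)
            age = 1
        else:                             # no delivery this slot: information keeps aging
            age += 1
        trace.append(age)
    return trace

print(simulate_aoi())
```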