Spatial-Temporal-Aware Safe Multi-Agent Reinforcement Learning of
Connected Autonomous Vehicles in Challenging Scenarios
- URL: http://arxiv.org/abs/2210.02300v1
- Date: Wed, 5 Oct 2022 14:39:07 GMT
- Title: Spatial-Temporal-Aware Safe Multi-Agent Reinforcement Learning of
Connected Autonomous Vehicles in Challenging Scenarios
- Authors: Zhili Zhang, Songyang Han, Jiangwei Wang, Fei Miao
- Abstract summary: Communication technologies enable coordination among connected and autonomous vehicles (CAVs).
We propose a framework of constrained multi-agent reinforcement learning (MARL) with a parallel safety shield for CAVs.
Results show that our proposed methodology significantly increases system safety and efficiency in challenging scenarios.
- Score: 10.37986799561165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication technologies enable coordination among connected and autonomous
vehicles (CAVs). However, it remains unclear how to utilize shared information
to improve the safety and efficiency of the CAV system. In this work, we
propose a framework of constrained multi-agent reinforcement learning (MARL)
with a parallel safety shield for CAVs in challenging driving scenarios. The
coordination mechanisms of the proposed MARL include information sharing and
cooperative policy learning, with Graph Convolutional Network (GCN)-Transformer
as a spatial-temporal encoder that enhances the agent's environment awareness.
The safety shield module with Control Barrier Functions (CBF)-based safety
checking protects the agents from taking unsafe actions. We design a
constrained multi-agent advantage actor-critic (CMAA2C) algorithm to train safe
and cooperative policies for CAVs. We deploy the experiments in the CARLA
simulator and verify the effectiveness of the safety checking, the
spatial-temporal encoder, and the coordination mechanisms of our method
through comparative experiments in several challenging scenarios involving the
defined hazard vehicles (HAZVs). Results show that our proposed methodology significantly increases
system safety and efficiency in challenging scenarios.
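The abstract names two components that carry most of the technical weight: a GCN-Transformer spatial-temporal encoder over shared CAV observations and a CBF-based parallel safety shield that screens the policy's actions. The following Python/PyTorch sketch shows, under stated assumptions, how such pieces could be wired together. The class names, the one-layer GCN, the car-following barrier h = gap - d_min, and all constants are illustrative assumptions made for this note, not the paper's released implementation.

```python
# Hedged sketch: NOT the authors' code. It illustrates, under simplifying
# assumptions, (1) a GCN-Transformer spatial-temporal encoder over shared CAV
# observations and (2) a CBF-style safety check that blocks unsafe actions.
import torch
import torch.nn as nn


class SpatialTemporalEncoder(nn.Module):
    """GCN over the inter-vehicle graph at each time step, Transformer over time."""

    def __init__(self, feat_dim: int, hidden_dim: int = 64, num_layers: int = 2):
        super().__init__()
        self.gcn_weight = nn.Linear(feat_dim, hidden_dim)  # one-layer GCN (illustrative)
        layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (T, N, F) features of N vehicles over T steps
        # adj:        (T, N, N) row-normalized adjacency (communication graph)
        spatial = torch.relu(adj @ self.gcn_weight(node_feats))  # (T, N, H)
        per_agent = spatial.permute(1, 0, 2)                     # (N, T, H)
        return self.temporal(per_agent)[:, -1, :]                # latest embedding per agent


def cbf_safe(gap: float, rel_speed: float, action_accel: float,
             dt: float = 0.1, d_min: float = 5.0, alpha: float = 0.5) -> bool:
    """Discrete-time CBF-style check on a car-following barrier h = gap - d_min.

    Approves an acceleration only if h(x_next) - h(x) + alpha * h(x) >= 0,
    i.e. the barrier may shrink no faster than rate alpha. The one-step
    dynamics and the constants are placeholders, not the paper's formulation.
    """
    h = gap - d_min
    next_gap = gap + (rel_speed - action_accel * dt) * dt  # crude one-step rollout
    h_next = next_gap - d_min
    return (h_next - h + alpha * h) >= 0.0


def shielded_action(candidates, gap, rel_speed):
    """Parallel safety shield: keep the policy's preferred action if it passes
    the CBF check, otherwise fall back to the safest admissible candidate."""
    for accel in candidates:          # candidates sorted by policy preference
        if cbf_safe(gap, rel_speed, accel):
            return accel
    return min(candidates)            # hard fallback: strongest braking
```

In a design along these lines, the CMAA2C policy would rank candidate accelerations and the shield would only override that ranking when the CBF condition fails, which is one plausible reading of how "parallel safety shield" and "constrained MARL" fit together in the abstract.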
Related papers
- OPTIMA: Optimized Policy for Intelligent Multi-Agent Systems Enables Coordination-Aware Autonomous Vehicles [9.41740133451895]
This work introduces OPTIMA, a novel distributed reinforcement learning framework for cooperative autonomous vehicle tasks.
Our goal is to improve the generality and performance of CAVs in highly complex and crowded scenarios.
arXiv Detail & Related papers (2024-10-09T03:28:45Z)
- Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency in complex scenarios are still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of the counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision (see the illustrative sketch after this list).
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Convergence of Communications, Control, and Machine Learning for Secure and Autonomous Vehicle Navigation [78.60496411542549]
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z)
- Trust-Aware Resilient Control and Coordination of Connected and Automated Vehicles [11.97553028903872]
Adversarial attacks can cause safety violations resulting in collisions and traffic jams.
We propose a decentralized resilient control and coordination scheme that mitigates the effects of adversarial attacks and uncooperative CAVs.
arXiv Detail & Related papers (2023-05-26T10:57:51Z)
- Shared Information-Based Safe And Efficient Behavior Planning For Connected Autonomous Vehicles [6.896682830421197]
We design an integrated information sharing and safe multi-agent reinforcement learning framework for connected autonomous vehicles.
We first use weight-pruned convolutional neural networks (CNNs) to process the raw camera images and LiDAR point cloud data locally at each autonomous vehicle.
We then design a safe actor-critic algorithm that utilizes both a vehicle's local observation and the information received via V2V communication.
arXiv Detail & Related papers (2023-02-08T20:31:41Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
- Smart and Secure CAV Networks Empowered by AI-Enabled Blockchain: Next Frontier for Intelligent Safe-Driving Assessment [17.926728975133113]
Ensuring safe driving conditions for connected and autonomous vehicles (CAVs) continues to be a widespread concern.
We propose a novel framework of blockchain-enabled intElligent Safe-driving assessmenT (BEST) to offer a smart and reliable assessment approach.
arXiv Detail & Related papers (2021-04-09T19:08:34Z)
- A Multi-Agent Reinforcement Learning Approach For Safe and Efficient Behavior Planning Of Connected Autonomous Vehicles [21.132777568170702]
We design an information-sharing-based reinforcement learning framework for connected autonomous vehicles.
We show that our approach can improve the CAV system's efficiency in terms of average velocity and comfort.
We construct an obstacle-at-corner scenario to show that the shared vision can help CAVs to observe obstacles earlier and take action to avoid traffic jams.
arXiv Detail & Related papers (2020-03-09T19:15:30Z)
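For the counterfactual-safety-margin entry above, here is a minimal, self-contained Python sketch of the underlying idea: search for the smallest deviation from a nominal plan that produces a collision. The 1-D kinematics, the constant acceleration offset, the grid step, and all thresholds are assumptions made for illustration; that paper's actual formulation is more general.

```python
# Hedged sketch of the "counterfactual safety margin": the smallest deviation
# from the nominal trajectory that leads to a collision. Everything below
# (1-D kinematics, grid search, thresholds) is illustrative, not that paper's method.
import numpy as np


def collides(ego_positions: np.ndarray, obstacle_position: float,
             safety_radius: float = 2.0) -> bool:
    """Collision if the ego ever comes within safety_radius of a static obstacle."""
    return bool(np.any(np.abs(ego_positions - obstacle_position) < safety_radius))


def counterfactual_safety_margin(nominal_accel: np.ndarray, x0: float, v0: float,
                                 obstacle_position: float, dt: float = 0.1,
                                 max_perturbation: float = 5.0) -> float:
    """Grid-search the smallest constant acceleration offset that causes a collision.

    Returns float('inf') if no perturbation up to max_perturbation collides.
    """
    for delta in np.arange(0.0, max_perturbation, 0.05):
        for sign in (+1.0, -1.0):
            accel = nominal_accel + sign * delta
            vel = v0 + np.cumsum(accel) * dt      # forward-integrate 1-D dynamics
            pos = x0 + np.cumsum(vel) * dt
            if collides(pos, obstacle_position):
                return float(delta)               # minimum deviation found
    return float("inf")


if __name__ == "__main__":
    # Ego cruising at 10 m/s toward a stopped obstacle 80 m ahead; nominal plan: coast.
    horizon = 50
    margin = counterfactual_safety_margin(
        nominal_accel=np.zeros(horizon), x0=0.0, v0=10.0, obstacle_position=80.0)
    print(f"counterfactual safety margin: {margin:.2f} m/s^2")
```

A larger margin means the nominal behavior can tolerate larger perturbations before a collision becomes possible, which is the intuition behind using it as a risk score for different AV behaviors.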