Efficient Connected and Automated Driving System with Multi-agent Graph
Reinforcement Learning
- URL: http://arxiv.org/abs/2007.02794v5
- Date: Fri, 22 Oct 2021 21:17:06 GMT
- Title: Efficient Connected and Automated Driving System with Multi-agent Graph
Reinforcement Learning
- Authors: Tianyu Shi, Jiawei Wang, Yuankai Wu, Luis Miranda-Moreno, Lijun Sun
- Abstract summary: Connected and automated vehicles (CAVs) have attracted increasing attention recently.
We focus on improving the outcomes of the overall transportation system by allowing the automated vehicles to learn to cooperate with one another.
- Score: 22.369111982782634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Connected and automated vehicles (CAVs) have attracted increasing
attention recently. Their fast actuation time gives them the potential to improve
the efficiency and safety of the whole transportation system. Due to technical
challenges, only a proportion of vehicles can be equipped with automation, while
the remaining vehicles stay human-driven. Instead of learning a reliable behavior
for the ego automated vehicle alone, we focus on improving the outcomes of the
total transportation system by allowing each automated vehicle to learn to
cooperate with the others and to regulate human-driven traffic flow. One
state-of-the-art approach is to use reinforcement learning to learn an intelligent
decision-making policy. However, a direct reinforcement learning framework cannot
improve the performance of the whole system. In this article, we demonstrate that
considering the problem in a multi-agent setting with a shared policy achieves
better system performance than a non-shared policy in a single-agent setting.
Furthermore, we find that applying an attention mechanism to interaction features
captures the interplay between agents and thereby boosts cooperation. To the best
of our knowledge, while previous automated driving studies mainly focus on
enhancing an individual vehicle's driving performance, this work serves as a
starting point for research on system-level multi-agent cooperation using graph
information sharing. We conduct extensive experiments in car-following and
unsignalized intersection settings. The results demonstrate that CAVs controlled
by our method achieve the best performance compared with several state-of-the-art
baselines.
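The abstract names two concrete ingredients: a single policy shared by all CAVs and an attention mechanism over interaction features gathered from the vehicle graph. The PyTorch sketch below illustrates how those two ideas can fit together; the module name, layer sizes, and the scaled dot-product attention formulation are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (assumed details, not the paper's code): each CAV encodes
# its own state, attends over the interaction features of vehicles connected to
# it in the graph, and every agent evaluates the same shared policy network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedGraphAttentionPolicy(nn.Module):
    def __init__(self, state_dim: int, hidden_dim: int = 64, action_dim: int = 2):
        super().__init__()
        self.encoder = nn.Linear(state_dim, hidden_dim)   # per-vehicle state encoder
        self.q_proj = nn.Linear(hidden_dim, hidden_dim)   # query from the ego CAV
        self.k_proj = nn.Linear(hidden_dim, hidden_dim)   # keys from neighbours
        self.v_proj = nn.Linear(hidden_dim, hidden_dim)   # values from neighbours
        self.policy_head = nn.Linear(2 * hidden_dim, action_dim)  # shared by all agents

    def forward(self, ego_state, neighbour_states, adjacency_mask):
        # ego_state: (B, state_dim); neighbour_states: (B, N, state_dim)
        # adjacency_mask: (B, N), 1 where a neighbour is connected in the graph
        # (each ego is assumed to have at least one connected neighbour).
        h_ego = torch.relu(self.encoder(ego_state))            # (B, H)
        h_nbr = torch.relu(self.encoder(neighbour_states))     # (B, N, H)

        q = self.q_proj(h_ego).unsqueeze(1)                    # (B, 1, H)
        k = self.k_proj(h_nbr)                                 # (B, N, H)
        v = self.v_proj(h_nbr)                                 # (B, N, H)

        scores = (q * k).sum(-1) / k.size(-1) ** 0.5           # (B, N) attention logits
        scores = scores.masked_fill(adjacency_mask == 0, float("-inf"))
        attn = F.softmax(scores, dim=-1)                       # weights over neighbours
        context = (attn.unsqueeze(-1) * v).sum(dim=1)          # (B, H) aggregated interplay

        return self.policy_head(torch.cat([h_ego, context], dim=-1))


# The same module (and hence the same parameters) is evaluated once per CAV,
# which is what "shared policy" means in the multi-agent setting above.
policy = SharedGraphAttentionPolicy(state_dim=4)
actions = policy(torch.randn(8, 4), torch.randn(8, 5, 4), torch.ones(8, 5))
```

In the actual method the aggregated features would feed a reinforcement learning update rather than a plain forward pass; the sketch only shows how attention over graph neighbours can be combined with parameter sharing across agents.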
Related papers
- SPformer: A Transformer Based DRL Decision Making Method for Connected Automated Vehicles [9.840325772591024]
We propose a CAV decision-making architecture based on transformer and reinforcement learning algorithms.
A learnable policy token is used as the learning medium of the multi-vehicle joint policy.
Our model can make good use of all the state information of vehicles in the traffic scenario.
arXiv Detail & Related papers (2024-09-23T15:16:35Z) - Learning Driver Models for Automated Vehicles via Knowledge Sharing and
Personalization [2.07180164747172]
This paper describes a framework for learning Automated Vehicle (AV) driver models via knowledge sharing between vehicles and personalization.
It finds several applications across transportation engineering, including intelligent transportation systems, traffic management, and vehicle-to-vehicle communication.
arXiv Detail & Related papers (2023-08-31T17:18:15Z) - FastRLAP: A System for Learning High-Speed Driving via Deep RL and
Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z) - Automatic Intersection Management in Mixed Traffic Using Reinforcement
Learning and Graph Neural Networks [0.5801044612920815]
Connected automated driving has the potential to significantly improve urban traffic efficiency.
Cooperative behavior planning can be employed to jointly optimize the motion of multiple vehicles.
The present work proposes to leverage reinforcement learning and a graph-based scene representation for cooperative multi-agent planning.
arXiv Detail & Related papers (2023-01-30T08:21:18Z) - Unified Automatic Control of Vehicular Systems with Reinforcement
Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high-performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - Multi-Agent Car Parking using Reinforcement Learning [0.0]
This study applies reinforcement learning to the problem of multi-agent car parking.
We design and implement a flexible car parking environment in the form of a Markov decision process with independent learners.
We obtain models that park up to 7 cars with a success rate of over 98.1%, significantly beating existing single-agent models.
arXiv Detail & Related papers (2022-06-22T16:50:04Z) - Transferable Deep Reinforcement Learning Framework for Autonomous
Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss detection probability by up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning [63.56464608571663]
Navigating through intersections is one of the most challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network for predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z) - Improving Robustness of Learning-based Autonomous Steering Using
Adversarial Images [58.287120077778205]
We introduce a framework for analyzing the robustness of the learning algorithm with respect to varying quality in the image input for autonomous driving.
Using the results of sensitivity analysis, we propose an algorithm to improve the overall performance of the task of "learning to steer".
arXiv Detail & Related papers (2021-02-26T02:08:07Z) - A Multi-Agent Reinforcement Learning Approach For Safe and Efficient
Behavior Planning Of Connected Autonomous Vehicles [21.132777568170702]
We design an information-sharing-based reinforcement learning framework for connected autonomous vehicles.
We show that our approach can improve the CAV system's efficiency in terms of average velocity and comfort.
We construct an obstacle-at-corner scenario to show that the shared vision can help CAVs to observe obstacles earlier and take action to avoid traffic jams.
arXiv Detail & Related papers (2020-03-09T19:15:30Z)