Multi-task Safe Reinforcement Learning for Navigating Intersections in Dense Traffic
- URL: http://arxiv.org/abs/2202.09644v1
- Date: Sat, 19 Feb 2022 17:09:46 GMT
- Title: Multi-task Safe Reinforcement Learning for Navigating Intersections in Dense Traffic
- Authors: Yuqi Liu, Qichao Zhang, Dongbin Zhao
- Abstract summary: Multi-task intersection navigation remains a challenging task for autonomous driving.
For a human driver, the skill of negotiating with other interactive vehicles is key to guaranteeing safety and efficiency.
We formulate multi-task safe reinforcement learning with social attention to improve safety and efficiency when interacting with other traffic participants.
- Score: 10.085223486314929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-task intersection navigation, including unprotected left turns,
right turns, and going straight in dense traffic, is still a challenging task
for autonomous driving. For a human driver, the skill of negotiating with other
interactive vehicles is the key to guaranteeing safety and efficiency. However,
it is hard to balance the safety and efficiency of an autonomous vehicle across
multiple intersection-navigation tasks. In this paper, we formulate a multi-task
safe reinforcement learning framework with social attention to improve safety
and efficiency when interacting with other traffic participants. Specifically,
the social attention module is used to focus on the states of negotiating
vehicles. In addition, a safety layer is added to the multi-task reinforcement
learning framework to guarantee safe negotiation. We conduct experiments in the
SUMO simulator with abundant traffic flows and in CARLA with high-fidelity
vehicle models; both show that the proposed algorithm improves safety while
maintaining consistent traffic efficiency for multi-task intersection navigation.
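As a rough illustration of the two components the abstract names (this is a toy sketch, not the authors' implementation; the relevance score, constraint form, and all thresholds are assumptions), a social attention module pools the states of nearby vehicles weighted by learned relevance, and a safety layer overrides the policy's action when a constraint would be violated:

```python
import math

def social_attention(ego, vehicles):
    """Attention-weighted pooling of neighbouring vehicles' states.

    ego: ego-vehicle feature vector; vehicles: list of neighbour feature
    vectors of the same length. Learned projections are omitted for
    brevity: relevance here is a scaled dot product between the ego
    state and each neighbour state.
    """
    d = len(ego)
    scores = [sum(e * x for e, x in zip(ego, v)) / math.sqrt(d)
              for v in vehicles]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]        # softmax attention weights
    # relevance-weighted sum of neighbour states: the "context" vector
    return [sum(w * v[j] for w, v in zip(weights, vehicles))
            for j in range(d)]

def safety_layer(accel, gap, closing_speed, t_min=2.0, a_brake=-4.0):
    """Project a proposed acceleration onto a toy safe set.

    If the time headway to the lead vehicle predicted one second ahead
    drops below t_min, override the policy's action with hard braking.
    The constraint form and thresholds are illustrative assumptions.
    """
    predicted_gap = gap - closing_speed * 1.0 - 0.5 * accel
    if closing_speed > 0 and predicted_gap / closing_speed < t_min:
        return a_brake   # unsafe: override with maximum braking
    return accel         # safe: pass the policy's action through

ctx = social_attention([1.0, 0.0], [[0.5, 2.0], [3.0, -1.0]])
print(len(ctx))                                        # 2
print(safety_layer(1.0, gap=5.0, closing_speed=6.0))   # -4.0 (brake)
print(safety_layer(1.0, gap=50.0, closing_speed=2.0))  # 1.0 (unchanged)
```

In the paper's setting the attention context would feed the shared multi-task policy network, and the safety layer would sit between the policy output and the actuators.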
Related papers
- A Conflicts-free, Speed-lossless KAN-based Reinforcement Learning Decision System for Interactive Driving in Roundabouts [17.434924472015812]
This paper introduces a learning-based algorithm tailored to foster safe and efficient driving behaviors in roundabouts.
The proposed algorithm employs a deep Q-learning network to learn safe and efficient driving strategies in complex multi-vehicle roundabouts.
The results show that our proposed system consistently achieves safe and efficient driving whilst maintaining a stable training process.
arXiv Detail & Related papers (2024-08-15T16:10:25Z)
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
- CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training them on safety-critical scenarios.
CAT can effectively generate adversarial scenarios that counter the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z)
- Evaluation of Safety Constraints in Autonomous Navigation with Deep Reinforcement Learning [62.997667081978825]
We compare two learnable navigation policies: safe and unsafe.
The safe policy takes the constraints into account, while the other does not.
We show that the safe policy is able to generate trajectories with more clearance (distance to obstacles) and causes fewer collisions during training, without sacrificing overall performance.
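The safe-versus-unsafe comparison boils down to training with or without a constraint cost on obstacle clearance. A minimal sketch of one common way to fold such a constraint into the reward (the penalty form, threshold, and weight are assumptions, not this paper's formulation):

```python
def constrained_reward(task_reward, clearance, d_safe=1.0, lam=5.0):
    """Task reward minus a penalty that grows linearly once the agent
    comes closer than d_safe metres to an obstacle. The "unsafe" policy
    corresponds to lam = 0, i.e. the constraint is simply ignored."""
    violation = max(0.0, d_safe - clearance)
    return task_reward - lam * violation

print(constrained_reward(1.0, clearance=2.0))  # 1.0  (no violation)
print(constrained_reward(1.0, clearance=0.5))  # -1.5 (0.5 m deficit)
```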
arXiv Detail & Related papers (2023-07-27T01:04:57Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- Multi-Task Conditional Imitation Learning for Autonomous Navigation at Crowded Intersections [4.961474432432092]
We focus on autonomous navigation at crowded intersections that require interaction with pedestrians.
A multi-task conditional imitation learning framework is proposed to adapt both lateral and longitudinal control tasks.
A new benchmark called IntersectNav is developed and human demonstrations are provided.
arXiv Detail & Related papers (2022-02-21T11:13:59Z)
- Learning Interaction-aware Guidance Policies for Motion Planning in Dense Traffic Scenarios [8.484564880157148]
This paper presents a novel framework for interaction-aware motion planning in dense traffic scenarios.
We propose to learn, via deep Reinforcement Learning (RL), an interaction-aware policy providing global guidance about the cooperativeness of other vehicles.
The learned policy can reason and guide the local optimization-based planner with interactive behavior to pro-actively merge in dense traffic while remaining safe in case the other vehicles do not yield.
arXiv Detail & Related papers (2021-07-09T16:43:12Z)
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the most challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm used to train a neural network for predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- L2B: Learning to Balance the Safety-Efficiency Trade-off in Interactive Crowd-aware Robot Navigation [11.893324664457548]
The Learning to Balance (L2B) framework enables mobile robot agents to steer safely towards their destinations by avoiding collisions with a crowd.
We observe that the safety and efficiency requirements in crowd-aware navigation have a trade-off in the presence of social dilemmas between the agent and the crowd.
We evaluate our L2B framework in a challenging crowd simulation and demonstrate its superiority, in terms of both navigation success and collision rate, over a state-of-the-art navigation approach.
arXiv Detail & Related papers (2020-03-20T11:40:29Z)
- A Multi-Agent Reinforcement Learning Approach For Safe and Efficient Behavior Planning Of Connected Autonomous Vehicles [21.132777568170702]
We design an information-sharing-based reinforcement learning framework for connected autonomous vehicles.
We show that our approach can improve the CAV system's efficiency in terms of average velocity and comfort.
We construct an obstacle-at-corner scenario to show that the shared vision can help CAVs to observe obstacles earlier and take action to avoid traffic jams.
arXiv Detail & Related papers (2020-03-09T19:15:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.