A Conflicts-free, Speed-lossless KAN-based Reinforcement Learning Decision System for Interactive Driving in Roundabouts
- URL: http://arxiv.org/abs/2408.08242v1
- Date: Thu, 15 Aug 2024 16:10:25 GMT
- Authors: Zhihao Lin, Zhen Tian, Qi Zhang, Ziyang Ye, Hanyang Zhuang, Jianglin Lan
- Abstract summary: This paper introduces a learning-based algorithm tailored to foster safe and efficient driving behaviors in roundabouts.
The proposed algorithm employs a deep Q-learning network to learn safe and efficient driving strategies in complex multi-vehicle roundabouts.
The results show that our proposed system consistently achieves safe and efficient driving whilst maintaining a stable training process.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Safety and efficiency are crucial for autonomous driving in roundabouts, especially in the context of mixed traffic where autonomous vehicles (AVs) and human-driven vehicles coexist. This paper introduces a learning-based algorithm tailored to foster safe and efficient driving behaviors across varying levels of traffic flow in roundabouts. The proposed algorithm employs a deep Q-learning network to effectively learn safe and efficient driving strategies in complex multi-vehicle roundabouts. Additionally, a Kolmogorov-Arnold network (KAN) enhances the AVs' ability to learn about their surroundings robustly and precisely. An action inspector is integrated to replace dangerous actions and avoid collisions when the AV interacts with the environment, and a route planner is proposed to enhance the driving efficiency and safety of the AVs. Moreover, model predictive control (MPC) is adopted to ensure the stability and precision of the driving actions. The results show that our proposed system consistently achieves safe and efficient driving whilst maintaining a stable training process, as evidenced by the smooth convergence of the reward function and the low variance in the training curves across various traffic flows. Compared to state-of-the-art benchmarks, the proposed algorithm achieves a lower number of collisions and reduced travel time to destination.
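The action-inspector idea described in the abstract, filtering a Q-network's greedy choice through a safety check before execution, can be sketched as follows. All names, the action set, and the danger check are illustrative assumptions for the sketch, not the paper's implementation; the Q-network is replaced by a random stub so the code runs standalone.

```python
import random

# Illustrative discrete action set for a roundabout agent (an assumption,
# not the paper's exact action space).
ACTIONS = ["accelerate", "keep_speed", "decelerate", "yield"]

def q_values(state):
    """Stand-in for the (KAN-enhanced) deep Q-network: one score per action.
    A random stub here, purely so the sketch is runnable."""
    return {a: random.random() for a in ACTIONS}

def is_dangerous(state, action):
    """Stand-in collision check: accelerating while another vehicle occupies
    the conflict zone is flagged as dangerous."""
    return action == "accelerate" and state.get("conflict_zone_occupied", False)

def select_action(state):
    """Greedy selection with an action inspector: dangerous actions are
    replaced by the best-scoring safe alternative before execution."""
    q = q_values(state)
    for action in sorted(q, key=q.get, reverse=True):
        if not is_dangerous(state, action):
            return action
    return "decelerate"  # conservative fallback if every action is flagged

state = {"conflict_zone_occupied": True}
chosen = select_action(state)  # the inspector never lets "accelerate" through here
```

The key design point is that the filter operates on the ranked Q-values rather than on a single output, so a vetoed action is replaced by the next-best safe one instead of an arbitrary default.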
Related papers
- Exploring the impact of traffic signal control and connected and automated vehicles on intersections safety: A deep reinforcement learning approach [2.681732331705502]
The study employs a Deep Q-Network (DQN) to regulate traffic signals and the driving behaviors of both CAVs and Human-Driven Vehicles (HDVs).
The findings demonstrate a significant reduction in rear-end and crossing conflicts through the combined implementation of CAVs and DQN-based traffic signal control.
arXiv Detail & Related papers (2024-05-29T16:17:19Z) - RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z) - CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training the agent on safety-critical scenarios.
CAT can effectively generate adversarial scenarios countering the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z) - Convergence of Communications, Control, and Machine Learning for Secure
and Autonomous Vehicle Navigation [78.60496411542549]
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z) - DenseLight: Efficient Control for Large-scale Traffic Signals with Dense
Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z) - SafeLight: A Reinforcement Learning Method toward Collision-free Traffic
Signal Control [5.862792724739738]
One-quarter of road accidents in the U.S. happen at intersections due to problematic signal timing.
We propose a safety-enhanced residual reinforcement learning method (SafeLight)
Our method can significantly reduce collisions while increasing traffic mobility.
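The residual reinforcement learning formulation behind SafeLight can be sketched as a learned correction added on top of a fixed rule-based controller, so that early in training the agent stays close to known-safe behavior. The toy proportional controller and linear residual below are illustrative assumptions, not SafeLight's actual networks or signal-phase actions.

```python
def base_controller(position, target=0.0, gain=0.5):
    """Fixed rule-based baseline: proportional control toward a target."""
    return gain * (target - position)

def residual_policy(position, weights):
    """Learned residual correction (a linear stub standing in for a
    trained network whose weights are updated by RL)."""
    return weights * position

def act(position, weights):
    """Residual RL: executed action = safe baseline + learned correction."""
    return base_controller(position) + residual_policy(position, weights)

# With zero residual weights the agent reproduces the baseline exactly,
# which is what makes the approach safe at the start of training.
baseline_action = act(2.0, weights=0.0)
```

Training then only has to learn the (usually small) correction term, which tends to converge faster and violate safety constraints less often than learning a policy from scratch.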
arXiv Detail & Related papers (2022-11-20T05:09:12Z) - A Cooperation-Aware Lane Change Method for Autonomous Vehicles [16.937363492078426]
This paper presents a cooperation-aware lane change method utilizing interactions between vehicles.
We first propose an interactive trajectory prediction method to explore possible cooperations between an AV and the others.
We then propose a motion planning algorithm based on model predictive control (MPC), which incorporates AV's decision and surrounding vehicles' interactive behaviors into constraints.
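The idea of folding surrounding vehicles' predicted behavior into MPC constraints can be sketched with a toy brute-force receding-horizon planner: candidate acceleration sequences are rolled out, any sequence whose predicted gap to a lead vehicle falls below a safety margin is discarded, and the cheapest feasible sequence wins. The longitudinal dynamics, action set, and cost weights are illustrative assumptions, not the paper's formulation.

```python
from itertools import product

DT, HORIZON = 0.5, 3           # time step [s] and planning horizon [steps]
ACCELS = (-2.0, 0.0, 2.0)      # candidate accelerations [m/s^2]
MIN_GAP = 5.0                  # hard safety constraint on the gap [m]

def rollout(pos, vel, accels):
    """Forward-simulate ego positions under an acceleration sequence."""
    traj = []
    for a in accels:
        vel = max(0.0, vel + a * DT)
        pos += vel * DT
        traj.append(pos)
    return traj

def plan(ego_pos, ego_vel, lead_pred, v_des=15.0):
    """Brute-force MPC: minimize a speed-tracking + effort cost over all
    sequences that keep the predicted gap to the lead vehicle above MIN_GAP
    at every step; returns None when no sequence is feasible."""
    best, best_cost = None, float("inf")
    for seq in product(ACCELS, repeat=HORIZON):
        traj = rollout(ego_pos, ego_vel, seq)
        if any(lp - p < MIN_GAP for p, lp in zip(traj, lead_pred)):
            continue  # violates the interaction-aware safety constraint
        vel, cost = ego_vel, 0.0
        for a in seq:
            vel = max(0.0, vel + a * DT)
            cost += (vel - v_des) ** 2 + 0.1 * a ** 2
        if cost < best_cost:
            best, best_cost = seq, cost
    return best  # only the first action is applied, then the AV re-plans

lead = [30.0, 32.0, 34.0]      # predicted lead-vehicle positions per step
sequence = plan(0.0, 10.0, lead)
```

Real implementations solve a continuous optimization instead of enumerating sequences, but the structure is the same: predictions of the other vehicles enter the problem as constraints, not as part of the cost.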
arXiv Detail & Related papers (2022-01-26T04:45:45Z) - Learning Interaction-aware Guidance Policies for Motion Planning in
Dense Traffic Scenarios [8.484564880157148]
This paper presents a novel framework for interaction-aware motion planning in dense traffic scenarios.
We propose to learn, via deep Reinforcement Learning (RL), an interaction-aware policy providing global guidance about the cooperativeness of other vehicles.
The learned policy can reason and guide the local optimization-based planner with interactive behavior to pro-actively merge in dense traffic while remaining safe in case the other vehicles do not yield.
arXiv Detail & Related papers (2021-07-09T16:43:12Z) - Transferable Deep Reinforcement Learning Framework for Autonomous
Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the obstacle miss detection probability by the AV up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free deep reinforcement learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z) - SAINT-ACC: Safety-Aware Intelligent Adaptive Cruise Control for
Autonomous Vehicles Using Deep Reinforcement Learning [17.412117389855226]
SAINT-ACC (Safety-Aware Intelligent Adaptive Cruise Control) is designed to achieve simultaneous optimization of traffic efficiency, driving safety, and driving comfort.
A novel dual RL agent-based approach is developed to seek and adapt the optimal balance between traffic efficiency and driving safety/comfort.
arXiv Detail & Related papers (2021-03-06T14:01:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.