A Conflicts-free, Speed-lossless KAN-based Reinforcement Learning Decision System for Interactive Driving in Roundabouts
- URL: http://arxiv.org/abs/2408.08242v2
- Date: Fri, 12 Sep 2025 16:03:33 GMT
- Title: A Conflicts-free, Speed-lossless KAN-based Reinforcement Learning Decision System for Interactive Driving in Roundabouts
- Authors: Zhihao Lin, Zhen Tian, Jianglin Lan, Qi Zhang, Ziyang Ye, Hanyang Zhuang, Xianxian Zhao
- Abstract summary: This paper presents a learning-based algorithm that promotes safe and efficient driving across varying roundabout traffic conditions. A deep Q-learning network is used to learn optimal strategies, while a Kolmogorov-Arnold Network (KAN) improves the AVs' environmental understanding. Experimental results demonstrate that the proposed system consistently outperforms state-of-the-art methods.
- Score: 16.714573474722282
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Safety and efficiency are crucial for autonomous driving in roundabouts, especially in mixed traffic with both autonomous vehicles (AVs) and human-driven vehicles. This paper presents a learning-based algorithm that promotes safe and efficient driving across varying roundabout traffic conditions. A deep Q-learning network is used to learn optimal strategies in complex multi-vehicle roundabout scenarios, while a Kolmogorov-Arnold Network (KAN) improves the AVs' environmental understanding. To further enhance safety, an action inspector filters unsafe actions, and a route planner optimizes driving efficiency. Moreover, model predictive control ensures stability and precision in execution. Experimental results demonstrate that the proposed system consistently outperforms state-of-the-art methods, achieving fewer collisions, reduced travel time, and stable training with smooth reward convergence.
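The abstract outlines a pipeline of KAN-based perception, Q-learning decision making, and a safety-filtering action inspector, but gives no architecture details. The following is a minimal sketch of that pipeline shape only, assuming a KAN-style basis-function feature layer, a linear Q head, and a mask-based action inspector; all names, shapes, and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def kan_features(state, n_out=8, n_basis=4):
    """Hypothetical KAN-style layer: each scalar input passes through
    learnable 1-D functions (here fixed radial bases with random
    coefficients) whose outputs are summed per feature unit."""
    grid = np.linspace(-1.0, 1.0, n_basis)
    phi = np.exp(-((state[:, None] - grid[None, :]) ** 2) / 0.5)
    W = rng.standard_normal((n_out, state.size, n_basis)) * 0.1
    return np.einsum('oib,ib->o', W, phi)

def q_values(features, n_actions=3):
    """Stand-in for the trained deep Q-network head."""
    W = rng.standard_normal((n_actions, features.size)) * 0.1
    return W @ features

def action_inspector(q, safe_mask):
    """Filter unsafe actions before greedy selection: unsafe actions
    receive -inf so they are never chosen."""
    if not np.any(safe_mask):
        return 0  # hypothetical fallback: hard-brake action index
    masked = np.where(safe_mask, q, -np.inf)
    return int(np.argmax(masked))

state = rng.uniform(-1.0, 1.0, 5)   # e.g. gaps and speeds in the roundabout
q = q_values(kan_features(state))
action = action_inspector(q, np.array([True, False, True]))
```

The key design point is that the inspector sits between the Q-values and the executed action, so the learned policy can never emit an action flagged as unsafe.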
Related papers
- Traffic and Safety Rule Compliance of Humans in Diverse Driving Situations [48.924085579865334]
Analyzing human data is crucial for developing autonomous systems that replicate safe driving practices.
This paper presents a comparative evaluation of human compliance with traffic and safety rules across multiple trajectory prediction datasets.
arXiv Detail & Related papers (2024-11-04T09:21:00Z) - Exploring the impact of traffic signal control and connected and automated vehicles on intersections safety: A deep reinforcement learning approach [2.681732331705502]
The study employs a Deep Q-Network (DQN) to regulate traffic signals and the driving behaviors of both CAVs and human-driven vehicles (HDVs).
The findings demonstrate a significant reduction in rear-end and crossing conflicts through the combined implementation of CAVs and DQNs-based traffic signal control.
arXiv Detail & Related papers (2024-05-29T16:17:19Z) - RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z) - Automatic driving lane change safety prediction model based on LSTM [3.8749946206111603]
The research results show that, compared with traditional model-based methods, the LSTM-based trajectory prediction method has clear advantages in predicting trajectories over a long time horizon.
arXiv Detail & Related papers (2024-02-28T12:34:04Z) - CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training the agent on safety-critical scenarios.
CAT can effectively generate adversarial scenarios countering the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z) - Convergence of Communications, Control, and Machine Learning for Secure and Autonomous Vehicle Navigation [78.60496411542549]
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z) - DenseLight: Efficient Control for Large-scale Traffic Signals with Dense Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z) - SafeLight: A Reinforcement Learning Method toward Collision-free Traffic Signal Control [5.862792724739738]
One-quarter of road accidents in the U.S. happen at intersections due to problematic signal timing.
We propose a safety-enhanced residual reinforcement learning method (SafeLight)
Our method can significantly reduce collisions while increasing traffic mobility.
arXiv Detail & Related papers (2022-11-20T05:09:12Z) - Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - A Cooperation-Aware Lane Change Method for Autonomous Vehicles [16.937363492078426]
This paper presents a cooperation-aware lane change method utilizing interactions between vehicles.
We first propose an interactive trajectory prediction method to explore possible cooperations between an AV and surrounding vehicles.
We then propose a motion planning algorithm based on model predictive control (MPC), which incorporates AV's decision and surrounding vehicles' interactive behaviors into constraints.
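The summary above mentions an MPC that folds surrounding vehicles' interactive behaviors into constraints, without giving the formulation. As a rough illustration of that idea only, here is a tiny enumerative receding-horizon sketch: it tracks a reference speed while treating the gap to an interacting lead vehicle as a hard constraint. The dynamics, cost weights, and parameter values are invented for the example; a real MPC would use a proper QP/NLP solver rather than enumeration.

```python
import itertools
import numpy as np

def mpc_longitudinal(v0, lead_gap0, lead_speed, horizon=3, dt=0.5,
                     v_ref=15.0, min_gap=5.0,
                     accels=(-2.0, 0.0, 2.0)):
    """Enumerative stand-in for an MPC: pick the acceleration sequence
    that best tracks v_ref while keeping at least min_gap [m] to the
    interacting vehicle ahead (a hard constraint)."""
    best, best_cost = None, np.inf
    for seq in itertools.product(accels, repeat=horizon):
        v, gap, cost, feasible = v0, lead_gap0, 0.0, True
        for a in seq:
            v = max(0.0, v + a * dt)             # simple point-mass model
            gap += (lead_speed - v) * dt          # relative-motion update
            if gap < min_gap:                     # interaction constraint
                feasible = False
                break
            cost += (v - v_ref) ** 2 + 0.1 * a ** 2
        if feasible and cost < best_cost:
            best, best_cost = seq, cost
    return best  # None if no sequence satisfies the gap constraint
```

Returning `None` when no sequence is feasible mirrors how a constrained planner signals that the maneuver (e.g. a lane change) should be aborted rather than executed unsafely.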
arXiv Detail & Related papers (2022-01-26T04:45:45Z) - Integrated Decision and Control at Multi-Lane Intersections with Mixed Traffic Flow [6.233422723925688]
This paper develops a learning-based algorithm to deal with complex intersections with mixed traffic flows.
We first consider different velocity models for green and red lights in the training process and use a finite state machine to handle different modes of light transformation.
Then we design different types of distance constraints for vehicles, traffic lights, pedestrians, and bicycles, respectively, and formulate the constrained optimal control problems.
arXiv Detail & Related papers (2021-08-30T07:55:32Z) - Learning Interaction-aware Guidance Policies for Motion Planning in Dense Traffic Scenarios [8.484564880157148]
This paper presents a novel framework for interaction-aware motion planning in dense traffic scenarios.
We propose to learn, via deep Reinforcement Learning (RL), an interaction-aware policy providing global guidance about the cooperativeness of other vehicles.
The learned policy can reason and guide the local optimization-based planner with interactive behavior to pro-actively merge in dense traffic while remaining safe in case the other vehicles do not yield.
arXiv Detail & Related papers (2021-07-09T16:43:12Z) - Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the obstacle miss detection probability by the AV up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm used to train a neural network for predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z) - SAINT-ACC: Safety-Aware Intelligent Adaptive Cruise Control for Autonomous Vehicles Using Deep Reinforcement Learning [17.412117389855226]
The Safety-Aware Intelligent ACC system (SAINT-ACC) is designed to achieve simultaneous optimization of traffic efficiency, driving safety, and driving comfort.
A novel dual RL agent-based approach is developed to seek and adapt the optimal balance between traffic efficiency and driving safety/comfort.
arXiv Detail & Related papers (2021-03-06T14:01:29Z) - Improving Robustness of Learning-based Autonomous Steering Using Adversarial Images [58.287120077778205]
We introduce a framework for analyzing the robustness of the learning algorithm with respect to varying quality of the image input for autonomous driving.
Using the results of the sensitivity analysis, we propose an algorithm to improve the overall performance of the "learning to steer" task.
arXiv Detail & Related papers (2021-02-26T02:08:07Z) - Automated Lane Change Strategy using Proximal Policy Optimization-based Deep Reinforcement Learning [10.909595997847443]
Lane-change maneuvers are commonly executed by drivers to follow a certain routing plan, overtake a slower vehicle, adapt to a merging lane ahead, etc.
In this study, we propose an automated lane change strategy using proximal policy optimization-based deep reinforcement learning.
The trained agent is able to learn a smooth, safe, and efficient driving policy to make lane-change decisions.
arXiv Detail & Related papers (2020-02-07T08:43:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.