Personalized Lane Change Decision Algorithm Using Deep Reinforcement
Learning Approach
- URL: http://arxiv.org/abs/2112.13646v1
- Date: Fri, 17 Dec 2021 10:16:43 GMT
- Title: Personalized Lane Change Decision Algorithm Using Deep Reinforcement
Learning Approach
- Authors: Daofei Li and Ao Liu
- Abstract summary: Driver-in-the-loop experiments are carried out on a 6-Degree-of-Freedom driving simulator.
Personalization indicators are selected to describe driver preferences in lane change decisions.
A deep reinforcement learning (RL) approach is applied to design human-like agents for automated lane change decisions.
- Score: 4.681908782544996
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: To develop driving automation technologies for humans, a human-centered
methodology should be adopted to ensure safety and a satisfactory user
experience. Automated lane change decision in dense highway traffic is
challenging, especially when the personalized preferences of different drivers
are considered. To support human-driver-centered decision algorithm
development, we carry out driver-in-the-loop experiments on a
6-Degree-of-Freedom driving simulator. Based on the analysis of lane change
data from drivers of three specific styles, personalization indicators are
selected to describe driver preferences in lane change decision. Then a
deep reinforcement learning (RL) approach is applied to design human-like
agents for automated lane change decision, with refined reward and loss
functions to capture the driver preferences. The trained RL agents and benchmark
agents are tested in a two-lane highway driving scenario; by comparing the
agents with the specific drivers at the same initial states of lane change, the
statistics show that the proposed algorithm achieves higher consistency with the
drivers' lane change decision preferences. The driver personalization indicators and the
proposed RL-based lane change decision algorithm are promising contributions to
the development of automated lane change systems.
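The abstract does not disclose the exact personalization indicators or the form of the refined reward function, so the sketch below is only an illustration of how driver-style preferences could be folded into the reward of a lane change decision agent. The names `DriverStyle`, `LaneChangeState`, their fields, and all numeric weights are hypothetical assumptions, not the authors' actual design.

```python
# Illustrative sketch only: the paper's abstract does not specify its reward terms,
# so the DriverStyle fields and all weights below are assumptions, not the authors' design.
from dataclasses import dataclass


@dataclass
class DriverStyle:
    """Hypothetical personalization indicators for one driver style."""
    preferred_gap: float      # gap to the target-lane leader the driver is comfortable with [m]
    min_ttc: float            # smallest time-to-collision the driver tolerates [s]
    speed_gain_weight: float  # how strongly the driver values the speed advantage of changing


@dataclass
class LaneChangeState:
    """Minimal observation assumed for this sketch."""
    expected_speed_gain: float  # estimated speed gain after changing lanes [m/s]
    target_lane_gap: float      # gap to the nearest vehicle in the target lane [m]
    target_lane_ttc: float      # time-to-collision w.r.t. that vehicle [s]


def lane_change_reward(state: LaneChangeState, change_lane: bool, style: DriverStyle) -> float:
    """Reward shaping that biases an RL agent toward one driver's lane change preferences."""
    if not change_lane:
        return 0.0  # keeping the lane is treated as the neutral baseline

    # Efficiency term: reward the expected speed gain, scaled by the driver's preference.
    efficiency = style.speed_gain_weight * max(state.expected_speed_gain, 0.0)

    # Safety/comfort terms: penalize gaps and TTC values below what this driver accepts.
    gap_penalty = max(style.preferred_gap - state.target_lane_gap, 0.0)
    ttc_penalty = max(style.min_ttc - state.target_lane_ttc, 0.0)

    return efficiency - 0.5 * gap_penalty - 1.0 * ttc_penalty


# Example: an "aggressive" style accepts shorter gaps and weights speed gain more heavily.
aggressive = DriverStyle(preferred_gap=15.0, min_ttc=2.0, speed_gain_weight=1.5)
state = LaneChangeState(expected_speed_gain=3.0, target_lane_gap=20.0, target_lane_ttc=4.0)
print(lane_change_reward(state, change_lane=True, style=aggressive))  # 4.5
```

In the paper, a reward of this general kind would be used to train a deep RL agent (the abstract does not name the specific algorithm), and the same indicators would then support comparing the trained agent's decisions with the recorded drivers at identical initial states.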
Related papers
- Personalized and Context-aware Route Planning for Edge-assisted Vehicles [11.39182190564773]
We propose a novel approach based on graph neural networks (GNNs) and deep reinforcement learning (DRL).
By analyzing the historical trajectories of individual drivers, we classify them along with relevant road attributes as indicators of driver preferences.
We evaluate our proposed GNN-based DRL framework using a real-world road network and demonstrate its ability to accommodate driver preferences.
arXiv Detail & Related papers (2024-07-25T12:14:12Z)
- Investigating Personalized Driving Behaviors in Dilemma Zones: Analysis and Prediction of Stop-or-Go Decisions [15.786599260846057]
We develop a Personalized Transformer to predict individual drivers' stop-or-go decisions.
The results show that the Personalized Transformer improves the accuracy of predicting driver decision-making in the dilemma zone by 3.7% to 12.6%.
arXiv Detail & Related papers (2024-05-06T21:39:25Z)
- DriveCoT: Integrating Chain-of-Thought Reasoning with End-to-End Driving [81.04174379726251]
This paper collects a comprehensive end-to-end driving dataset named DriveCoT.
It contains sensor data, control decisions, and chain-of-thought labels to indicate the reasoning process.
We propose a baseline model called DriveCoT-Agent, trained on our dataset, to generate chain-of-thought predictions and final decisions.
arXiv Detail & Related papers (2024-03-25T17:59:01Z)
- DME-Driver: Integrating Human Decision Logic and 3D Scene Perception in Autonomous Driving [65.04871316921327]
This paper introduces DME-Driver, a new autonomous driving system that enhances the performance and reliability of autonomous driving.
DME-Driver utilizes a powerful vision language model as the decision-maker and a planning-oriented perception model as the control signal generator.
By leveraging this dataset, our model achieves high-precision planning accuracy through a logical thinking process.
arXiv Detail & Related papers (2024-01-08T03:06:02Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Nonobjective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- Prediction Based Decision Making for Autonomous Highway Driving [3.6818636539023175]
This paper proposes a Prediction-based Deep Reinforcement Learning (PDRL) decision-making model.
It considers the manoeuvre intentions of surrounding vehicles in the decision-making process for highway driving.
The results show that the proposed PDRL model improves the decision-making performance compared to a Deep Reinforcement Learning (DRL) model by decreasing collision numbers.
arXiv Detail & Related papers (2022-09-05T19:28:30Z)
- Reinforcement Learning Based Safe Decision Making for Highway Autonomous Driving [1.995792341399967]
We develop a safe decision-making method for self-driving cars in a multi-lane, single-agent setting.
The proposed approach utilizes deep reinforcement learning to achieve a high-level policy for safe tactical decision-making.
arXiv Detail & Related papers (2021-05-13T19:17:30Z)
- Emergent Road Rules In Multi-Agent Driving Environments [84.82583370858391]
We analyze what ingredients in driving environments cause the emergence of road rules.
We find that two crucial factors are noisy perception and agents' spatial density.
Our results add empirical support for the social road rules that countries worldwide have agreed on for safe, efficient driving.
arXiv Detail & Related papers (2020-11-21T09:43:50Z)
- Learning Personalized Discretionary Lane-Change Initiation for Fully Autonomous Driving Based on Reinforcement Learning [11.54360350026252]
The authors present a novel method to learn the personalized tactic of discretionary lane-change initiation for fully autonomous vehicles.
A reinforcement learning technique is employed to learn how to initiate lane changes from traffic context, the action of a self-driving vehicle, and in-vehicle user feedback.
arXiv Detail & Related papers (2020-10-29T06:21:23Z)