Behavioral decision-making for urban autonomous driving in the presence
of pedestrians using Deep Recurrent Q-Network
- URL: http://arxiv.org/abs/2010.13407v1
- Date: Mon, 26 Oct 2020 08:08:06 GMT
- Title: Behavioral decision-making for urban autonomous driving in the presence
of pedestrians using Deep Recurrent Q-Network
- Authors: Niranjan Deshpande (CHROMA), Dominique Vaufreydaz (LIG), Anne
Spalanzani (CHROMA)
- Abstract summary: Decision making for autonomous driving in urban environments is challenging due to the complexity of the road structure and the uncertainty in the behavior of diverse road users.
In this work, a deep reinforcement learning based decision-making approach for high-level driving behavior is proposed for urban environments in the presence of pedestrians.
The proposed method is evaluated in dense urban scenarios and compared with a rule-based approach; the results show that the proposed DRQN-based driving behavior decision maker outperforms the rule-based approach.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decision making for autonomous driving in urban environments is challenging
due to the complexity of the road structure and the uncertainty in the behavior
of diverse road users. Traditional methods rely on manually designed rules as
the driving policy; these require expert domain knowledge, are difficult to
generalize, and may give sub-optimal results as the environment becomes complex.
In contrast, with reinforcement learning an optimal driving policy can be
learned and improved automatically through repeated interactions with the
environment. However, current research on reinforcement learning for autonomous
driving mainly focuses on highway settings, with little to no emphasis on urban
environments. In this work, a deep reinforcement learning based decision-making
approach for high-level driving behavior is proposed for urban environments in
the presence of pedestrians. For this, the use of Deep Recurrent Q-Network
(DRQN) is explored: a method combining the state-of-the-art Deep Q-Network
(DQN) with a long short-term memory (LSTM) layer, helping the agent gain a
memory of the environment. A 3-D state representation is designed as the input,
combined with a well-defined reward function, to train the agent to learn an
appropriate behavior policy in a real-world-like urban simulator. The proposed
method is evaluated in dense urban scenarios and compared with a rule-based
approach; the results show that the proposed DRQN-based driving behavior
decision maker outperforms the rule-based approach.
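The DRQN idea described above can be sketched as a small recurrent Q-network: an LSTM rolls over the recent observation sequence so the agent carries a memory of the environment, and a linear head maps the final hidden state to one Q-value per discrete high-level behavior. The sketch below is illustrative, not the paper's implementation; the flattened state size, hidden size, and the four example behaviors are assumptions, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates computed from input x and previous hidden h."""
    z = W @ x + U @ h + b                  # stacked gate pre-activations
    H = h.size
    i = 1 / (1 + np.exp(-z[0:H]))          # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))        # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))      # output gate
    g = np.tanh(z[3*H:4*H])                # candidate cell state
    c = f * c + i * g                      # memory update
    h = o * np.tanh(c)
    return h, c

def drqn_q_values(obs_seq, params):
    """Roll the LSTM over an observation sequence, then map the final
    hidden state to one Q-value per discrete driving behavior."""
    W, U, b, Wq, bq = params
    H = Wq.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in obs_seq:                      # memory accumulates across steps
        h, c = lstm_step(x, h, c, W, U, b)
    return Wq @ h + bq                     # Q(s, a) for each behavior

# Hypothetical sizes: a flattened 3-D state of 48 features, 16 hidden units,
# 4 high-level behaviors (e.g. accelerate, decelerate, keep speed, stop).
D, H, A = 48, 16, 4
params = (rng.normal(size=(4*H, D)) * 0.1,   # input-to-gate weights
          rng.normal(size=(4*H, H)) * 0.1,   # hidden-to-gate weights
          np.zeros(4*H),                     # gate biases
          rng.normal(size=(A, H)) * 0.1,     # Q-head weights
          np.zeros(A))                       # Q-head bias

obs_seq = rng.normal(size=(5, D))            # last 5 observations
q = drqn_q_values(obs_seq, params)
action = int(np.argmax(q))                   # greedy high-level behavior
```

In the full method, these Q-values would drive an epsilon-greedy policy during training, with the weights learned via the usual DQN temporal-difference loss over sampled episode sequences.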
Related papers
- Research on Autonomous Driving Decision-making Strategies based Deep Reinforcement Learning [8.794428617785869]
Behavior decision-making subsystem is a key component of the autonomous driving system.
In this work, an advanced deep reinforcement learning model is adopted, which can autonomously learn and optimize driving strategies.
arXiv Detail & Related papers (2024-08-06T10:24:54Z) - Robust Driving Policy Learning with Guided Meta Reinforcement Learning [49.860391298275616]
We introduce an efficient method to train diverse driving policies for social vehicles as a single meta-policy.
By randomizing the interaction-based reward functions of social vehicles, we can generate diverse objectives and efficiently train the meta-policy.
We propose a training strategy to enhance the robustness of the ego vehicle's driving policy using the environment where social vehicles are controlled by the learned meta-policy.
arXiv Detail & Related papers (2023-07-19T17:42:36Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z) - Navigation In Urban Environments Amongst Pedestrians Using
Multi-Objective Deep Reinforcement Learning [0.0]
This work formulates navigation in urban environments as a multi-objective reinforcement learning problem.
A deep learning variant of thresholded lexicographic Q-learning is presented for autonomous navigation amongst pedestrians.
arXiv Detail & Related papers (2021-10-11T12:15:06Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm used to train a neural network for predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z) - An End-to-end Deep Reinforcement Learning Approach for the Long-term
Short-term Planning on the Frenet Space [0.0]
This paper presents a novel end-to-end continuous deep reinforcement learning approach towards autonomous cars' decision-making and motion planning.
For the first time, we define both states and action spaces on the Frenet space to make the driving behavior less variant to the road curvatures.
The algorithm generates continuous temporal trajectories on the Frenet frame for the feedback controller to track.
arXiv Detail & Related papers (2020-11-26T02:40:07Z) - Emergent Road Rules In Multi-Agent Driving Environments [84.82583370858391]
We analyze what ingredients in driving environments cause the emergence of road rules.
We find that two crucial factors are noisy perception and agents' spatial density.
Our results add empirical support for the social road rules that countries worldwide have agreed on for safe, efficient driving.
arXiv Detail & Related papers (2020-11-21T09:43:50Z) - Behavior Planning at Urban Intersections through Hierarchical
Reinforcement Learning [25.50973559614565]
In this work, we propose a behavior planning structure based on reinforcement learning (RL) which is capable of performing autonomous vehicle behavior planning with a hierarchical structure in simulated urban environments.
Our algorithms can perform better than rule-based methods for elective decisions such as when to turn left between vehicles approaching from the opposite direction or possible lane-change when approaching an intersection due to lane blockage or delay in front of the ego car.
Results also show that the proposed method converges to an optimal policy faster than traditional RL methods.
arXiv Detail & Related papers (2020-11-09T19:23:26Z) - Decision-making Strategy on Highway for Autonomous Vehicles using Deep
Reinforcement Learning [6.298084785377199]
A deep reinforcement learning (DRL)-enabled decision-making policy is constructed for autonomous vehicles to address the overtaking behaviors on the highway.
A hierarchical control framework is presented to control these vehicles, in which the upper level manages the driving decisions.
The DDQN-based overtaking policy could accomplish highway driving tasks efficiently and safely.
arXiv Detail & Related papers (2020-07-16T23:41:48Z) - Intelligent Roundabout Insertion using Deep Reinforcement Learning [68.8204255655161]
We present a maneuver planning module able to negotiate the entering in busy roundabouts.
The proposed module is based on a neural network trained to predict when and how to enter the roundabout throughout the whole duration of the maneuver.
arXiv Detail & Related papers (2020-01-03T11:16:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.