Risk-Aware High-level Decisions for Automated Driving at Occluded
Intersections with Reinforcement Learning
- URL: http://arxiv.org/abs/2004.04450v1
- Date: Thu, 9 Apr 2020 09:44:41 GMT
- Title: Risk-Aware High-level Decisions for Automated Driving at Occluded
Intersections with Reinforcement Learning
- Authors: Danial Kamran, Carlos Fernandez Lopez, Martin Lauer, Christoph Stiller
- Abstract summary: We propose a generic risk-aware DQN approach to learn high-level actions for driving through unsignalized intersections.
The proposed state representation provides lane-based information, which allows it to be used in multi-lane scenarios.
We also propose a risk-based reward function that penalizes risky situations instead of only collision failures.
- Score: 16.69903761648675
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning is a popular framework for solving various
decision-making problems in automated driving. However, several crucial
challenges must still be addressed to obtain more reliable policies. In this
paper, we propose a generic risk-aware DQN approach to learn high-level actions
for driving through unsignalized occluded intersections. The proposed state
representation provides lane-based information, which allows it to be used in
multi-lane scenarios. Moreover, we propose a risk-based reward function that
penalizes risky situations instead of only collision failures. This reward
design incorporates risk prediction into our deep Q-network and yields more
reliable policies that are safer in challenging situations. The proposed
approach is compared with a DQN trained with a conventional collision-based
reward scheme and with a rule-based intersection navigation policy. Evaluation
results show that the proposed approach outperforms both methods: it provides
safer actions than the collision-aware DQN and is less overcautious than the
rule-based policy.
Related papers
- Optimal Transport-Assisted Risk-Sensitive Q-Learning [4.14360329494344]
This paper presents a risk-sensitive Q-learning algorithm that leverages optimal transport theory to enhance agent safety.
We validate the proposed algorithm in a Gridworld environment.
arXiv Detail & Related papers (2024-06-17T17:32:25Z) - RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z) - Uniformly Safe RL with Objective Suppression for Multi-Constraint Safety-Critical Applications [73.58451824894568]
The widely adopted CMDP model constrains the risks in expectation, which makes room for dangerous behaviors in long-tail states.
In safety-critical domains, such behaviors could lead to disastrous outcomes.
We propose Objective Suppression, a novel method that adaptively suppresses the task reward maximizing objectives according to a safety critic.
arXiv Detail & Related papers (2024-02-23T23:22:06Z) - Safeguarded Progress in Reinforcement Learning: Safe Bayesian
Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL)
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z) - A Counterfactual Safety Margin Perspective on the Scoring of Autonomous
Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z) - Evaluation of Safety Constraints in Autonomous Navigation with Deep
Reinforcement Learning [62.997667081978825]
We compare two learnable navigation policies: safe and unsafe.
The safe policy takes the constraints into account, while the other does not.
We show that the safe policy generates trajectories with more clearance (distance to obstacles) and incurs fewer collisions during training without sacrificing overall performance.
arXiv Detail & Related papers (2023-07-27T01:04:57Z) - Evaluating Model-free Reinforcement Learning toward Safety-critical
Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z) - Minimizing Safety Interference for Safe and Comfortable Automated
Driving with Distributional Reinforcement Learning [3.923354711049903]
We propose a distributional reinforcement learning framework to learn adaptive policies that can tune their level of conservativity at run-time based on the desired comfort and utility.
We show that our algorithm learns policies that still drive reliably when the perception noise is twice as high as in the training configuration for automated merging and crossing at occluded intersections.
arXiv Detail & Related papers (2021-07-15T13:36:55Z) - Reinforcement Learning Based Safe Decision Making for Highway Autonomous
Driving [1.995792341399967]
We develop a safe decision-making method for self-driving cars in a multi-lane, single-agent setting.
The proposed approach utilizes deep reinforcement learning to achieve a high-level policy for safe tactical decision-making.
arXiv Detail & Related papers (2021-05-13T19:17:30Z) - Addressing Inherent Uncertainty: Risk-Sensitive Behavior Generation for
Automated Driving using Distributional Reinforcement Learning [0.0]
We propose a two-step approach for risk-sensitive behavior generation for self-driving vehicles.
First, we learn an optimal policy in an uncertain environment with Deep Distributional Reinforcement Learning.
During execution, the optimal risk-sensitive action is selected by applying established risk criteria.
arXiv Detail & Related papers (2021-02-05T11:45:12Z)
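The two-step risk-sensitive approach described in the last entry (learn a return distribution during training, then apply a risk criterion at execution) can be sketched as follows, assuming a quantile-based distributional output. The function name and the CVaR criterion are one common choice of "established risk criteria", not necessarily the one used in that paper.

```python
import numpy as np


def cvar_action(quantiles: np.ndarray, alpha: float = 0.25) -> int:
    """Select the action maximizing the Conditional Value at Risk (CVaR):
    the mean of the worst alpha-fraction of predicted return quantiles.

    quantiles: array of shape (n_actions, n_quantiles), the per-action
    return quantiles produced by a distributional RL network.
    """
    # Number of worst-case quantiles to average over (at least one).
    k = max(1, int(alpha * quantiles.shape[1]))
    # Sort each action's quantiles ascending and keep the lowest k.
    worst = np.sort(quantiles, axis=1)[:, :k]
    # Risk-sensitive action: best average worst-case return.
    return int(np.argmax(worst.mean(axis=1)))
```

With alpha near 1 this recovers the risk-neutral mean-maximizing action; small alpha makes the agent avoid actions whose return distribution has a bad lower tail, which is the conservativity knob these distributional approaches tune at run-time.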
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.