Don't Get Yourself into Trouble! Risk-aware Decision-Making for
Autonomous Vehicles
- URL: http://arxiv.org/abs/2106.04625v1
- Date: Tue, 8 Jun 2021 18:24:02 GMT
- Authors: Kasra Mokhtari, Alan R. Wagner
- Abstract summary: We show that risk could be characterized by two components: 1) the probability of an undesirable outcome and 2) an estimate of how undesirable the outcome is (loss).
We developed a risk-based decision-making framework for the autonomous vehicle that integrates the high-level risk-based path planning with the reinforcement learning-based low-level control.
This work can improve safety by allowing an autonomous vehicle to one day avoid and react to risky situations.
- Score: 4.94950858749529
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Risk is traditionally described as the expected likelihood of an undesirable
outcome, such as collisions for autonomous vehicles. Accurately predicting risk
or potentially risky situations is critical for the safe operation of
autonomous vehicles. In our previous work, we showed that risk could be
characterized by two components: 1) the probability of an undesirable outcome
and 2) an estimate of how undesirable the outcome is (loss). This paper is an
extension to our previous work. In this paper, using our trained deep
reinforcement learning model for navigating around crowds, we developed a
risk-based decision-making framework for the autonomous vehicle that integrates
the high-level risk-based path planning with the reinforcement learning-based
low-level control. We evaluated our method in the high-fidelity CARLA
simulator. This work can improve safety by allowing an autonomous vehicle to
one day avoid and react to risky situations.
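The two-component risk definition in the abstract amounts to an expected-loss computation that a high-level planner can minimize over candidate paths. The sketch below is a minimal illustration of that idea; the route names and numeric values are hypothetical and not taken from the paper.

```python
# Minimal sketch of the two-component risk definition:
# risk = P(undesirable outcome) * loss(outcome).
# Route names and numbers are hypothetical illustrations.

def risk(p_outcome: float, loss: float) -> float:
    """Expected loss: probability of an undesirable outcome times its severity."""
    return p_outcome * loss

# High-level planner: pick the candidate path with the lowest risk.
candidate_paths = {
    "crowded_street": risk(p_outcome=0.10, loss=100.0),  # likely, severe
    "detour":         risk(p_outcome=0.02, loss=100.0),  # rare, same severity
}
best = min(candidate_paths, key=candidate_paths.get)
print(best)  # -> detour
```

Separating probability from loss lets the planner distinguish a frequent-but-mild hazard from a rare-but-severe one, which a single collision-likelihood score cannot do.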
Related papers
- A Safe Self-evolution Algorithm for Autonomous Driving Based on Data-Driven Risk Quantification Model [14.398857940603495]
This paper proposes a safe self-evolution algorithm for autonomous driving based on a data-driven risk quantification model.
To prevent the impact of over-conservative safety guarding policies on the self-evolution capability of the algorithm, a safety-evolutionary decision-control integration algorithm with adjustable safety limits is proposed.
arXiv Detail & Related papers (2024-08-23T02:52:35Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
- Risk-anticipatory autonomous driving strategies considering vehicles' weights, based on hierarchical deep reinforcement learning [12.014977175887767]
This study develops an autonomous driving strategy based on risk anticipation, considering the weights of surrounding vehicles.
A risk indicator integrating surrounding vehicles' weights, based on risk field theory, is proposed and incorporated into autonomous driving decisions.
An indicator, potential collision energy in conflicts, is newly proposed to evaluate the performance of the developed AV driving strategy.
arXiv Detail & Related papers (2023-12-27T06:03:34Z)
- Evaluation of Safety Constraints in Autonomous Navigation with Deep Reinforcement Learning [62.997667081978825]
We compare two learnable navigation policies: safe and unsafe.
The safe policy takes the constraints into account, while the other does not.
We show that the safe policy generates trajectories with more clearance (distance to obstacles) and fewer collisions during training, without sacrificing overall performance.
arXiv Detail & Related papers (2023-07-27T01:04:57Z)
- How to Learn from Risk: Explicit Risk-Utility Reinforcement Learning for Efficient and Safe Driving Strategies [1.496194593196997]
This paper proposes SafeDQN, which makes the behavior of autonomous vehicles safe and interpretable while remaining efficient.
We show that SafeDQN finds interpretable and safe driving policies for a variety of scenarios and demonstrate how state-of-the-art saliency techniques can help to assess both risk and utility.
arXiv Detail & Related papers (2022-03-16T05:51:22Z)
- Risk Measurement, Risk Entropy, and Autonomous Driving Risk Modeling [0.0]
This article examines the emerging technical difficulties, new ideas, and methods of risk modeling under autonomous driving scenarios.
It provides technical feasibility for realizing risk assessment and car insurance pricing under a computer simulation environment.
arXiv Detail & Related papers (2021-09-15T11:00:18Z)
- Addressing Inherent Uncertainty: Risk-Sensitive Behavior Generation for Automated Driving using Distributional Reinforcement Learning [0.0]
We propose a two-step approach for risk-sensitive behavior generation for self-driving vehicles.
First, we learn an optimal policy in an uncertain environment with Deep Distributional Reinforcement Learning.
During execution, the optimal risk-sensitive action is selected by applying established risk criteria.
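The summary above does not name the risk criterion used at execution time; Conditional Value at Risk (CVaR) is one established choice for selecting actions from learned return distributions. The sketch below is a hypothetical illustration of that selection step; the quantile values are invented for the example.

```python
import numpy as np

# Hypothetical sketch of risk-sensitive action selection over learned return
# distributions (as produced by distributional RL). CVaR is one established
# risk criterion; the quantile values below are illustrative only.

def cvar(quantiles: np.ndarray, alpha: float = 0.25) -> float:
    """Conditional Value at Risk: mean of the worst alpha-fraction of returns."""
    sorted_q = np.sort(quantiles)
    k = max(1, int(alpha * len(sorted_q)))
    return float(sorted_q[:k].mean())

# Per-action return quantiles (higher return is better).
returns = {
    "brake":    np.array([0.0, 1.0, 1.0, 2.0]),   # modest mean, thin tail
    "overtake": np.array([-5.0, 2.0, 3.0, 4.0]),  # higher mean, heavy tail
}
risk_sensitive_action = max(returns, key=lambda a: cvar(returns[a]))
print(risk_sensitive_action)  # -> brake
```

A risk-neutral agent maximizing the mean would overtake; conditioning on the worst-case tail instead prefers the action with the least severe downside.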
arXiv Detail & Related papers (2021-02-05T11:45:12Z)
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
- Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? [104.04999499189402]
Out-of-training-distribution (OOD) scenarios are a common challenge of learning agents at deployment.
We propose an uncertainty-aware planning method called Robust Imitative Planning (RIP).
Our method can detect and recover from some distribution shifts, reducing overconfident and catastrophic extrapolations in OOD scenes.
We introduce CARNOVEL, a novel-scene benchmark for autonomous cars, to evaluate the robustness of driving agents on a suite of tasks with distribution shifts.
arXiv Detail & Related papers (2020-06-26T11:07:32Z)
- Safe Reinforcement Learning via Curriculum Induction [94.67835258431202]
In safety-critical applications, autonomous agents may need to learn in an environment where mistakes can be very costly.
Existing safe reinforcement learning methods make an agent rely on priors that let it avoid dangerous situations.
This paper presents an alternative approach inspired by human teaching, where an agent learns under the supervision of an automatic instructor.
arXiv Detail & Related papers (2020-06-22T10:48:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.