Watch out for the risky actors: Assessing risk in dynamic environments for safe driving
- URL: http://arxiv.org/abs/2110.09998v1
- Date: Tue, 19 Oct 2021 14:10:26 GMT
- Title: Watch out for the risky actors: Assessing risk in dynamic environments for safe driving
- Authors: Saurabh Jha, Yan Miao, Zbigniew Kalbarczyk, Ravishankar K. Iyer
- Abstract summary: The risk encountered by the Ego actor depends on the driving scenario and the uncertainty associated with predicting the future trajectories of the other actors in that scenario.
We propose a novel risk metric to calculate the importance of each actor in the world and demonstrate its usefulness through a case study.
- Score: 7.13056075998264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Driving in a dynamic environment that contains other actors is
inherently risky, as each actor influences the driving decision and may
significantly limit the choices available for navigation and safety planning.
The risk encountered by the Ego actor depends on the driving scenario and the
uncertainty associated with predicting the future trajectories of the other
actors in that scenario. However, not all objects pose a similar risk.
Depending on an object's type, trajectory, and position, and the uncertainty
associated with these quantities, some objects pose a much higher risk than
others. The higher the risk associated with an actor, the more attention must
be directed towards that actor in terms of resources and safety planning. In
this paper, we propose a novel risk metric to calculate the importance of each
actor in the world and demonstrate its usefulness through a case study.
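As an illustrative sketch only (the inputs and weights below are assumptions for exposition, not the paper's metric), a per-actor risk score of this kind might combine proximity to the Ego actor with the uncertainty of the actor's predicted trajectory:

```python
def actor_risk(distance_m, trajectory_std_m, w_proximity=1.0, w_uncertainty=0.5):
    """Toy per-actor risk score (illustrative, not the authors' metric).

    Closer actors, and actors whose predicted future trajectories are more
    uncertain, receive a higher score and hence more attention.

    distance_m: current distance from the Ego actor, in meters
    trajectory_std_m: spread of the actor's predicted future positions, in meters
    """
    proximity = 1.0 / max(distance_m, 1e-6)  # guard against division by zero
    return w_proximity * proximity + w_uncertainty * trajectory_std_m
```

Under this toy scoring, a nearby pedestrian with an erratic predicted path would outrank a distant vehicle on a well-predicted trajectory, matching the intuition in the abstract that attention should follow risk.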
Related papers
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI [11.240642213359267]
Many exhaustive taxonomies are possible, and some are useful -- particularly if they reveal new risks or practical approaches to safety.
This paper explores a taxonomy based on accountability: whose actions lead to the risk, are the actors unified, and are they deliberate?
We also provide stories to illustrate how the various risk types could each play out, including risks arising from unanticipated interactions of many AI systems, and risks from deliberate misuse.
arXiv Detail & Related papers (2023-06-12T07:55:18Z)
- Watch Out for the Safety-Threatening Actors: Proactively Mitigating Safety Hazards [5.898210877584262]
We propose a safety threat indicator (STI) using counterfactual reasoning to estimate the importance of each actor on the road with respect to its influence on the AV's safety.
Our approach reduces the accident rate for the state-of-the-art AV agent(s) in rare hazardous scenarios by more than 70%.
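The counterfactual idea behind an indicator like STI can be hedged-sketched as follows (the scene representation and `scene_safety` scoring function below are illustrative assumptions, not the authors' implementation): an actor's importance is the change in a scene-level safety score when that actor is removed.

```python
def safety_threat_indicator(scene, actor_id, scene_safety):
    """Counterfactual importance of one actor (illustrative sketch).

    scene: dict mapping actor_id -> actor state (assumed representation)
    scene_safety: callable scoring a scene; higher means safer (assumed)

    Returns how much safer the scene becomes with the actor removed; a
    large positive value marks the actor as a high safety threat.
    """
    counterfactual = {aid: s for aid, s in scene.items() if aid != actor_id}
    return scene_safety(counterfactual) - scene_safety(scene)
```

For example, with a safety score defined as the minimum distance of any actor to the Ego vehicle, removing a close-by actor raises the score sharply, so that actor receives a high indicator value.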
arXiv Detail & Related papers (2022-06-02T05:56:25Z)
- A Survey of Risk-Aware Multi-Armed Bandits [84.67376599822569]
We review various risk measures of interest, and comment on their properties.
We consider algorithms for the regret minimization setting, where the exploration-exploitation trade-off manifests.
We conclude by commenting on persisting challenges and fertile areas for future research.
arXiv Detail & Related papers (2022-05-12T02:20:34Z)
- Driver-centric Risk Object Identification [25.85690304998681]
We propose a driver-centric definition of risk, i.e., risky objects influence driver behavior.
We formulate the task as a cause-effect problem and present a novel two-stage risk object identification framework.
A driver-centric Risk Object Identification dataset is curated to evaluate the proposed system.
arXiv Detail & Related papers (2021-06-24T17:27:32Z)
- Addressing Inherent Uncertainty: Risk-Sensitive Behavior Generation for Automated Driving using Distributional Reinforcement Learning [0.0]
We propose a two-step approach for risk-sensitive behavior generation for self-driving vehicles.
First, we learn an optimal policy in an uncertain environment with Deep Distributional Reinforcement Learning.
During execution, the optimal risk-sensitive action is selected by applying established risk criteria.
arXiv Detail & Related papers (2021-02-05T11:45:12Z)
- Deep Structured Reactive Planning [94.92994828905984]
We propose a novel data-driven, reactive planning objective for self-driving vehicles.
We show that our model outperforms a non-reactive variant in successfully completing highly complex maneuvers.
arXiv Detail & Related papers (2021-01-18T01:43:36Z)
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
- Who Make Drivers Stop? Towards Driver-centric Risk Assessment: Risk Object Identification via Causal Inference [19.71459945458985]
We propose a driver-centric definition of risk, i.e., objects influencing drivers' behavior are risky.
We present a novel two-stage risk object identification framework based on causal inference with the proposed object-level manipulable driving model.
Our framework achieves a substantial average performance boost of 7.5% over a strong baseline.
arXiv Detail & Related papers (2020-03-05T04:14:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.