Who Make Drivers Stop? Towards Driver-centric Risk Assessment: Risk
Object Identification via Causal Inference
- URL: http://arxiv.org/abs/2003.02425v2
- Date: Mon, 3 Aug 2020 04:28:21 GMT
- Title: Who Make Drivers Stop? Towards Driver-centric Risk Assessment: Risk
Object Identification via Causal Inference
- Authors: Chengxi Li, Stanley H. Chan and Yi-Ting Chen
- Abstract summary: We propose a driver-centric definition of risk, i.e., objects influencing drivers' behavior are risky.
We present a novel two-stage risk object identification framework based on causal inference with the proposed object-level manipulable driving model.
Our framework achieves a substantial average performance boost of 7.5% over a strong baseline.
- Score: 19.71459945458985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A significant number of people die in road accidents due to driver error. To
reduce fatalities, there is an urgent need for intelligent driving systems that
assist drivers in identifying potential risks. Existing works generally define
risky situations based on collision prediction. However, collision is only one
source of potential risk, and a more generic definition is required.
In this work, we propose a novel driver-centric definition of risk, i.e.,
objects influencing drivers' behavior are risky. A new task called risk object
identification is introduced. We formulate the task as a cause-effect problem
and present a novel two-stage risk object identification framework based on
causal inference with the proposed object-level manipulable driving model. We
demonstrate favorable performance on risk object identification compared with
strong baselines on the Honda Research Institute Driving Dataset (HDD). Our
framework achieves a substantial average performance boost of 7.5% over a
strong baseline.
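To make the two-stage idea concrete, below is a minimal sketch of intervention-based risk object identification, not the authors' implementation. It assumes two hypothetical interfaces: a driving_model(frame, objects) that returns the ego vehicle's stop probability, and an inpaint(frame, obj) routine that removes a single object from the scene.

```python
# A minimal sketch of the "intervene and re-predict" idea; `driving_model`
# and `inpaint` are assumed interfaces, not the paper's actual code.
from typing import Callable, Sequence

def identify_risk_object(frame, objects: Sequence,
                         driving_model: Callable, inpaint: Callable):
    """Return the object whose removal most lowers the predicted stop probability."""
    # Stage 1: factual prediction on the full scene.
    p_stop_full = driving_model(frame, objects)

    # Stage 2: intervene on each object and measure the causal effect.
    best_obj, best_effect = None, float("-inf")
    for obj in objects:
        counterfactual_frame = inpaint(frame, obj)          # remove one object
        remaining = [o for o in objects if o is not obj]
        p_stop_cf = driving_model(counterfactual_frame, remaining)
        effect = p_stop_full - p_stop_cf                    # how much it drives "stop"
        if effect > best_effect:
            best_obj, best_effect = obj, effect
    return best_obj, best_effect
```

Under the driver-centric definition, the object with the largest effect is the risk object: removing it moves the driving model furthest from "stop" toward "go".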
Related papers
- Quantifying detection rates for dangerous capabilities: a theoretical model of dangerous capability evaluations [47.698233647783965]
We present a quantitative model for tracking dangerous AI capabilities over time.
Our goal is to help the policy and research community visualise how dangerous capability testing can give us an early warning about approaching AI risks.
arXiv Detail & Related papers (2024-12-19T22:31:34Z)
- Human-Based Risk Model for Improved Driver Support in Interactive Driving Scenarios [0.0]
We present a human-based risk model that uses driver information for improved driver support.
In extensive simulations, we show that our novel human-based risk model achieves earlier warning times and reduced warning errors.
arXiv Detail & Related papers (2024-10-03T02:10:13Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models serving as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
- Context-Aware Quantitative Risk Assessment Machine Learning Model for Drivers Distraction [0.0]
The Multi-Class Driver Distraction Risk Assessment (MDDRA) model considers vehicle, driver, and environmental data during a journey.
MDDRA categorises the driver on a risk matrix as safe, careless, or dangerous.
We apply machine learning techniques to classify and predict driver distraction according to severity levels.
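As a toy illustration of the risk-matrix step, the sketch below maps a predicted severity level and an exposure frequency to the safe/careless/dangerous categories; the severity-times-frequency scoring and the thresholds are assumptions for illustration, not the paper's parameters.

```python
# Toy risk-matrix lookup in the spirit of MDDRA; thresholds are assumptions.
def categorise(severity: int, frequency: int) -> str:
    """Map a (severity, frequency) cell of a risk matrix to a driver category."""
    score = severity * frequency              # classic risk-matrix product
    if score <= 2:
        return "safe"
    return "careless" if score <= 6 else "dangerous"

print(categorise(severity=3, frequency=3))    # -> "dangerous"
```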
arXiv Detail & Related papers (2024-02-20T23:20:36Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
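A minimal sketch of that notion, assuming a hypothetical simulate(delta) that rolls out the scene with deviation delta added to the AV's nominal plan and returns True if a collision occurs:

```python
import numpy as np

def counterfactual_safety_margin(simulate, candidate_deviations):
    """Smallest deviation magnitude from nominal behavior that leads to a collision."""
    colliding = [np.linalg.norm(d) for d in candidate_deviations if simulate(d)]
    return min(colliding) if colliding else float("inf")  # inf: no colliding deviation found
```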
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Considering Human Factors in Risk Maps for Robust and Foresighted Driver Warning [1.4699455652461728]
We propose a warning system that uses human states in the form of driver errors.
The system consists of a behavior planner, Risk Maps, which directly changes its prediction of the surrounding driving situation.
In simulations of dynamic lane change and intersection scenarios, we show how the driver's behavior plan can become unsafe.
arXiv Detail & Related papers (2023-06-06T16:39:58Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [50.936478241688114]
Non-objective driving experience is difficult to model, so existing methods lack a mechanism that simulates how drivers accumulate experience.
We propose a FeedBack Loop Network (FBLNet), which attempts to model the driving experience accumulation procedure.
Our model exhibits a solid advantage over existing methods, achieving an outstanding performance improvement on two driver attention benchmark datasets.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- Driver-centric Risk Object Identification [25.85690304998681]
We propose a driver-centric definition of risk, i.e., risky objects influence driver behavior.
We formulate the task as a cause-effect problem and present a novel two-stage risk object identification framework.
A driver-centric Risk Object Identification dataset is curated to evaluate the proposed system.
arXiv Detail & Related papers (2021-06-24T17:27:32Z)
- Addressing Inherent Uncertainty: Risk-Sensitive Behavior Generation for Automated Driving using Distributional Reinforcement Learning [0.0]
We propose a two-step approach for risk-sensitive behavior generation for self-driving vehicles.
First, we learn an optimal policy in an uncertain environment with Deep Distributional Reinforcement Learning.
During execution, the optimal risk-sensitive action is selected by applying established risk criteria.
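As a sketch of the execution step, assuming the distributional critic exposes per-action return quantiles, an established risk criterion such as CVaR can select the action:

```python
import numpy as np

def cvar(quantiles: np.ndarray, alpha: float = 0.1) -> float:
    """Mean of the worst alpha-fraction of the sorted return quantiles."""
    q = np.sort(quantiles)
    k = max(1, int(alpha * len(q)))
    return float(q[:k].mean())

def risk_sensitive_action(quantiles_per_action: np.ndarray, alpha: float = 0.1) -> int:
    """quantiles_per_action has shape (n_actions, n_quantiles)."""
    return int(np.argmax([cvar(q, alpha) for q in quantiles_per_action]))
```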
arXiv Detail & Related papers (2021-02-05T11:45:12Z)
- Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? [104.04999499189402]
Out-of-training-distribution (OOD) scenarios are a common challenge for learning agents at deployment.
We propose an uncertainty-aware planning method called robust imitative planning (RIP).
Our method can detect and recover from some distribution shifts, reducing overconfident and catastrophic extrapolations in OOD scenes.
We introduce an autonomous car novel-scene benchmark, CARNOVEL, to evaluate the robustness of driving agents to a suite of tasks with distribution shifts.
arXiv Detail & Related papers (2020-06-26T11:07:32Z)
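The robust aggregation behind RIP can be sketched as scoring each candidate plan under every ensemble member and planning against the worst case; the names and shapes below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rip_plan(candidate_plans, ensemble_log_likelihoods):
    """Pick the plan with the best worst-case score across an ensemble.

    ensemble_log_likelihoods[k][i] is model k's log-likelihood of plan i;
    taking the minimum over models guards against any single model's
    overconfidence in out-of-distribution scenes.
    """
    worst_case = np.asarray(ensemble_log_likelihoods).min(axis=0)
    return candidate_plans[int(np.argmax(worst_case))]
```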
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.