Who Make Drivers Stop? Towards Driver-centric Risk Assessment: Risk
Object Identification via Causal Inference
- URL: http://arxiv.org/abs/2003.02425v2
- Date: Mon, 3 Aug 2020 04:28:21 GMT
- Title: Who Make Drivers Stop? Towards Driver-centric Risk Assessment: Risk
Object Identification via Causal Inference
- Authors: Chengxi Li, Stanley H. Chan and Yi-Ting Chen
- Abstract summary: We propose a driver-centric definition of risk, i.e., objects influencing drivers' behavior are risky.
We present a novel two-stage risk object identification framework based on causal inference with the proposed object-level manipulable driving model.
Our framework achieves a substantial average performance boost of 7.5% over a strong baseline.
- Score: 19.71459945458985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A significant number of people die in road accidents due to driver errors. To
reduce fatalities, there is an urgent need to develop intelligent driving systems that
assist drivers in identifying potential risks. Existing works generally define risky
situations based on collision prediction. However, collision is only one source of
potential risk, and a more generic definition is required.
In this work, we propose a novel driver-centric definition of risk, i.e.,
objects influencing drivers' behavior are risky. A new task called risk object
identification is introduced. We formulate the task as a cause-effect problem
and present a novel two-stage risk object identification framework based on
causal inference with the proposed object-level manipulable driving model. We
demonstrate favorable performance on risk object identification compared with
strong baselines on the Honda Research Institute Driving Dataset (HDD). Our
framework achieves a substantial average performance boost of 7.5% over a strong baseline.
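To make the two-stage idea concrete, the sketch below illustrates the causal-intervention step in plain Python: a driving model predicts the probability that the driver stops, each candidate object is removed from the scene in turn, and the object whose removal most changes that prediction is reported as the risk object. The function names (predict_stop, remove_object) and the overall loop are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the second (causal-intervention) stage described in the
# abstract: intervene on each candidate object, re-run an object-level manipulable
# driving model, and rank objects by the change in the predicted stop decision.
from typing import Callable, Dict, List

import numpy as np


def identify_risk_object(
    frame: np.ndarray,
    objects: List[Dict],                                       # detected objects, e.g. tracker outputs
    predict_stop: Callable[[np.ndarray, List[Dict]], float],   # assumed driving model: P(driver stops)
    remove_object: Callable[[np.ndarray, Dict], np.ndarray],   # assumed intervention: mask/inpaint the object
) -> Dict:
    """Return the object whose removal most reduces the predicted stop probability."""
    p_factual = predict_stop(frame, objects)  # driver behavior predicted for the full scene

    best_obj, best_effect = None, -np.inf
    for obj in objects:
        # Intervene: simulate the scene without this object.
        counterfactual_frame = remove_object(frame, obj)
        remaining = [o for o in objects if o is not obj]
        p_counterfactual = predict_stop(counterfactual_frame, remaining)

        # Causal effect of the object on the stop decision.
        effect = p_factual - p_counterfactual
        if effect > best_effect:
            best_obj, best_effect = obj, effect

    return {"object": best_obj, "effect": best_effect}
```

The ranking-by-effect step is the key design choice: rather than predicting collisions, the object is called risky because its presence causally changes the predicted driver behavior.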
Related papers
- Human-Based Risk Model for Improved Driver Support in Interactive Driving Scenarios [0.0]
We present a human-based risk model that uses driver information for improved driver support.
In extensive simulations, we show that our novel human-based risk model achieves earlier warning times and reduced warning errors.
arXiv Detail & Related papers (2024-10-03T02:10:13Z)
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
- Context-Aware Quantitative Risk Assessment Machine Learning Model for Drivers Distraction [0.0]
The Multi-Class Driver Distraction Risk Assessment (MDDRA) model considers vehicle, driver, and environmental data during a journey.
MDDRA categorises the driver on a risk matrix as safe, careless, or dangerous.
We apply machine learning techniques to classify and predict driver distraction according to severity levels.
arXiv Detail & Related papers (2024-02-20T23:20:36Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Considering Human Factors in Risk Maps for Robust and Foresighted Driver Warning [1.4699455652461728]
We propose a warning system that uses human states in the form of driver errors.
The system consists of the behavior planner Risk Maps, which directly changes its prediction of the surrounding driving situation based on these driver errors.
In different simulations of a dynamic lane change and intersection scenarios, we show how the driver's behavior plan can become unsafe.
arXiv Detail & Related papers (2023-06-06T16:39:58Z)
- Infrastructure-based End-to-End Learning and Prevention of Driver Failure [68.0478623315416]
FailureNet is a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city.
It can accurately identify control failures, upstream perception errors, and speeding drivers, distinguishing them from nominal driving.
Compared to speed or frequency-based predictors, FailureNet's recurrent neural network structure provides improved predictive power, yielding upwards of 84% accuracy when deployed on hardware.
arXiv Detail & Related papers (2023-03-21T22:55:51Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Nonobjective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- Driver-centric Risk Object Identification [25.85690304998681]
We propose a driver-centric definition of risk, i.e., risky objects influence driver behavior.
We formulate the task as a cause-effect problem and present a novel two-stage risk object identification framework.
A driver-centric Risk Object Identification dataset is curated to evaluate the proposed system.
arXiv Detail & Related papers (2021-06-24T17:27:32Z)
- Addressing Inherent Uncertainty: Risk-Sensitive Behavior Generation for Automated Driving using Distributional Reinforcement Learning [0.0]
We propose a two-step approach for risk-sensitive behavior generation for self-driving vehicles.
First, we learn an optimal policy in an uncertain environment with Deep Distributional Reinforcement Learning.
During execution, the optimal risk-sensitive action is selected by applying established risk criteria (a CVaR-style selection step is sketched after this list).
arXiv Detail & Related papers (2021-02-05T11:45:12Z)
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
- Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? [104.04999499189402]
Out-of-training-distribution (OOD) scenarios are a common challenge for learning agents at deployment.
We propose an uncertainty-aware planning method called robust imitative planning (RIP).
Our method can detect and recover from some distribution shifts, reducing the overconfident and catastrophic extrapolations in OOD scenes.
We introduce an autonomous car novel-scene benchmark, CARNOVEL, to evaluate the robustness of driving agents to a suite of tasks with distribution shifts.
arXiv Detail & Related papers (2020-06-26T11:07:32Z)
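For the distributional-RL entry above, a minimal sketch of the execution-time risk criterion follows: a distributional critic is assumed to output return quantiles per action, and the action with the best Conditional Value-at-Risk (CVaR) of its return distribution is chosen instead of the best mean. The quantile values and function names are made up for illustration and are not that paper's implementation.

```python
# Minimal sketch of risk-sensitive action selection at execution time:
# score each action by the mean of the worst alpha-fraction of its return
# quantiles (CVaR) and pick the best-scoring action.
import numpy as np


def cvar(quantiles: np.ndarray, alpha: float = 0.1) -> float:
    """Mean of the worst alpha-fraction of return quantiles (lower tail)."""
    sorted_q = np.sort(quantiles)
    k = max(1, int(np.ceil(alpha * len(sorted_q))))
    return float(sorted_q[:k].mean())


def select_risk_sensitive_action(quantiles_per_action: np.ndarray, alpha: float = 0.1) -> int:
    """quantiles_per_action: shape (num_actions, num_quantiles)."""
    scores = [cvar(q, alpha) for q in quantiles_per_action]
    return int(np.argmax(scores))


# Toy example: action 1 has a higher mean return but a heavier worst-case tail.
q = np.array([
    [0.8, 0.9, 1.0, 1.1, 1.2],   # action 0: modest but consistent returns
    [-2.0, 0.5, 1.5, 2.0, 2.5],  # action 1: higher mean, risky lower tail
])
print(select_risk_sensitive_action(q, alpha=0.2))  # -> 0 (the risk-averse choice)
```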
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.