CausalAF: Causal Autoregressive Flow for Safety-Critical Driving
Scenario Generation
- URL: http://arxiv.org/abs/2110.13939v3
- Date: Sat, 19 Aug 2023 19:34:30 GMT
- Title: CausalAF: Causal Autoregressive Flow for Safety-Critical Driving
Scenario Generation
- Authors: Wenhao Ding, Haohong Lin, Bo Li, Ding Zhao
- Abstract summary: We propose a flow-based generative framework, Causal Autoregressive Flow (CausalAF)
CausalAF encourages the generative model to uncover and follow the causal relationship among generated objects.
We show that using generated scenarios as additional training samples empirically improves the robustness of autonomous driving algorithms.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating safety-critical scenarios, which are crucial yet difficult to
collect, provides an effective way to evaluate the robustness of autonomous
driving systems. However, the diversity of scenarios and efficiency of
generation methods are heavily restricted by the rareness and structure of
safety-critical scenarios. Consequently, existing generative models that only
estimate distributions from observational data are insufficient to solve this
problem. In this paper, we integrate causality as a prior into the scenario
generation and propose a flow-based generative framework, Causal Autoregressive
Flow (CausalAF). CausalAF encourages the generative model to uncover and follow
the causal relationship among generated objects via novel causal masking
operations, rather than merely searching for samples in observational data. By
learning the cause-and-effect mechanism of how the generated scenario causes
risk situations rather than just learning correlations from data, CausalAF
significantly improves learning efficiency. Extensive experiments on three
heterogeneous traffic scenarios illustrate that CausalAF requires much fewer
optimization resources to effectively generate safety-critical scenarios. We
also show that using generated scenarios as additional training samples
empirically improves the robustness of autonomous driving algorithms.
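The causal-masking idea from the abstract can be illustrated with a small sketch. A standard autoregressive flow lets each variable depend on all earlier variables in some ordering; a causal mask restricts each variable to its causal parents. The toy graph and helper names below are assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

def autoregressive_mask(n):
    """Standard lower-triangular mask: x_i may depend on x_0 .. x_{i-1}."""
    return np.tril(np.ones((n, n)), k=-1)

def causal_mask(parents, n):
    """Mask allowing x_i to depend only on its causal parents."""
    mask = np.zeros((n, n))
    for child, pars in parents.items():
        for p in pars:
            mask[child, p] = 1.0
    return mask

# Toy causal graph over 4 scenario variables, e.g. two vehicle states
# (0, 1) cause a conflict point (2), which causes a risk event (3).
parents = {2: [0, 1], 3: [2]}
n = 4

W = np.random.randn(n, n)               # unconstrained weights
W_causal = W * causal_mask(parents, n)  # only causal edges survive

# Every causal edge respects the autoregressive ordering (parents
# precede children), so the causal mask is a sparser special case
# of the generic autoregressive mask.
assert np.all((causal_mask(parents, n) == 1) <= (autoregressive_mask(n) == 1))
```

Applying such a mask to the conditioner of each flow step forces the generator to propagate risk through the stated cause-and-effect chain instead of through arbitrary correlations.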
Related papers
- Targeted Cause Discovery with Data-Driven Learning [66.86881771339145]
We propose a novel machine learning approach for inferring causal variables of a target variable from observations.
We employ a neural network trained to identify causality through supervised learning on simulated data.
Empirical results demonstrate the effectiveness of our method in identifying causal relationships within large-scale gene regulatory networks.
arXiv Detail & Related papers (2024-08-29T02:21:11Z)
- Adversarial Safety-Critical Scenario Generation using Naturalistic Human Driving Priors [2.773055342671194]
We introduce a natural adversarial scenario generation solution using naturalistic human driving priors and reinforcement learning techniques.
Our findings demonstrate that the proposed model can generate realistic safety-critical test scenarios covering both naturalness and adversariality.
arXiv Detail & Related papers (2024-08-06T13:58:56Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- Seeing is not Believing: Robust Reinforcement Learning against Spurious Correlation [57.351098530477124]
We consider one critical type of robustness against spurious correlation, where different portions of the state lack causal links yet are correlated due to unobserved confounders.
A model that learns such useless or even harmful correlation could catastrophically fail when the confounder in the test case deviates from the training one.
Existing robust algorithms that assume simple and unstructured uncertainty sets are therefore inadequate to address this challenge.
arXiv Detail & Related papers (2023-07-15T23:53:37Z)
- Causal Flow-based Variational Auto-Encoder for Disentangled Causal Representation Learning [1.4875602190483512]
Disentangled representation learning aims to learn low-dimensional representations of data, where each dimension corresponds to an underlying generative factor.
We design a new VAE-based framework named Disentangled Causal Variational Auto-Encoder (DCVAE).
DCVAE includes a variant of autoregressive flows known as causal flows, capable of learning effective causal disentangled representations.
arXiv Detail & Related papers (2023-04-18T14:26:02Z)
- Generating Useful Accident-Prone Driving Scenarios via a Learned Traffic Prior [135.78858513845233]
STRIVE is a method to automatically generate challenging scenarios that cause a given planner to produce undesirable behavior, like collisions.
To maintain scenario plausibility, the key idea is to leverage a learned model of traffic motion in the form of a graph-based conditional VAE.
A subsequent optimization is used to find a "solution" to the scenario, ensuring it is useful to improve the given planner.
arXiv Detail & Related papers (2021-12-09T18:03:27Z)
- Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z)
- Multimodal Safety-Critical Scenarios Generation for Decision-Making Algorithms Evaluation [23.43175124406634]
Existing neural network-based autonomous systems are shown to be vulnerable against adversarial attacks.
We propose a flow-based multimodal safety-critical scenario generator for evaluating decision-making algorithms.
We evaluate six Reinforcement Learning algorithms with our generated traffic scenarios and provide empirical conclusions about their robustness.
arXiv Detail & Related papers (2020-09-16T15:16:43Z)
- Learning to Collide: An Adaptive Safety-Critical Scenarios Generating Method [20.280573307366627]
We propose a generative framework to create safety-critical scenarios for evaluating task algorithms.
We demonstrate that the proposed framework generates safety-critical scenarios more efficiently than grid search or human design methods.
arXiv Detail & Related papers (2020-03-02T21:26:03Z)
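One recurring ingredient in the list above (notably the "Targeted Cause Discovery" entry) is supervised causal discovery: a model is trained on simulated data whose ground-truth causal graphs are known. A minimal sketch of how such training pairs can be constructed, assuming a toy linear structural causal model (the names and setup are illustrative, not that paper's pipeline):

```python
import numpy as np

def sample_scm(n_vars, n_samples, rng):
    """Sample a random upper-triangular linear SCM and data drawn from it."""
    # Random DAG: edge i -> j allowed only for i < j (a fixed topological order).
    adj = np.triu(rng.random((n_vars, n_vars)) < 0.4, k=1).astype(float)
    weights = adj * rng.normal(1.5, 0.2, (n_vars, n_vars))
    X = np.zeros((n_samples, n_vars))
    for j in range(n_vars):
        noise = rng.normal(0.0, 1.0, n_samples)
        # Each variable is a weighted sum of its parents plus noise.
        X[:, j] = X[:, :j] @ weights[:j, j] + noise
    return X, adj

rng = np.random.default_rng(0)
# Supervised training set: inputs are simulated datasets, targets are
# the known causal adjacency matrices that generated them.
datasets, labels = zip(*(sample_scm(5, 200, rng) for _ in range(8)))
```

A classifier trained on many such (dataset, adjacency) pairs can then be applied to real observations where the true graph is unknown.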
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.