Grasping Causality for the Explanation of Criticality for Automated
Driving
- URL: http://arxiv.org/abs/2210.15375v1
- Date: Thu, 27 Oct 2022 12:37:00 GMT
- Title: Grasping Causality for the Explanation of Criticality for Automated
Driving
- Authors: Tjark Koopmann and Christian Neurohr and Lina Putze and Lukas
Westhofen and Roman Gansch and Ahmad Adee
- Abstract summary: This work introduces a formalization of causal queries whose answers facilitate a causal understanding of safety-relevant influencing factors for automated driving.
Based on Judea Pearl's causal theory, we define a causal relation as a causal structure together with a context.
As availability and quality of data are imperative for validly estimating answers to the causal queries, we also discuss requirements on real-world and synthetic data acquisition.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The verification and validation of automated driving systems at SAE levels 4
and 5 is a multi-faceted challenge for which classical statistical
considerations become infeasible. For this, contemporary approaches suggest a
decomposition into scenario classes combined with statistical analysis thereof
regarding the emergence of criticality. Unfortunately, these associational
approaches may yield spurious inferences, or worse, fail to recognize the
causalities leading to critical scenarios, which are, in turn, prerequisite for
the development and safeguarding of automated driving systems. To
incorporate causal knowledge within these processes, this work introduces a
formalization of causal queries whose answers facilitate a causal understanding
of safety-relevant influencing factors for automated driving. This formalized
causal knowledge can be used to specify and implement abstract safety
principles that provably reduce the criticality associated with these
influencing factors. Based on Judea Pearl's causal theory, we define a causal
relation as a causal structure together with a context, both related to a
domain ontology, where the focus lies on modeling the effect of such
influencing factors on criticality as measured by a suitable metric. To
assess modeling quality, we suggest various quantities and evaluate them on a
small example. As availability and quality of data are imperative for validly
estimating answers to the causal queries, we also discuss requirements on
real-world and synthetic data acquisition. We thereby contribute to
establishing causal considerations at the heart of the safety processes that
are urgently needed to ensure the safe operation of automated driving
systems.
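The abstract's core contrast between associational and causal queries can be made concrete with a toy structural causal model in Pearl's sense. The sketch below is purely illustrative: the variables (rain W, reduced visibility F, criticality metric C) and all coefficients are invented for this example and do not come from the paper. It shows how a confounder makes the associational contrast E[C | F] overstate the interventional effect E[C | do(F)]:

```python
import random

random.seed(0)

# Hypothetical structural causal model (illustrative only, not from the paper):
#   W := heavy rain            (Bernoulli(0.5) confounder)
#   F := reduced visibility    (observed influencing factor, strongly driven by W)
#   C := criticality metric    (in [0, 1]; driven mostly by W, partly by F)

def sample(do_f=None):
    """Draw one scenario; do_f overrides F, emulating Pearl's do-operator."""
    w = random.random() < 0.5
    f = do_f if do_f is not None else (random.random() < (0.9 if w else 0.1))
    c = 0.6 * w + 0.2 * f + 0.1 * random.random()
    return f, c

n = 100_000

# Associational query: E[C | F=1] - E[C | F=0], confounded by W.
obs = [sample() for _ in range(n)]
c1 = [c for f, c in obs if f]
c0 = [c for f, c in obs if not f]
assoc = sum(c1) / len(c1) - sum(c0) / len(c0)

# Causal query: E[C | do(F=1)] - E[C | do(F=0)], forcing F regardless of W.
interv = (sum(sample(do_f=True)[1] for _ in range(n)) / n
          - sum(sample(do_f=False)[1] for _ in range(n)) / n)

print(f"associational contrast:  {assoc:.2f}")   # roughly 0.68, inflated by rain
print(f"interventional contrast: {interv:.2f}")  # close to the true direct effect 0.20
```

Because heavy rain drives both the factor and criticality, the associational contrast is more than three times the true effect of intervening on the factor; this is exactly the kind of spurious inference the abstract warns about.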
Related papers
- Traffic and Safety Rule Compliance of Humans in Diverse Driving Situations [48.924085579865334]
Analyzing human data is crucial for developing autonomous systems that replicate safe driving practices.
This paper presents a comparative evaluation of human compliance with traffic and safety rules across multiple trajectory prediction datasets.
arXiv Detail & Related papers (2024-11-04T09:21:00Z)
- Formalized Identification Of Key Factors In Safety-Relevant Failure Scenarios [0.0]
This research article presents a data-based approach to systematically identify key factors in safety-related failure scenarios.
The approach involves a derivation of influencing factors based on information from failure databases.
The research demonstrates a robust method for identifying key factors in safety-related failure scenarios using information from failure databases.
arXiv Detail & Related papers (2024-02-28T09:28:36Z)
- STEAM & MoSAFE: SOTIF Error-and-Failure Model & Analysis for AI-Enabled Driving Automation [4.820785104084241]
This paper defines the SOTIF Temporal Error and Failure Model (STEAM) as a refinement of the SOTIF cause-and-effect model.
Second, this paper proposes the Model-based SOTIF Analysis of Failures and Errors (MoSAFE) method, which allows instantiating STEAM based on system-design models.
arXiv Detail & Related papers (2023-12-15T06:34:35Z)
- Seeing is not Believing: Robust Reinforcement Learning against Spurious Correlation [57.351098530477124]
We consider one critical type of robustness against spurious correlation, where different portions of the state are not causally linked but are correlated through unobserved confounders.
A model that learns such useless or even harmful correlation could catastrophically fail when the confounder in the test case deviates from the training one.
Existing robust algorithms that assume simple and unstructured uncertainty sets are therefore inadequate to address this challenge.
arXiv Detail & Related papers (2023-07-15T23:53:37Z)
- On a Uniform Causality Model for Industrial Automation [61.303828551910634]
A Uniform Causality Model for various application areas of industrial automation is proposed.
The resulting model describes the behavior of Cyber-Physical Systems mathematically.
It is shown that the model can work as a basis for the application of new approaches in industrial automation that focus on machine learning.
arXiv Detail & Related papers (2022-09-20T11:23:51Z)
- Using Ontologies for the Formalization and Recognition of Criticality for Automated Driving [0.0]
Recent advances suggest the ability to leverage relevant knowledge in handling the inherently open and complex context of the traffic world.
This paper demonstrates that ontologies are a powerful tool for modeling and formalizing factors associated with criticality in the environment of automated vehicles.
We elaborate on the modular approach, present a publicly available implementation, and evaluate the method by means of a large-scale drone data set of urban traffic scenarios.
arXiv Detail & Related papers (2022-05-03T14:32:11Z)
- Trying to Outrun Causality with Machine Learning: Limitations of Model Explainability Techniques for Identifying Predictive Variables [7.106986689736828]
We show that machine learning algorithms are not as flexible as they might seem, and are instead highly sensitive to the underlying causal structure in the data.
We provide some alternative recommendations for researchers wanting to explore the data for important variables.
arXiv Detail & Related papers (2022-02-20T17:48:54Z)
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is inferring the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- CausalAF: Causal Autoregressive Flow for Safety-Critical Driving Scenario Generation [34.45216283597149]
We propose a flow-based generative framework, Causal Autoregressive Flow (CausalAF).
CausalAF encourages the generative model to uncover and follow the causal relationship among generated objects.
We show that using generated scenarios as additional training samples empirically improves the robustness of autonomous driving algorithms.
arXiv Detail & Related papers (2021-10-26T18:07:48Z)
- CausalCity: Complex Simulations with Agency for Causal Discovery and Reasoning [68.74447489372037]
We present a high-fidelity simulation environment that is designed for developing algorithms for causal discovery and counterfactual reasoning.
A core component of our work is to introduce "agency", such that it is simple to define and create complex scenarios.
We perform experiments with three state-of-the-art methods to create baselines and highlight the affordances of this environment.
arXiv Detail & Related papers (2021-06-25T00:21:41Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.