ROCAS: Root Cause Analysis of Autonomous Driving Accidents via Cyber-Physical Co-mutation
- URL: http://arxiv.org/abs/2409.07774v2
- Date: Fri, 13 Sep 2024 23:44:13 GMT
- Title: ROCAS: Root Cause Analysis of Autonomous Driving Accidents via Cyber-Physical Co-mutation
- Authors: Shiwei Feng, Yapeng Ye, Qingkai Shi, Zhiyuan Cheng, Xiangzhe Xu, Siyuan Cheng, Hongjun Choi, Xiangyu Zhang
- Abstract summary: Existing cyber-physical system (CPS) root cause analysis techniques are mainly designed for drones.
We introduce ROCAS, a novel ADS root cause analysis framework featuring cyber-physical co-mutation.
We study 12 categories of ADS accidents and demonstrate the effectiveness and efficiency of ROCAS.
- Score: 16.76106822218872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As autonomous driving systems (ADS) transform our daily lives, their safety is of growing significance. While various testing approaches have emerged to enhance ADS reliability, a crucial gap remains in understanding the causes of accidents. Such post-accident analysis is paramount for enhancing ADS safety and reliability. Existing cyber-physical system (CPS) root cause analysis techniques are mainly designed for drones and cannot handle the unique challenges introduced by the more complex physical environments and deep learning models deployed in ADS. In this paper, we address this gap by offering a formal definition of the ADS root cause analysis problem and introducing ROCAS, a novel ADS root cause analysis framework featuring cyber-physical co-mutation. Our technique uniquely leverages both physical and cyber mutations to precisely identify the accident-triggering entity and pinpoint the misconfiguration of the target ADS responsible for the accident. We further design a differential analysis that identifies the responsible module, reducing the search space for the misconfiguration. We study 12 categories of ADS accidents and demonstrate the effectiveness and efficiency of ROCAS in narrowing down the search space and pinpointing the misconfiguration. We also present detailed case studies on how the identified misconfiguration helps explain the rationale behind accidents.
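The cyber-physical co-mutation idea from the abstract can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in, not ROCAS's actual interface: `replay` is a placeholder for re-running the ADS in simulation, and the scenario/config fields are invented for illustration.

```python
# Hypothetical sketch of a co-mutation search: jointly mutate physical
# entity parameters (scenario) and ADS configuration parameters (config),
# and flag mutation pairs under which the accident no longer reproduces.
def replay(scenario, config):
    # Toy oracle: the accident occurs when the obstacle is close AND the
    # braking threshold is set too low (a planted misconfiguration).
    return scenario["obstacle_dist"] < 10.0 and config["brake_thresh"] < 0.5

def co_mutate(scenario, config, phys_muts, cyber_muts):
    """Return (physical, cyber) mutation pairs that make the accident vanish,
    pointing at the trigger entity and the responsible misconfiguration."""
    causes = []
    for p_key, p_val in phys_muts:
        for c_key, c_val in cyber_muts:
            s = {**scenario, p_key: p_val}
            c = {**config, c_key: c_val}
            if not replay(s, c):  # accident no longer reproduces
                causes.append(((p_key, p_val), (c_key, c_val)))
    return causes

accident_scenario = {"obstacle_dist": 5.0}
deployed_config = {"brake_thresh": 0.3}
found = co_mutate(
    accident_scenario, deployed_config,
    phys_muts=[("obstacle_dist", 5.0), ("obstacle_dist", 20.0)],
    cyber_muts=[("brake_thresh", 0.3), ("brake_thresh", 0.8)],
)
# Mutating either the obstacle distance or the braking threshold avoids
# the accident; only the unmutated pair still reproduces it.
```

In the real framework the replay step would be a full simulation run and the differential analysis would first narrow the candidate configuration keys to a single responsible module; this sketch only shows the search structure.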
Related papers
- Targeted Cause Discovery with Data-Driven Learning [66.86881771339145]
We propose a novel machine learning approach for inferring causal variables of a target variable from observations.
We employ a neural network trained to identify causality through supervised learning on simulated data.
Empirical results demonstrate the effectiveness of our method in identifying causal relationships within large-scale gene regulatory networks.
arXiv Detail & Related papers (2024-08-29T02:21:11Z)
- Characterization and Mitigation of Insufficiencies in Automated Driving Systems [0.5842419815638352]
Automated Driving (AD) systems have the potential to increase safety, comfort and energy efficiency.
The commercial deployment and wide adoption of ADS have been moderate, partially due to system functional insufficiencies (FI) that undermine passenger safety and lead to hazardous situations on the road.
This study aims to formulate a generic architectural design pattern to improve FI mitigation and enable faster commercial deployment of ADS.
arXiv Detail & Related papers (2024-04-15T08:19:13Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
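One way to read the Adversarial Rate is as the fraction of sampled perturbations within a bound that flip a model's decision; the paper's formal-verification definition may differ, so this Monte Carlo sketch is only an assumed interpretation, and `predict` is a toy stand-in for a DRL policy.

```python
import numpy as np

def adversarial_rate(predict, x, eps, n_samples=1000, seed=0):
    """Estimate the fraction of uniform perturbations in [-eps, eps]
    that change the model's decision at input x (assumed metric)."""
    rng = np.random.default_rng(seed)
    base = predict(x)
    flips = 0
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(x + delta) != base:
            flips += 1
    return flips / n_samples

# Toy policy: "brake" (1) iff the distance reading is below 1.0.
predict = lambda x: int(x[0] < 1.0)
rate = adversarial_rate(predict, np.array([0.9]), eps=0.5)
```

A reading of 0.9 sits close to the decision boundary at 1.0, so a sizable share of perturbations flips the decision; shrinking `eps` below the margin drives the estimated rate to zero.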
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- Towards Automated Driving Violation Cause Analysis in Scenario-Based Testing for Autonomous Driving Systems [22.872694649245044]
We propose a novel driving violation cause analysis (DVCA) tool.
Our tool achieves perfect component-level attribution accuracy (100%) and near-perfect (>98%) message-level accuracy.
arXiv Detail & Related papers (2024-01-19T01:12:37Z)
- ACAV: A Framework for Automatic Causality Analysis in Autonomous Vehicle Accident Recordings [5.578446693797519]
Recent fatalities have emphasized the importance of safety validation through large-scale testing.
We propose ACAV, an automated framework designed to conduct causality analysis for AV accident recordings.
We evaluate ACAV on the Apollo ADS, finding that it can identify five distinct types of causal events in 93.64% of 110 accident recordings.
arXiv Detail & Related papers (2024-01-13T12:41:05Z)
- An Explainable Ensemble-based Intrusion Detection System for Software-Defined Vehicle Ad-hoc Networks [0.0]
In this study, we explore the detection of cyber threats in vehicle networks through ensemble-based machine learning.
We propose a model that uses Random Forest and CatBoost as base classifiers, with Logistic Regression reasoning over their outputs to make the final decision.
We observe that our approach improves classification accuracy, and results in fewer misclassifications compared to previous works.
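The ensemble described above is a standard stacking pattern. A minimal sketch with scikit-learn follows; the synthetic data stands in for the paper's VANET traffic features, and `GradientBoostingClassifier` substitutes for CatBoost so the sketch needs no extra dependency.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for vehicle-network intrusion features (the paper's
# dataset is not reproduced here).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),  # CatBoost proxy
    ],
    final_estimator=LogisticRegression(),  # reasons over base-model outputs
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The final Logistic Regression layer is what makes the ensemble's decision explainable: its coefficients show how much weight each base model's prediction carries.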
arXiv Detail & Related papers (2023-12-08T10:39:18Z)
- Causal Structure Learning with Recommendation System [46.90516308311924]
We first formulate the underlying causal mechanism as a causal structural model and describe a general causal structure learning framework grounded in the real-world working mechanism of recommendation systems.
We then derive the learning objective from our framework and propose an augmented Lagrangian solver for efficient optimization.
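An augmented Lagrangian solver of the kind mentioned above can be sketched generically: minimize f(x) subject to an equality constraint h(x) = 0 by alternating an inner unconstrained minimization of f(x) + alpha*h(x) + (rho/2)*h(x)^2 with a dual update on alpha. The toy objective and constraint below are illustrative only, not the paper's formulation.

```python
import numpy as np

# Toy problem: minimize (x1-1)^2 + (x2-2)^2 subject to x1 + x2 = 1.
def f(x):      return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
def grad_f(x): return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])
def h(x):      return x[0] + x[1] - 1.0          # constraint h(x) = 0
def grad_h(x): return np.array([1.0, 1.0])

def aug_lagrangian(x0, rho=1.0, alpha=0.0, outer=20, inner=500, lr=0.05):
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        # Inner loop: gradient descent on L_rho(x, alpha).
        for _ in range(inner):
            g = grad_f(x) + (alpha + rho * h(x)) * grad_h(x)
            x -= lr * g
        alpha += rho * h(x)            # dual (multiplier) update
        rho = min(rho * 1.5, 10.0)     # tighten penalty, capped for stability
    return x

x_star = aug_lagrangian([0.0, 0.0])
# Analytic solution of this toy problem: x = (0, 1)
```

In causal structure learning the constraint is typically an acyclicity function of the weighted adjacency matrix rather than a scalar linear constraint, but the outer/inner loop structure is the same.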
arXiv Detail & Related papers (2022-10-19T02:31:47Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically realizable, adversarial 3D-printed object that causes an AD system to fail to detect it and consequently crash into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses [13.161104978510943]
This survey provides a thorough analysis of different attacks that may jeopardize autonomous driving systems.
It covers adversarial attacks for various deep learning models and attacks in both physical and cyber context.
Some promising research directions are suggested in order to improve deep learning-based autonomous driving safety.
arXiv Detail & Related papers (2021-04-05T06:31:47Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.