MarMot: Metamorphic Runtime Monitoring of Autonomous Driving Systems
- URL: http://arxiv.org/abs/2310.07414v3
- Date: Fri, 6 Sep 2024 10:29:41 GMT
- Title: MarMot: Metamorphic Runtime Monitoring of Autonomous Driving Systems
- Authors: Jon Ayerdi, Asier Iriarte, Pablo Valle, Ibai Roman, Miren Illarramendi, Aitor Arrieta
- Abstract summary: We propose MarMot, an online monitoring approach for Autonomous Driving Systems (ADSs) based on Metamorphic Relations (MRs).
MarMot estimates the uncertainty of the ADS at runtime, allowing the identification of anomalous situations that are likely to cause a faulty behavior of the ADS.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous Driving Systems (ADSs) are complex Cyber-Physical Systems (CPSs) that must ensure safety even in uncertain conditions. Modern ADSs often employ Deep Neural Networks (DNNs), which may not produce correct results in every possible driving scenario. Thus, an approach to estimate the confidence of an ADS at runtime is necessary to prevent potentially dangerous situations. In this paper we propose MarMot, an online monitoring approach for ADSs based on Metamorphic Relations (MRs), which are properties of a system that hold among multiple inputs and the corresponding outputs. Using domain-specific MRs, MarMot estimates the uncertainty of the ADS at runtime, allowing the identification of anomalous situations that are likely to cause a faulty behavior of the ADS, such as driving off the road. We perform an empirical assessment of MarMot with five different MRs, using two different subject ADSs, including a small-scale physical ADS and a simulated ADS. Our evaluation encompasses the identification of both external anomalies, e.g., fog, as well as internal anomalies, e.g., faulty DNNs due to mislabeled training data. Our results show that MarMot can identify up to 65% of the external anomalies and 100% of the internal anomalies in the physical ADS, and up to 54% of the external anomalies and 88% of the internal anomalies in the simulated ADS. With these results, MarMot outperforms or is comparable to other state-of-the-art approaches, including SelfOracle, Ensemble, and MC Dropout-based ADS monitors.
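For intuition, here is a minimal sketch of how a single metamorphic relation could be checked at runtime. All names (ads_predict, the mirror MR, the threshold value) are hypothetical illustrations under assumed conventions, not MarMot's actual MRs or API:

```python
import numpy as np

def ads_predict(frame: np.ndarray) -> float:
    """Hypothetical stand-in for the ADS: maps a camera frame
    to a steering angle in [-1, 1]."""
    return float(np.tanh(frame.mean() / 255.0 - 0.5))

def mr_violation(frame: np.ndarray) -> float:
    """Example MR: horizontally mirroring the input frame should
    (approximately) negate the predicted steering angle."""
    original = ads_predict(frame)
    mirrored = ads_predict(frame[:, ::-1])
    # Deviation from the expected relation |f(x) + f(mirror(x))| ~ 0.
    return abs(original + mirrored)

def monitor(frame: np.ndarray, threshold: float = 0.2) -> bool:
    """Flag the current input as anomalous when the MR deviation,
    used as an uncertainty proxy, exceeds a calibrated threshold."""
    return mr_violation(frame) > threshold

frame = np.random.default_rng(0).integers(0, 256, (120, 160), dtype=np.uint8)
print("anomalous:", monitor(frame.astype(np.float32)))
```

In this scheme, the threshold would be calibrated offline on nominal driving data, and several MRs would be combined into a single uncertainty estimate.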
Related papers
- Testing the Fault-Tolerance of Multi-Sensor Fusion Perception in Autonomous Driving Systems [14.871090150807929]
We build fault models for cameras and LiDAR in AVs and inject them into the MSF perception-based ADS to test its behaviors in test scenarios.
We design a feedback-guided differential fuzzer to discover the safety violations of MSF perception-based ADS caused by the injected sensor faults.
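As a rough illustration of the fault-injection side of such a setup (the paper's concrete fault models and fuzzer are not reproduced here), one might corrupt camera frames before they reach the perception stack; inject_camera_fault and its two fault models below are assumptions for illustration:

```python
import numpy as np

def inject_camera_fault(frame: np.ndarray, fault: str,
                        rng: np.random.Generator) -> np.ndarray:
    """Two hypothetical camera fault models; the paper's fault
    taxonomy is richer than this."""
    if fault == "gaussian_noise":
        noisy = frame.astype(np.float64) + rng.normal(0.0, 25.0, frame.shape)
        return np.clip(noisy, 0, 255).astype(frame.dtype)
    if fault == "blackout":
        # Complete sensor dropout: the camera delivers an all-black frame.
        return np.zeros_like(frame)
    raise ValueError(f"unknown fault model: {fault}")

# A differential fuzzer would run the ADS on the clean and the
# fault-injected streams and flag divergent, safety-violating behavior.
```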
arXiv Detail & Related papers (2025-04-18T02:37:55Z)
- MARL-OT: Multi-Agent Reinforcement Learning Guided Online Fuzzing to Detect Safety Violation in Autonomous Driving Systems [1.1677228160050082]
This paper introduces MARL-OT, a scalable framework that leverages MARL to detect safety violations of Autonomous Driving Systems (ADSs).
MARL-OT employs MARL for high-level guidance, triggering various dangerous scenarios for the rule-based online fuzzer to explore potential safety violations of ADSs.
Our approach improves the detected safety violation rate by up to 136.2% compared to the state-of-the-art (SOTA) testing technique.
arXiv Detail & Related papers (2025-01-24T12:34:04Z)
- Facial Action Unit Detection by Adaptively Constraining Self-Attention and Causally Deconfounding Sample [53.23474626420103]
Facial action unit (AU) detection remains a challenging task, due to the subtlety, dynamics, and diversity of AUs.
We propose a novel AU detection framework called AC2D by adaptively constraining self-attention weight distribution.
Our method achieves competitive performance compared to state-of-the-art AU detection approaches on challenging benchmarks.
arXiv Detail & Related papers (2024-10-02T05:51:24Z)
- Evaluating the Robustness of LiDAR-based 3D Obstacles Detection and Its Impacts on Autonomous Driving Systems [4.530172587010801]
We study the impact of built-in inaccuracies in LiDAR sensors on LiDAR-3D obstacle detection models.
We apply ET to evaluate the robustness of five classic LiDAR-3D obstacle detection models.
We find that even very subtle changes in point cloud data may introduce a non-trivial decrease in the detection performance.
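A minimal sketch of the kind of perturbation such a robustness study applies, assuming a hypothetical detect function; the paper's actual noise models follow the sensors' built-in inaccuracies:

```python
from typing import Optional

import numpy as np

def perturb_point_cloud(points: np.ndarray, sigma_m: float = 0.02,
                        rng: Optional[np.random.Generator] = None) -> np.ndarray:
    """Add small per-point jitter (in meters) to an (N, 3) LiDAR point
    cloud, mimicking built-in ranging inaccuracies of the sensor."""
    rng = rng if rng is not None else np.random.default_rng()
    return points + rng.normal(0.0, sigma_m, points.shape)

# A robustness check compares detections on clean vs. perturbed clouds,
# e.g. detect(points) vs. detect(perturb_point_cloud(points)).
```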
arXiv Detail & Related papers (2024-08-24T19:10:07Z)
- Characterization and Mitigation of Insufficiencies in Automated Driving Systems [0.5842419815638352]
Automated Driving (AD) systems have the potential to increase safety, comfort and energy efficiency.
The commercial deployment and wide adoption of ADS have been moderate, partially due to system functional insufficiencies (FI) that undermine passenger safety and lead to hazardous situations on the road.
This study aims to formulate a generic architectural design pattern to improve FI mitigation and enable faster commercial deployment of ADS.
arXiv Detail & Related papers (2024-04-15T08:19:13Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the pressing need for safety in driving systems, no solution to the problem of adapting MOT to test-time domain shift had previously been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Boundary State Generation for Testing and Improvement of Autonomous Driving Systems [8.670873561640903]
We present GENBO, a novel test generator for autonomous driving systems (ADSs).
We use the boundary conditions it generates to augment the initial training dataset and retrain the DNN model under test.
Our evaluation results show that the retrained model has, on average, up to 3x higher success rate on a separate set of evaluation tracks with respect to the original DNN model.
arXiv Detail & Related papers (2023-07-20T05:07:51Z)
- Diffusion Denoising Process for Perceptron Bias in Out-of-distribution Detection [67.49587673594276]
We introduce a new perceptron bias assumption that suggests discriminator models are more sensitive to certain features of the input, leading to the overconfidence problem.
We demonstrate that the diffusion denoising process (DDP) of DMs serves as a novel form of asymmetric interpolation, which is well-suited to enhance the input and mitigate the overconfidence problem.
Our experiments on CIFAR10, CIFAR100, and ImageNet show that our method outperforms SOTA approaches.
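Schematically, the approach scores an input by how much the diffusion model's denoising round trip changes it; noise_and_denoise and distance below are placeholder names, not the paper's API:

```python
def ood_score(x, noise_and_denoise, distance):
    """Round-trip the input through the diffusion model's noising and
    denoising steps, then score it by how far it moved: in-distribution
    inputs tend to be reconstructed more faithfully than OOD inputs."""
    x_hat = noise_and_denoise(x)  # placeholder for the DDP round trip
    return distance(x, x_hat)     # e.g. an L2 or perceptual distance
```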
arXiv Detail & Related papers (2022-11-21T08:45:08Z)
- DeepGuard: A Framework for Safeguarding Autonomous Driving Systems from Inconsistent Behavior [0.1529342790344802]
Deep neural network (DNN)-based autonomous driving systems (ADSs) are expected to reduce road accidents and improve safety in the transportation domain.
However, DNN-based ADSs sometimes exhibit erroneous or unexpected behaviors under unexpected driving conditions, which may cause accidents.
This study proposes an autoencoder and time series analysis based anomaly detection system to prevent the safety critical inconsistent behavior of autonomous vehicles at runtime.
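In outline, such a monitor thresholds the autoencoder's reconstruction error over a sliding window of recent inputs; the sketch below is illustrative, not DeepGuard's implementation:

```python
from collections import deque

import numpy as np

class ReconstructionMonitor:
    """Flag inconsistent behavior when the smoothed reconstruction error
    of recent inputs exceeds a threshold fit on nominal driving data."""

    def __init__(self, autoencoder, threshold: float, window: int = 10):
        self.autoencoder = autoencoder  # assumed callable: x -> reconstruction
        self.threshold = threshold
        self.errors = deque(maxlen=window)

    def step(self, x: np.ndarray) -> bool:
        err = float(np.mean((x - self.autoencoder(x)) ** 2))
        self.errors.append(err)
        # Simple time-series smoothing over the last `window` errors.
        return float(np.mean(self.errors)) > self.threshold
```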
arXiv Detail & Related papers (2021-11-18T06:00:54Z)
- DAE: Discriminatory Auto-Encoder for multivariate time-series anomaly detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE).
It uses the baseline of a regular LSTM-based auto-encoder but with several decoders, each getting data of a specific flight phase.
Results show that the DAE improves both the accuracy and the speed of detection.
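The distinguishing design choice, a shared encoder feeding one decoder per flight phase, can be sketched as follows (a toy linear model for illustration; the paper's DAE is LSTM-based):

```python
import torch
import torch.nn as nn

class DiscriminatoryAE(nn.Module):
    """Toy rendering of the DAE idea: one shared encoder plus a separate
    decoder per flight phase, so each decoder specializes in reconstructing
    data from its own phase and sharpens phase-specific anomalies."""

    def __init__(self, n_features: int, hidden: int, n_phases: int):
        super().__init__()
        self.encoder = nn.Linear(n_features, hidden)
        self.decoders = nn.ModuleList(
            nn.Linear(hidden, n_features) for _ in range(n_phases)
        )

    def forward(self, x: torch.Tensor, phase: int) -> torch.Tensor:
        # Route the shared latent code to the decoder of the current phase.
        return self.decoders[phase](torch.relu(self.encoder(x)))
```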
arXiv Detail & Related papers (2021-09-08T14:07:55Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation in the input of the neural network (NN), the white-box attacks can produce infeasible solutions in up to 86% of cases.
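For intuition, a one-step white-box gradient attack of the kind benchmarked there can be sketched as follows (an FGSM-style illustration; model, loss_fn, and target stand in for the paper's power-allocation setup):

```python
import torch

def fgsm_perturb(model, x: torch.Tensor, loss_fn, target, eps: float = 0.01):
    """One-step white-box attack: move the input along the sign of the loss
    gradient, so a small perturbation can push the power-allocation NN
    toward infeasible outputs."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), target)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
```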
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Hidden Incentives for Auto-Induced Distributional Shift [11.295927026302573]
We introduce the term auto-induced distributional shift (ADS) to describe the phenomenon of an algorithm causing a change in the distribution of its own inputs.
Our goal is to ensure that machine learning systems do not leverage ADS to increase performance when doing so could be undesirable.
We demonstrate that changes to the learning algorithm, such as the introduction of meta-learning, can cause hidden incentives for auto-induced distributional shift (HI-ADS) to be revealed.
arXiv Detail & Related papers (2020-09-19T03:31:27Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
For autonomous vehicles, it is of primary importance that the resulting driving decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)