Robustness testing of AI systems: A case study for traffic sign
recognition
- URL: http://arxiv.org/abs/2108.06159v1
- Date: Fri, 13 Aug 2021 10:29:09 GMT
- Title: Robustness testing of AI systems: A case study for traffic sign
recognition
- Authors: Christian Berghoff and Pavol Bielik and Matthias Neu and Petar Tsankov
and Arndt von Twickel
- Abstract summary: This paper presents how the robustness of AI systems can be practically examined and which methods and metrics can be used to do so.
The robustness testing methodology is described and analysed for the example use case of traffic sign recognition in autonomous driving.
- Score: 13.395753930904108
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, AI systems, in particular neural networks, have seen a
tremendous increase in performance, and they are now used in a broad range of
applications. Unlike classical symbolic AI systems, neural networks are trained
using large data sets and their inner structure containing possibly billions of
parameters does not lend itself to human interpretation. As a consequence, it
is so far not feasible to provide broad guarantees for the correct behaviour of
neural networks during operation if they process input data that significantly
differ from those seen during training. However, many applications of AI
systems are security- or safety-critical, and hence require obtaining
statements on the robustness of the systems when facing unexpected events,
whether they occur naturally or are induced by an attacker in a targeted way.
As a step towards developing robust AI systems for such applications, this
paper presents how the robustness of AI systems can be practically examined and
which methods and metrics can be used to do so. The robustness testing
methodology is described and analysed for the example use case of traffic sign
recognition in autonomous driving.
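As a rough illustration of what such a robustness metric can look like in practice (a minimal sketch, not the methodology from the paper), the snippet below estimates the robust accuracy of a traffic sign classifier under a small L-infinity-bounded one-step FGSM perturbation; the `model` and data `loader` are assumed placeholders, and pixel values are assumed to lie in [0, 1].

```python
# Hedged sketch: robust accuracy under a one-step FGSM attack.
import torch
import torch.nn.functional as F

def robust_accuracy(model, loader, epsilon=4 / 255, device="cpu"):
    """Fraction of samples still classified correctly after a one-step FGSM perturbation."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        images.requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        grad = torch.autograd.grad(loss, images)[0]
        # Move every pixel by epsilon in the direction of the loss gradient's sign,
        # then clamp back to the assumed [0, 1] pixel range.
        adv = (images.detach() + epsilon * grad.sign()).clamp(0.0, 1.0)
        with torch.no_grad():
            predictions = model(adv).argmax(dim=1)
        correct += (predictions == labels).sum().item()
        total += labels.numel()
    return correct / total
```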
Related papers
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
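The paper derives its Adversarial Rate via formal verification; the sketch below is only a crude empirical stand-in that estimates, for an assumed `policy` callable returning an action index, the fraction of observed states in which some random bounded perturbation changes the selected action.

```python
# Hedged sketch: an empirical approximation of an "adversarial rate" for a DRL policy.
import numpy as np

def empirical_adversarial_rate(policy, states, epsilon=0.05, trials=100, rng=None):
    rng = rng or np.random.default_rng(0)
    flipped = 0
    for s in states:
        base_action = policy(s)
        for _ in range(trials):
            # Sample a random perturbation bounded by epsilon in each coordinate.
            delta = rng.uniform(-epsilon, epsilon, size=np.shape(s))
            if policy(s + delta) != base_action:
                flipped += 1
                break
    return flipped / len(states)
```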
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
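A minimal sketch of the Random Forest flavour of such a behavioral authentication system is shown below; the feature matrix (per-window driving-signal statistics) and driver labels are hypothetical placeholders, not the paper's data or features.

```python
# Illustrative sketch only: a behavioral driver-identification baseline with a Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 24))      # 1000 driving windows x 24 hypothetical behavioral features
y = rng.integers(0, 5, size=1000)    # 5 enrolled drivers (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("driver identification accuracy:", clf.score(X_test, y_test))
```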
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Autoencoder-based Semantic Novelty Detection: Towards Dependable AI-based Systems [3.0799158006789056]
We propose a new architecture for autoencoder-based semantic novelty detection.
We demonstrate that such a semantic novelty detection outperforms autoencoder-based novelty detection approaches known from literature.
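The paper's specific semantic architecture is not reproduced here, but the generic autoencoder thresholding idea it builds on can be sketched as follows; `autoencoder` is an assumed torch.nn.Module trained on in-distribution data.

```python
# Sketch under assumptions: plain reconstruction-error novelty detection.
import torch

@torch.no_grad()
def novelty_scores(autoencoder, inputs):
    """Per-sample mean squared reconstruction error; higher means more novel."""
    recon = autoencoder(inputs)
    return ((recon - inputs) ** 2).flatten(1).mean(dim=1)

@torch.no_grad()
def calibrate_threshold(autoencoder, in_dist_inputs, quantile=0.99):
    """Pick a threshold so roughly 1% of known in-distribution data is flagged."""
    scores = novelty_scores(autoencoder, in_dist_inputs)
    return torch.quantile(scores, quantile).item()
```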
arXiv Detail & Related papers (2021-08-24T17:27:19Z)
- How to Reach Real-Time AI on Consumer Devices? Solutions for Programmable and Custom Architectures [7.085772863979686]
Deep neural networks (DNNs) have led to large strides in various Artificial Intelligence (AI) inference tasks, such as object and speech recognition.
However, deploying such AI models across commodity devices faces significant challenges.
We present techniques for achieving real-time performance following a cross-stack approach.
arXiv Detail & Related papers (2021-06-21T11:23:12Z)
- Understanding Neural Code Intelligence Through Program Simplification [3.9704927572880253]
We propose a model-agnostic approach to identify critical input features for models in code intelligence systems.
Our approach, SIVAND, uses simplification techniques that reduce the size of input programs of a CI model.
We believe that SIVAND's extracted features may help understand neural CI systems' predictions and learned behavior.
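SIVAND itself builds on delta debugging; the simpler greedy variant below only illustrates the underlying idea of shrinking an input program while the model's prediction stays unchanged. `predict` is an assumed function mapping a token list to a label.

```python
# Hedged sketch: greedy input reduction that preserves a code model's prediction.
def reduce_program(tokens, predict):
    target = predict(tokens)
    reduced = list(tokens)
    changed = True
    while changed:
        changed = False
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            # Keep the deletion only if the model's prediction is preserved.
            if candidate and predict(candidate) == target:
                reduced = candidate
                changed = True
                break
    return reduced  # tokens the model appears to rely on
```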
arXiv Detail & Related papers (2021-06-07T05:44:29Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the robustness of the model against different types of unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
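As an illustration of runtime monitoring in this spirit (a sketch under assumptions, not the paper's exact monitor), one can record the activation range of each monitored neuron on training data and flag inputs whose activations leave those ranges; `features` is an assumed function returning the monitored layer's activations as a 2-D tensor.

```python
# Hedged sketch: activation-range monitoring of a chosen network layer.
import torch

class RangeMonitor:
    def __init__(self):
        self.low = None
        self.high = None

    @torch.no_grad()
    def fit(self, features, train_inputs):
        # Record per-neuron minimum and maximum activations on training data.
        acts = features(train_inputs)
        self.low, self.high = acts.min(dim=0).values, acts.max(dim=0).values

    @torch.no_grad()
    def is_unsafe(self, features, x, tolerance=0.0):
        # Flag inputs whose activations fall outside the recorded ranges.
        acts = features(x)
        outside = (acts < self.low - tolerance) | (acts > self.high + tolerance)
        return outside.any(dim=1)
```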
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Out-of-Distribution Detection for Automotive Perception [58.34808836642603]
Neural networks (NNs) are widely used for object classification in autonomous driving.
NNs can fail on input data not well represented by the training dataset, known as out-of-distribution (OOD) data.
This paper presents a method for determining whether inputs are OOD, which does not require OOD data during training and does not increase the computational cost of inference.
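A common baseline with exactly these two properties, though not necessarily the paper's method, is to threshold the classifier's maximum softmax probability, with the threshold calibrated on in-distribution data only.

```python
# Generic baseline sketch: max-softmax-probability OOD flagging.
import torch
import torch.nn.functional as F

@torch.no_grad()
def max_softmax_score(model, x):
    return F.softmax(model(x), dim=1).max(dim=1).values

@torch.no_grad()
def is_ood(model, x, threshold=0.5):
    # Inputs with low top-class confidence are flagged as out-of-distribution.
    return max_softmax_score(model, x) < threshold
```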
arXiv Detail & Related papers (2020-11-03T01:46:35Z)
- A Novel Anomaly Detection Algorithm for Hybrid Production Systems based on Deep Learning and Timed Automata [73.38551379469533]
DAD (DeepAnomalyDetection) is a new approach for automatic model learning and anomaly detection in hybrid production systems.
It combines deep learning and timed automata to create a behavioral model from observations.
The algorithm has been applied to a few data sets, including two from real systems, and has shown promising results.
arXiv Detail & Related papers (2020-10-29T08:27:43Z)
- Experimental Review of Neural-based approaches for Network Intrusion Management [8.727349339883094]
We provide an experimental-based review of neural-based methods applied to intrusion detection issues.
We offer a complete view of the most prominent neural-based techniques relevant to intrusion detection, including deep learning-based approaches and weightless neural networks.
Our evaluation quantifies the value of neural networks, particularly when state-of-the-art datasets are used to train the models.
arXiv Detail & Related papers (2020-09-18T18:32:24Z)
- Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression [4.788163807490198]
It has been shown that deep neural networks (DNNs) are not robust, and that adversarial examples can cause a model to make false predictions.
The paper considers the problem of efficiently detecting adversarial examples in learning-enabled components (LECs) used for regression in cyber-physical systems (CPS).
We demonstrate the method using an advanced emergency braking system implemented in an open source simulator for self-driving cars.
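In the spirit of conformal anomaly detection (a hedged sketch, not the paper's exact procedure), a test input can be flagged when its nonconformity score, here simply an assumed per-input prediction-error `score_fn`, yields a small p-value against a clean calibration set.

```python
# Hedged sketch: p-value-based flagging of suspicious inputs.
import numpy as np

def p_value(score_fn, calibration_inputs, x):
    # Compare the test score against nonconformity scores from clean calibration data.
    cal_scores = np.array([score_fn(c) for c in calibration_inputs])
    s = score_fn(x)
    return (np.sum(cal_scores >= s) + 1) / (len(cal_scores) + 1)

def is_adversarial(score_fn, calibration_inputs, x, alpha=0.05):
    # A small p-value means the input is unusually hard to explain by clean data.
    return p_value(score_fn, calibration_inputs, x) < alpha
```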
arXiv Detail & Related papers (2020-03-21T11:15:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.