Detecting Adversarial Examples in Learning-Enabled Cyber-Physical
Systems using Variational Autoencoder for Regression
- URL: http://arxiv.org/abs/2003.10804v1
- Date: Sat, 21 Mar 2020 11:15:33 GMT
- Title: Detecting Adversarial Examples in Learning-Enabled Cyber-Physical
Systems using Variational Autoencoder for Regression
- Authors: Feiyang Cai and Jiani Li and Xenofon Koutsoukos
- Abstract summary: It has been shown that deep neural networks (DNN) are not robust and adversarial examples can cause the model to make a false prediction.
The paper considers the problem of efficiently detecting adversarial examples in LECs used for regression in CPS.
We demonstrate the method using an advanced emergency braking system implemented in an open source simulator for self-driving cars.
- Score: 4.788163807490198
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning-enabled components (LECs) are widely used in cyber-physical systems
(CPS) since they can handle the uncertainty and variability of the environment
and increase the level of autonomy. However, it has been shown that LECs such
as deep neural networks (DNN) are not robust and adversarial examples can cause
the model to make a false prediction. The paper considers the problem of
efficiently detecting adversarial examples in LECs used for regression in CPS.
The proposed approach is based on inductive conformal prediction and uses a
regression model based on a variational autoencoder. The architecture takes
into account both the input and the neural network prediction when detecting
adversarial and, more generally, out-of-distribution examples. We
demonstrate the method using an advanced emergency braking system implemented
in an open source simulator for self-driving cars where a DNN is used to
estimate the distance to an obstacle. The simulation results show that the
method can effectively detect adversarial examples with a short detection
delay.
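The abstract describes the detector only at a high level. The following is a minimal sketch of the inductive-conformal-prediction machinery it refers to, under explicit assumptions: the nonconformity measure (input reconstruction error plus the squared gap between a VAE regression output `y_vae` and the LEC prediction `y_lec`) and the alarm threshold are illustrative stand-ins, not the authors' exact formulation.

```python
# Minimal sketch of inductive conformal detection for a regression LEC.
# The nonconformity measure below (reconstruction error + disagreement with
# the LEC's prediction) is an illustrative assumption standing in for the
# paper's VAE-for-regression score; thresholds are likewise illustrative.
import numpy as np

def nonconformity(x, x_recon, y_vae, y_lec):
    """Higher score = less conforming to the training distribution."""
    recon_err = float(np.mean((x - x_recon) ** 2))   # how well the VAE explains the input
    pred_gap = float((y_vae - y_lec) ** 2)           # disagreement of the two regressors
    return recon_err + pred_gap

def p_value(score, calibration_scores):
    """ICP p-value: fraction of calibration scores at least as nonconforming."""
    calibration_scores = np.asarray(calibration_scores)
    return (np.sum(calibration_scores >= score) + 1) / (len(calibration_scores) + 1)

# Toy usage with random stand-ins for the VAE outputs (distances in metres).
rng = np.random.default_rng(0)
calib_scores = rng.gamma(shape=2.0, scale=0.1, size=500)   # scores from a held-out calibration set
x, x_recon = rng.normal(size=64), rng.normal(size=64)
s = nonconformity(x, x_recon, y_vae=11.8, y_lec=12.3)
print("alarm" if p_value(s, calib_scores) < 0.05 else "ok")
```

In the emergency braking example, `y_lec` would correspond to the DNN's distance-to-obstacle estimate, and a sustained run of low p-values would be treated as evidence of adversarial or out-of-distribution inputs.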
Related papers
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
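A rough, hedged illustration of how that observation could inform risk-sensitive decisions: for a squared-error regressor the optimal constant prediction is the mean training label, so a prediction that collapses toward it is taken as a hint that the input may be OOD. The threshold and fallback action below are assumptions, not the paper's procedure.

```python
# Illustrative use of the "predictions drift toward a constant" observation:
# a prediction close to the optimal constant solution (the mean training
# label for squared error) is treated as a hint that the input may be OOD.
# The threshold k and the fallback action are illustrative assumptions.
import numpy as np

def risk_sensitive_action(y_pred, train_labels, k=0.25):
    mu, sigma = float(np.mean(train_labels)), float(np.std(train_labels))
    suspect_ood = abs(y_pred - mu) < k * sigma      # close to the constant solution
    return ("cautious_fallback" if suspect_ood else "nominal_control"), suspect_ood

distances = np.array([5.0, 12.0, 30.0, 48.0, 75.0])  # toy training labels (metres)
print(risk_sensitive_action(33.5, distances))
```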
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they do not work as expected.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- Improving Prediction Confidence in Learning-Enabled Autonomous Systems [2.66512000865131]
We utilize a feedback loop between learning-enabled components used for classification and the sensors of an autonomous system in order to improve the confidence of the predictions.
We design a classifier using Inductive Conformal Prediction (ICP) based on a triplet network architecture in order to learn representations that can be used to quantify the similarity between test and training examples.
A feedback loop that queries the sensors for a new input is used to further refine the predictions and increase the classification accuracy.
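A compact sketch of this loop under stated assumptions: embeddings are taken from a (not shown) triplet network, nonconformity is a k-nearest-neighbour distance in embedding space, and `get_embedding` / `query_sensor` are hypothetical stubs for the model and sensor interface.

```python
# Sketch of ICP classification with a feedback loop that re-queries the sensor
# when the credibility (best class p-value) is too low.  All thresholds and
# the k-NN nonconformity measure are illustrative assumptions.
import numpy as np

def knn_nonconformity(z, class_embeddings, k=5):
    """Mean distance from embedding z to its k nearest training embeddings of one class."""
    d = np.linalg.norm(class_embeddings - z, axis=1)
    return float(np.sort(d)[:k].mean())

def class_p_values(z, calib_scores_by_class, train_emb_by_class):
    """One ICP p-value per candidate class."""
    pvals = {}
    for c, calib in calib_scores_by_class.items():
        s = knn_nonconformity(z, train_emb_by_class[c])
        pvals[c] = (np.sum(np.asarray(calib) >= s) + 1) / (len(calib) + 1)
    return pvals

def classify_with_feedback(get_embedding, query_sensor, calib_scores_by_class,
                           train_emb_by_class, credibility_min=0.1, max_queries=3):
    """Query the sensor again until the best p-value (credibility) is acceptable."""
    for _ in range(max_queries):
        z = get_embedding(query_sensor())                  # fresh input from the sensor
        pvals = class_p_values(z, calib_scores_by_class, train_emb_by_class)
        best = max(pvals, key=pvals.get)
        if pvals[best] >= credibility_min:
            return best, pvals
    return None, pvals                                     # abstain: still not confident
```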
arXiv Detail & Related papers (2021-10-07T00:40:34Z)
- Out-of-Distribution Example Detection in Deep Neural Networks using Distance to Modelled Embedding [0.0]
We present Distance to Modelled Embedding (DIME) that we use to detect out-of-distribution examples during prediction time.
By approximating the training set embedding into feature space as a linear hyperplane, we derive a simple, unsupervised, highly performant and computationally efficient method.
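A minimal sketch of the distance-to-a-linear-subspace idea, assuming a PCA-style fit of the training embeddings; the rank and any alarm threshold are illustrative choices, not the authors' implementation.

```python
# Sketch: fit a low-rank linear approximation of the training embeddings and
# use the residual distance of a test embedding to that subspace as an OOD
# score.  Rank and threshold are illustrative assumptions.
import numpy as np

def fit_subspace(train_emb, rank=10):
    mean = train_emb.mean(axis=0)
    _, _, vt = np.linalg.svd(train_emb - mean, full_matrices=False)
    return mean, vt[:rank]                                # orthonormal basis of the subspace

def distance_to_subspace(z, mean, basis):
    r = z - mean
    return float(np.linalg.norm(r - basis.T @ (basis @ r)))   # norm of the residual

rng = np.random.default_rng(0)
train_emb = rng.normal(size=(1000, 64))                   # toy training embeddings
mean, basis = fit_subspace(train_emb, rank=10)
print(distance_to_subspace(rng.normal(size=64), mean, basis))
```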
arXiv Detail & Related papers (2021-08-24T12:28:04Z)
- Adversarial Examples Detection with Bayesian Neural Network [57.185482121807716]
We propose a new framework to detect adversarial examples motivated by the observations that random components can improve the smoothness of predictors.
We propose a novel Bayesian adversarial example detector, short for BATer, to improve the performance of adversarial example detection.
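The sketch below captures only the underlying intuition that randomized predictors expose adversarial inputs, using Monte Carlo dropout as a stand-in for the Bayesian neural network in BATer (an assumption, not the paper's detector): inputs whose outputs vary strongly across stochastic forward passes are flagged.

```python
# Hedged sketch of detection via randomized forward passes (MC dropout as a
# stand-in for a Bayesian neural network).  Model size and the alarm
# threshold are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Dropout(0.3), nn.Linear(64, 10))

def randomized_score(x, n_samples=20):
    """Mean per-class variance of the softmax output across stochastic passes."""
    model.train()                       # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.var(dim=0).mean(dim=-1)   # one score per input in the batch

x = torch.randn(4, 32)                  # toy batch
print(randomized_score(x) > 0.01)       # True = flagged as suspicious
```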
arXiv Detail & Related papers (2021-05-18T15:51:24Z)
- Detection of Dataset Shifts in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression [1.5039745292757671]
We propose an approach to detect dataset shifts effectively for regression problems.
Our approach is based on inductive conformal anomaly detection and utilizes a regression model based on a variational autoencoder.
We demonstrate our approach by using an advanced emergency braking system implemented in an open-source simulator for self-driving cars.
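Because a dataset shift affects a sequence of inputs rather than a single example, per-input conformal p-values are usually aggregated over time. The power martingale below is one standard aggregation in inductive conformal anomaly detection; treating it as the exact statistic used in the paper is an assumption.

```python
# Sketch: aggregate per-input conformal p-values into a sequential statistic.
# Under no shift the p-values are roughly uniform and the log-martingale stays
# small; a run of consistently small p-values makes it grow.  The epsilon
# value and any alarm threshold are illustrative assumptions.
import numpy as np

def log_power_martingale(p_values, epsilon=0.92):
    p = np.clip(np.asarray(p_values, dtype=float), 1e-6, 1.0)
    return float(np.sum(np.log(epsilon * p ** (epsilon - 1.0))))

rng = np.random.default_rng(1)
nominal = rng.uniform(size=30)              # p-values before a shift
shifted = rng.uniform(high=0.05, size=30)   # consistently small p-values after a shift
print(log_power_martingale(nominal), log_power_martingale(shifted))
```

An alarm would be raised once the log-martingale crosses a chosen threshold.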
arXiv Detail & Related papers (2021-04-14T03:46:37Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the robustness of the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Cross-Validation and Uncertainty Determination for Randomized Neural Networks with Applications to Mobile Sensors [0.0]
Extreme learning machines provide an attractive and efficient method for supervised learning under limited computing resources and green machine learning.
Results are discussed about supervised learning with such networks and regression methods in terms of consistency and bounds for the generalization and prediction error.
arXiv Detail & Related papers (2021-01-06T12:28:06Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- On the Transferability of Adversarial Attacks against Neural Text Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
arXiv Detail & Related papers (2020-11-17T10:45:05Z)
- Real-time Out-of-distribution Detection in Learning-Enabled Cyber-Physical Systems [1.4213973379473654]
Cyber-physical systems benefit from using machine learning components that can handle the uncertainty and variability of the real world.
Deep neural networks, however, introduce new types of hazards that may impact system safety.
Out-of-distribution data may lead to a large error and compromise safety.
arXiv Detail & Related papers (2020-01-28T17:51:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.