An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models
- URL: http://arxiv.org/abs/2002.02175v1
- Date: Thu, 6 Feb 2020 09:49:16 GMT
- Title: An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models
- Authors: Yao Deng, Xi Zheng, Tianyi Zhang, Chen Chen, Guannan Lou, Miryung Kim
- Abstract summary: Convolutional neural networks (CNNs) are a key component in autonomous driving.
Previous work shows CNN-based classification models are vulnerable to adversarial attacks.
This paper presents an in-depth analysis of five adversarial attacks and four defense methods on three driving models.
- Score: 15.007794089091616
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, autonomous driving has attracted much attention from both industry
and academia. Convolutional neural networks (CNNs) are a key component in
autonomous driving and are also increasingly adopted in pervasive computing
settings such as smartphones, wearable devices, and IoT networks. Prior work shows
that CNN-based classification models are vulnerable to adversarial attacks. However,
it is unclear to what extent regression models such as driving models are
vulnerable to adversarial attacks, how effective existing defense techniques are
against them, and what the defense implications are for system and middleware builders.
This paper presents an in-depth analysis of five adversarial attacks and four
defense methods on three driving models. Experiments show that, similar to
classification models, these models are still highly vulnerable to adversarial
attacks. This poses a significant security threat to autonomous driving and should
be taken into account in practice. While each of the four defense methods can
effectively defend against some of the attacks, none of them provides adequate
protection against all five. We derive several implications for system
and middleware builders: (1) when adding a defense component against
adversarial attacks, it is important to deploy multiple defense methods in
tandem to achieve good coverage of the various attacks; (2) a black-box attack is
much less effective than a white-box attack, implying that it is
important to keep model details (e.g., model architecture, hyperparameters)
confidential via model obfuscation; and (3) driving models with a complex
architecture are preferable if computing resources permit, as they are more
resilient to adversarial attacks than simple models.
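To make concrete how an attack on a regression driving model differs from the classification setting, the following sketch adapts a one-step FGSM-style perturbation to a steering-angle regressor: the attacker ascends a mean-squared-error loss on the predicted angle instead of a cross-entropy loss on class logits. This is an illustrative sketch only; the model, input shapes, and epsilon value are assumptions, and the five attacks actually evaluated in the paper may be implemented differently.

    # Illustrative sketch (not the paper's implementation): one-step FGSM-style
    # attack on a steering-angle regression model. The model, tensor shapes,
    # and epsilon are assumptions made for this example.
    import torch
    import torch.nn.functional as F

    def fgsm_on_steering(model: torch.nn.Module,
                         image: torch.Tensor,       # e.g. (1, 3, H, W), pixels in [0, 1]
                         true_angle: torch.Tensor,  # e.g. (1, 1), ground-truth steering angle
                         epsilon: float = 0.01) -> torch.Tensor:
        """Return a perturbed image that increases the steering-angle error
        by one signed-gradient step."""
        image = image.clone().detach().requires_grad_(True)
        predicted_angle = model(image)
        # Regression objective: MSE on the steering angle rather than
        # cross-entropy on class logits, as in classification attacks.
        loss = F.mse_loss(predicted_angle, true_angle)
        loss.backward()
        # Ascend the loss surface in input space, then stay in the valid pixel range.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

Implication (1) above recommends deploying several defense methods in tandem. One plausible way to arrange that in an inference pipeline is sketched below: anomaly detectors can reject a suspicious frame outright, while input-transform defenses (e.g., denoising) pre-process the remaining frames before the driving model is queried. The detector and transform callables are placeholders, not the four defense methods evaluated in the paper.

    # Illustrative sketch of implication (1): multiple defenses stacked in front
    # of a driving model. The detectors and transforms are placeholders, not the
    # specific defense methods studied in the paper.
    from typing import Callable, Sequence
    import torch

    def defended_steering(model: torch.nn.Module,
                          image: torch.Tensor,
                          detectors: Sequence[Callable[[torch.Tensor], bool]],
                          transforms: Sequence[Callable[[torch.Tensor], torch.Tensor]]) -> torch.Tensor:
        """Reject frames flagged by any detector, pre-process the rest with every
        input-transform defense, then query the driving model."""
        if any(detector(image) for detector in detectors):
            # Hand control to a safe fallback policy rather than trusting the model.
            raise RuntimeError("frame flagged as adversarial; use fallback controller")
        for transform in transforms:  # e.g. denoising, feature squeezing
            image = transform(image)
        return model(image)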
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Versatile Defense Against Adversarial Attacks on Image Recognition [2.9980620769521513]
Defending against adversarial attacks in a real-life setting can be compared to the way antivirus software works.
It appears that a defense method based on image-to-image translation may be capable of this.
The trained model improved classification accuracy from nearly zero to an average of 86%.
arXiv Detail & Related papers (2024-03-13T01:48:01Z)
- Improving behavior based authentication against adversarial attack using XAI [3.340314613771868]
We propose an eXplainable AI (XAI) based defense strategy against adversarial attacks in behavior-based authentication scenarios.
A feature selector, trained with our method, can be used as a filter in front of the original authenticator.
We demonstrate that our XAI based defense strategy is effective against adversarial attacks and outperforms other defense strategies.
arXiv Detail & Related papers (2024-02-26T09:29:05Z)
- Efficient Defense Against Model Stealing Attacks on Convolutional Neural Networks [0.548924822963045]
Model stealing attacks can lead to intellectual property theft and other security and privacy risks.
Current state-of-the-art defenses against model stealing attacks suggest adding perturbations to the prediction probabilities.
We propose a simple yet effective and efficient defense alternative.
arXiv Detail & Related papers (2023-09-04T22:25:49Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first defense robust against strong adaptive adversaries and is effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- "What's in the box?!": Deflecting Adversarial Attacks by Randomly Deploying Adversarially-Disjoint Models [71.91835408379602]
Adversarial examples have long been considered a real threat to machine learning models.
We propose an alternative deployment-based defense paradigm that goes beyond the traditional white-box and black-box threat models.
arXiv Detail & Related papers (2021-02-09T20:07:13Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack, called adversarial attacks.
A hacker can, even with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the basis of such algorithms.
We evaluate two effective techniques, namely Dropout and Denoising Autoencoders, and show their success in preventing such attacks from fooling the model.
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
- Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack [35.0689225703137]
A machine learning-based security detection model is susceptible to adversarial evasion attacks.
We propose an approach called Omni to explore methods that create an ensemble of "unexpected models".
In studies with five types of adversarial evasion attacks, we show Omni is a promising approach as a defense strategy.
arXiv Detail & Related papers (2020-11-23T20:02:40Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacking aims to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
- Adversarial Imitation Attack [63.76805962712481]
A practical adversarial attack should require as little knowledge of the attacked model as possible.
Current substitute attacks need pre-trained models to generate adversarial examples.
In this study, we propose a novel adversarial imitation attack.
arXiv Detail & Related papers (2020-03-28T10:02:49Z)