A Hybrid Defense Method against Adversarial Attacks on Traffic Sign
Classifiers in Autonomous Vehicles
- URL: http://arxiv.org/abs/2205.01225v1
- Date: Mon, 25 Apr 2022 02:13:31 GMT
- Title: A Hybrid Defense Method against Adversarial Attacks on Traffic Sign
Classifiers in Autonomous Vehicles
- Authors: Zadid Khan, Mashrur Chowdhury, Sakib Mahmud Khan
- Abstract summary: Adversarial attacks can make deep neural network (DNN) models predict incorrect output labels for autonomous vehicles (AVs).
This study develops a resilient traffic sign classifier for AVs that uses a hybrid defense method.
We find that our hybrid defense method achieves 99% average traffic sign classification accuracy for the no attack scenario and 88% average traffic sign classification accuracy for all attack scenarios.
- Score: 4.585587646404074
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks can make deep neural network (DNN) models predict
incorrect output labels, such as misclassified traffic signs, for autonomous
vehicle (AV) perception modules. Resilience against adversarial attacks can
help AVs navigate safely on the road by avoiding misclassification of signs or
objects. This DNN-based study develops a resilient traffic sign classifier for
AVs that uses a hybrid defense method. We use transfer learning to retrain the
Inception-V3 and Resnet-152 models as traffic sign classifiers. This method
also utilizes a combination of three different strategies: random filtering,
ensembling, and local feature mapping. We use the random cropping and resizing
technique for random filtering, plurality voting as the ensembling strategy, and an
optical character recognition model as a local feature mapper. This DNN-based
hybrid defense method has been tested for the no attack scenario and against
well-known untargeted adversarial attacks (e.g., Projected Gradient Descent or
PGD, Fast Gradient Sign Method or FGSM, Momentum Iterative Method or MIM
attack, and Carlini and Wagner or C&W). We find that our hybrid defense method
achieves 99% average traffic sign classification accuracy for the no attack
scenario and 88% average traffic sign classification accuracy for all attack
scenarios. Moreover, the hybrid defense method presented in this study improves
traffic sign classification accuracy compared to traditional defense methods
(i.e., JPEG filtering, feature squeezing, binary filtering, and random
filtering) by up to 6%, 50%, and 55% for FGSM, MIM, and PGD attacks,
respectively.
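The abstract describes the defense pipeline in prose only. Below is a minimal sketch, assuming PyTorch and torchvision, of how the random-filtering and plurality-voting components could be combined at inference time; the class count, crop parameters, number of filtered copies, and the omission of input normalization and fine-tuning are illustrative assumptions rather than the authors' implementation, and the OCR-based local feature mapping step is only noted in a comment.

```python
# Minimal sketch (not the authors' code) of the hybrid inference pipeline:
# random crop-and-resize filtering, two transfer-learned classifiers, and a
# plurality vote over the filtered copies.
import torch
from collections import Counter
from torchvision import models, transforms

NUM_CLASSES = 43  # illustrative traffic sign class count (assumption)

def build_classifiers(num_classes=NUM_CLASSES):
    """Transfer learning: swap the final layers of Inception-V3 and ResNet-152."""
    inception = models.inception_v3(weights="IMAGENET1K_V1")  # aux head unused in eval
    inception.fc = torch.nn.Linear(inception.fc.in_features, num_classes)
    resnet = models.resnet152(weights="IMAGENET1K_V1")
    resnet.fc = torch.nn.Linear(resnet.fc.in_features, num_classes)
    return [inception.eval(), resnet.eval()]

# Random filtering: each copy of the input is randomly cropped and resized back,
# which tends to disrupt pixel-aligned adversarial perturbations.
random_filter = transforms.RandomResizedCrop(size=299, scale=(0.8, 1.0))

@torch.no_grad()
def hybrid_predict(image, classifiers, n_copies=5):
    """Plurality vote over n_copies randomly filtered copies and all classifiers."""
    votes = []
    for _ in range(n_copies):
        x = random_filter(image).unsqueeze(0)  # image: CxHxW tensor in [0, 1]
        for clf in classifiers:
            votes.append(int(clf(x).argmax(dim=1)))
    # The paper additionally uses an OCR model as a local feature mapper; the
    # abstract does not specify how it is fused with the vote, so it is omitted here.
    label, _ = Counter(votes).most_common(1)[0]
    return label
```

In this sketch the defense reduces to a plurality vote over filtered copies and models; in the paper, the transfer-learned classifiers are first retrained on traffic sign data before being used this way.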
Related papers
- A Hybrid Quantum-Classical AI-Based Detection Strategy for Generative Adversarial Network-Based Deepfake Attacks on an Autonomous Vehicle Traffic Sign Classification System [2.962613983209398]
The authors show how a generative adversarial network-based deepfake attack can be crafted to fool AV traffic sign classification systems.
They develop a deepfake traffic sign image detection strategy leveraging hybrid quantum-classical neural networks (NNs).
The results indicate that the hybrid quantum-classical NNs for deepfake detection could achieve similar or higher performance than the baseline classical convolutional NNs in most cases.
arXiv Detail & Related papers (2024-09-25T19:44:56Z)
- Explainable and Trustworthy Traffic Sign Detection for Safe Autonomous Driving: An Inductive Logic Programming Approach [0.0]
We propose an ILP-based approach for stop sign detection in Autonomous Vehicles.
It is more robust against adversarial attacks, as it mimics human-like perception.
It is able to correctly identify all targeted stop signs, even in the presence of PR2 and ADvCam attacks.
arXiv Detail & Related papers (2023-08-30T09:05:52Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
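For context on the G-PGA entry above, here is a minimal sketch of the standard L-infinity PGD attack that G-PGA refines (a single iteration with the step size equal to the budget reduces to FGSM, which the main paper also evaluates against). The epsilon, step size, and iteration count are illustrative assumptions, and the guided mechanism that distinguishes G-PGA is not shown.

```python
# Minimal sketch of standard L-infinity PGD (not the G-PGA implementation).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Untargeted PGD: maximize the loss within an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign ascent step, then project back into the eps-ball and [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```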
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity enlarges the ATSC's cyber-attack surface and increases its vulnerability to various cyber-attack modes.
One such mode is a 'Sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of Sybil vehicle injection to create congestion on one or more approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off road or collide into other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z)
- Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based countermeasure aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which raises the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
- Using Anomaly Feature Vectors for Detecting, Classifying and Warning of Outlier Adversarial Examples [4.096598295525345]
We present DeClaW, a system for detecting, classifying, and warning of adversarial inputs presented to a classification neural network.
Preliminary findings suggest that anomaly feature vectors (AFVs) can help distinguish among several types of adversarial attacks with close to 93% accuracy on the CIFAR-10 dataset.
arXiv Detail & Related papers (2021-07-01T16:00:09Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific class of attacks called adversarial attacks.
A hacker can, with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the basis of such algorithms.
We present two effective techniques, Dropout and Denoising Autoencoders, and demonstrate their success in preventing such attacks from fooling the model.
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
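As a concrete illustration of the denoising-autoencoder defense named in the entry above, here is a minimal sketch assuming PyTorch; the architecture, noise level, and training step are illustrative assumptions, not the reviewed paper's configuration.

```python
# Minimal sketch of a denoising autoencoder used as a pre-processing defense:
# it learns to reconstruct clean images from corrupted copies and is placed in
# front of the classifier at inference time.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(dae, optimizer, clean_batch, noise_std=0.1):
    """One step: reconstruct clean images from noise-corrupted copies."""
    noisy = (clean_batch + noise_std * torch.randn_like(clean_batch)).clamp(0.0, 1.0)
    loss = nn.functional.mse_loss(dae(noisy), clean_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference time the classifier sees dae(x) instead of x.
```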
- Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real-world road sign attacks.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)