Black-box Adversarial Attacks in Autonomous Vehicle Technology
- URL: http://arxiv.org/abs/2101.06092v1
- Date: Fri, 15 Jan 2021 13:18:18 GMT
- Title: Black-box Adversarial Attacks in Autonomous Vehicle Technology
- Authors: K Naveen Kumar, C Vishnu, Reshmi Mitra, C Krishna Mohan
- Abstract summary: Black-box adversarial attacks cause drastic misclassification in critical scene elements leading the autonomous vehicle to crash into other vehicles or pedestrians.
We propose a novel query-based attack method called Modified Simple black-box attack (M-SimBA) to overcome the use of a white-box source in transfer-based attack methods.
We show that the proposed model outperforms existing methods such as transfer-based projected gradient descent (T-PGD) and SimBA in terms of convergence time, flattening of the confused-class probability distribution, and producing adversarial samples with the least confidence on the true class.
- Score: 4.215251065887861
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the high-quality performance of deep neural networks in
real-world applications, they are susceptible to the minor perturbations of
adversarial attacks, which are mostly undetectable to human vision. The impact
of such attacks has become extremely detrimental in autonomous vehicles with
real-time "safety" concerns. Black-box adversarial attacks cause drastic
misclassification in critical scene elements such as road signs and traffic
lights, leading the autonomous vehicle to crash into other vehicles or
pedestrians. In this paper, we propose a novel query-based attack method called
Modified Simple black-box attack (M-SimBA) to overcome the use of a white-box
source in transfer-based attack methods. In addition, the issue of late
convergence in the Simple black-box attack (SimBA) is addressed by minimizing
the loss of the most confused class, i.e., the incorrect class predicted by the
model with the highest probability, instead of maximizing the loss of the
correct class. We evaluate the proposed approach on the German Traffic Sign
Recognition Benchmark (GTSRB) dataset. We show that the proposed model
outperforms existing methods such as transfer-based projected gradient descent
(T-PGD) and SimBA in terms of convergence time, flattening of the confused-class
probability distribution, and producing adversarial samples with the least
confidence on the true class.
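To make the core idea concrete, below is a minimal NumPy sketch of a SimBA-style query loop modified in the M-SimBA spirit: each candidate pixel step is kept when it raises the probability of the most confused class (equivalently, lowers that class's loss) rather than when it lowers the true-class probability. The `query_probs` callable, step size `eps`, pixel-basis search order, and the fixed choice of confused class are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def m_simba(query_probs, x, true_label, eps=0.2, max_iters=10000, seed=0):
    """Sketch of an M-SimBA-style attack. Assumes query_probs(x) returns the
    black-box model's class-probability vector for an image x in [0, 1]."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    probs = query_probs(x_adv)
    # Most confused class: the highest-probability *incorrect* class,
    # fixed here from the initial query for simplicity.
    order = np.argsort(probs)[::-1]
    confused = int(order[1]) if int(order[0]) == true_label else int(order[0])
    for idx in rng.permutation(x.size)[:max_iters]:
        for sign in (1.0, -1.0):
            cand = x_adv.copy()
            cand.flat[idx] = np.clip(cand.flat[idx] + sign * eps, 0.0, 1.0)
            cand_probs = query_probs(cand)
            # M-SimBA-style acceptance: keep the step if it raises the
            # confused class's probability (vanilla SimBA instead keeps
            # steps that lower the true class's probability).
            if cand_probs[confused] > probs[confused]:
                x_adv, probs = cand, cand_probs
                break
        if int(np.argmax(probs)) != true_label:
            break  # the model is already fooled
    return x_adv
```

Pushing the confused class's probability up directly is what the abstract credits for the faster convergence and the flatter confused-class probability distribution, compared with slowly eroding the true class.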
Related papers
- BruSLeAttack: A Query-Efficient Score-Based Black-Box Sparse Adversarial Attack [22.408968332454062]
We study the unique, less well-understood problem of generating sparse adversarial samples simply by observing the score-based replies to model queries.
We develop BruSLeAttack, a new, faster (more query-efficient) algorithm for the problem.
Our work facilitates faster evaluation of model vulnerabilities and raises our vigilance on the safety, security and reliability of deployed systems.
arXiv Detail & Related papers (2024-04-08T08:59:26Z)
- Defense Against Model Extraction Attacks on Recommender Systems [53.127820987326295]
We introduce Gradient-based Ranking Optimization (GRO) to defend against model extraction attacks on recommender systems.
GRO aims to minimize the loss of the protected target model while maximizing the loss of the attacker's surrogate model.
Results show GRO's superior effectiveness in defending against model extraction attacks.
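As a rough illustration of the stated min-max objective, here is a hedged PyTorch sketch; cross-entropy stands in for the paper's ranking-oriented loss, and the trade-off weight `lam` is an assumption, so this is a sketch of the idea rather than GRO's actual implementation.

```python
import torch.nn.functional as F

def gro_style_loss(target_logits, surrogate_logits, labels, lam=1.0):
    # Keep the protected target model accurate ...
    target_loss = F.cross_entropy(target_logits, labels)
    # ... while degrading any surrogate distilled from its outputs.
    surrogate_loss = F.cross_entropy(surrogate_logits, labels)
    # Descending on this combined objective minimizes the target's loss and
    # maximizes the surrogate's, matching the min-max description above.
    return target_loss - lam * surrogate_loss
```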
arXiv Detail & Related papers (2023-10-25T03:30:42Z)
- Query Efficient Cross-Dataset Transferable Black-Box Attack on Action Recognition [99.29804193431823]
Black-box adversarial attacks present a realistic threat to action recognition systems.
We propose a new attack on action recognition that addresses the shortcomings of prior black-box attacks by generating perturbations.
Our method achieves 8% and 12% higher deception rates than state-of-the-art query-based and transfer-based attacks, respectively.
arXiv Detail & Related papers (2022-11-23T17:47:49Z)
- T-SEA: Transfer-based Self-Ensemble Attack on Object Detection [9.794192858806905]
We propose a single-model transfer-based black-box attack on object detection, utilizing only one model to achieve a high-transferability adversarial attack on multiple black-box detectors.
We analogize patch optimization with regular model optimization, proposing a series of self-ensemble approaches on the input data, the attacked model, and the adversarial patch.
arXiv Detail & Related papers (2022-11-16T10:27:06Z)
- Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations [54.1807206010136]
Transferable adversarial attacks optimize adversaries from a pretrained surrogate model and known label space to fool the unknown black-box models.
We propose Adversarial Pixel Restoration as a self-supervised alternative to train an effective surrogate model from scratch.
Our training approach is based on a min-max objective which reduces overfitting via an adversarial objective.
arXiv Detail & Related papers (2022-07-18T17:59:58Z)
- Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation [123.33816363589506]
We show the existence of a training-free adversarial perturbation under the no-box threat model.
Motivated by our observation that the high-frequency component (HFC) dominates in low-level features, we attack an image mainly by manipulating its frequency components.
Our method is even competitive to mainstream transfer-based black-box attacks.
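To illustrate what manipulating frequency components can look like, below is a minimal NumPy sketch that amplifies the high-frequency band of a grayscale image via the 2-D FFT. This is a generic illustration, not the paper's hybrid image transformation; the `cutoff` radius and `gain` factor are assumed parameters.

```python
import numpy as np

def boost_high_frequencies(img, cutoff=0.25, gain=1.5):
    """Amplify the high-frequency components of a grayscale image in [0, 1]."""
    spec = np.fft.fftshift(np.fft.fft2(img))  # center the zero frequency
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    # Boolean mask of frequencies outside the central (low-frequency) disk.
    dist = np.hypot(yy - h / 2, xx - w / 2)
    high = dist > cutoff * min(h, w) / 2
    spec[high] *= gain  # emphasize high-frequency content
    out = np.fft.ifft2(np.fft.ifftshift(spec)).real
    return np.clip(out, 0.0, 1.0)
```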
arXiv Detail & Related papers (2022-03-09T09:51:00Z)
- Query Efficient Decision Based Sparse Attacks Against Black-Box Deep Learning Models [9.93052896330371]
We develop an evolution-based algorithm, SparseEvo, for the problem and evaluate it against both convolutional deep neural networks and vision transformers.
SparseEvo requires significantly fewer model queries than the state-of-the-art sparse attack Pointwise for both untargeted and targeted attacks.
Importantly, the query efficient SparseEvo, along with decision-based attacks, in general raise new questions regarding the safety of deployed systems.
arXiv Detail & Related papers (2022-01-31T21:10:47Z)
- Local Black-box Adversarial Attacks: A Query Efficient Approach [64.98246858117476]
Adversarial attacks have threatened the application of deep neural networks in security-sensitive scenarios.
We propose a novel framework to perturb the discriminative areas of clean examples only within limited queries in black-box attacks.
We conduct extensive experiments to show that our framework can significantly improve query efficiency during black-box perturbation while maintaining a high attack success rate.
arXiv Detail & Related papers (2021-01-04T15:32:16Z)
- Learning Black-Box Attackers with Transferable Priors and Query Feedback [40.41083684665537]
This paper addresses the challenging black-box adversarial attack problem, where only classification confidence of a victim model is available.
Inspired by the consistency of visual saliency between different vision models, a surrogate model is expected to improve the attack performance via transferability.
We propose a surprisingly simple baseline approach (named SimBA++) using the surrogate model, which significantly outperforms several state-of-the-art methods.
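The following hedged PyTorch sketch shows one way to combine a surrogate prior with query feedback in the SimBA++ spirit; the function names, step size, and acceptance rule are illustrative assumptions rather than the authors' exact algorithm.

```python
import torch
import torch.nn.functional as F

def surrogate_guided_step(query_probs, surrogate, x_adv, p_true, true_label,
                          eps=0.05):
    """One attack step: x_adv is a CHW image tensor in [0, 1], p_true the
    victim model's current true-class probability for x_adv."""
    # Transferable prior: the surrogate's gradient sign picks the direction.
    x = x_adv.clone().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x.unsqueeze(0)),
                           torch.tensor([true_label]))
    loss.backward()
    cand = (x_adv + eps * x.grad.sign()).clamp(0.0, 1.0)
    # Query feedback: one black-box query decides whether to keep the step.
    p_cand = query_probs(cand)[true_label]
    if p_cand < p_true:
        return cand, p_cand   # confirmed progress against the victim model
    return x_adv, p_true      # otherwise discard, e.g. fall back to SimBA
```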
arXiv Detail & Related papers (2020-10-21T05:43:11Z)
- Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacking against real-world API demonstrate the superior attack performance of the proposed method.
arXiv Detail & Related papers (2020-06-15T16:45:27Z)