Attackar: Attack of the Evolutionary Adversary
- URL: http://arxiv.org/abs/2208.08297v1
- Date: Wed, 17 Aug 2022 13:57:23 GMT
- Title: Attackar: Attack of the Evolutionary Adversary
- Authors: Raz Lapid, Zvika Haramaty, Moshe Sipper
- Abstract summary: This paper introduces Attackar, an evolutionary, score-based, black-box attack.
Attackar is based on a novel objective function that can be used in gradient-free optimization problems.
Our results demonstrate the superior performance of Attackar, both in terms of accuracy score and query efficiency.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are sensitive to adversarial data in a variety of
scenarios, including the black-box scenario, where the attacker is only allowed
to query the trained model and receive an output. Existing black-box methods
for creating adversarial instances are costly, often using gradient estimation
or training a replacement network. This paper introduces Attackar, an
evolutionary, score-based, black-box attack. Attackar is based on a novel
objective function that can be used in gradient-free optimization problems. The
attack only requires access to the output logits of the classifier and is thus
not affected by gradient masking. No additional information is needed,
rendering our method more suitable to real-life situations. We test its
performance with three different state-of-the-art models -- Inception-v3,
ResNet-50, and VGG-16-BN -- against three benchmark datasets: MNIST, CIFAR-10,
and ImageNet. Furthermore, we evaluate Attackar's performance on
non-differentiable transformation defenses and state-of-the-art robust models.
Our results demonstrate the superior performance of Attackar, both in terms of
accuracy score and query efficiency.
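To make the setting concrete, here is a minimal sketch of what an evolutionary, score-based, black-box attack loop can look like. This is not Attackar's algorithm or its novel objective function: the toy linear victim, the margin fitness, and all hyperparameters below are assumptions for illustration only; the only property shared with the paper's setting is that the attacker sees nothing but output logits.

```python
import numpy as np

# Toy stand-in for the black-box victim: a random linear "classifier" over
# flattened 28x28 inputs. Score-based black-box access means the attacker may
# only call this function and read the returned logits.
_rng = np.random.default_rng(1)
_W = _rng.normal(size=(10, 28 * 28))

def query_logits(images: np.ndarray) -> np.ndarray:
    return images.reshape(len(images), -1) @ _W.T

def fitness(logits: np.ndarray, true_label: int) -> np.ndarray:
    # Margin between the true-class logit and the best competing logit;
    # a candidate is adversarial once this drops below zero.
    true = logits[:, true_label].copy()
    others = logits.copy()
    others[:, true_label] = -np.inf
    return true - others.max(axis=1)

def evolve_attack(x, true_label, eps=0.05, pop_size=20, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    # Population of L-infinity-bounded perturbations around the clean input x.
    pop = rng.uniform(-eps, eps, size=(pop_size,) + x.shape)
    for _ in range(generations):
        candidates = np.clip(x[None] + pop, 0.0, 1.0)
        scores = fitness(query_logits(candidates), true_label)
        if scores.min() < 0:                              # success: model fooled
            return candidates[scores.argmin()]
        elite = pop[np.argsort(scores)[: pop_size // 2]]  # keep the best half
        mutants = elite + rng.normal(0.0, eps / 10, size=elite.shape)
        pop = np.clip(np.concatenate([elite, mutants]), -eps, eps)
    return None  # query budget exhausted without success

adv = evolve_attack(np.zeros((28, 28)), true_label=3)
```

Because selection needs only relative fitness values, such a loop is unaffected by gradient masking, which matches the robustness argument made in the abstract.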
Related papers
- Hard Label Black Box Node Injection Attack on Graph Neural Networks [7.176182084359572]
We propose a non-targeted hard-label black-box node injection attack on graph neural networks.
Our attack builds on an existing edge perturbation attack, restricting its optimization process to formulate a node injection attack.
We evaluate the performance of the attack on three datasets.
arXiv Detail & Related papers (2023-11-22T09:02:04Z)
- Defense Against Model Extraction Attacks on Recommender Systems [53.127820987326295]
We introduce Gradient-based Ranking Optimization (GRO) to defend against model extraction attacks on recommender systems.
GRO aims to minimize the loss of the protected target model while maximizing the loss of the attacker's surrogate model.
Results show GRO's superior effectiveness in defending against model extraction attacks.
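The min-max objective described above can be sketched in one line. This is only the general shape of such a defense, not GRO's actual formulation (GRO operates on rankings via gradient-based optimization; its details are in the paper), and `lam` plus both loss tensors are placeholders:

```python
import torch

def gro_style_loss(target_loss: torch.Tensor,
                   surrogate_loss: torch.Tensor,
                   lam: float = 1.0) -> torch.Tensor:
    # Keep the protected recommender accurate (minimize its loss) while
    # degrading whatever the attacker's surrogate learns from its outputs
    # (maximize the surrogate's loss). `lam` trades utility for protection.
    return target_loss - lam * surrogate_loss
```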
arXiv Detail & Related papers (2023-10-25T03:30:42Z)
- Microbial Genetic Algorithm-based Black-box Attack against Interpretable Deep Learning Systems [16.13790238416691]
In white-box environments, interpretable deep learning systems (IDLSes) have been shown to be vulnerable to malicious manipulations.
We propose QuScore, a query-efficient, score-based black-box attack against IDLSes that is driven by a microbial genetic algorithm and requires no knowledge of the target model or its coupled interpretation model.
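The "microbial" genetic algorithm named in the title is a minimalist steady-state GA: two random individuals fight a tournament, and the loser is partly overwritten by the winner and then mutated. A generic sketch follows; the encoding, fitness, and rates are placeholders, not QuScore's actual components:

```python
import numpy as np

def microbial_ga(fitness, genome_len, pop_size=30, steps=1000,
                 cross_rate=0.5, mut_rate=0.02, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, genome_len))
    for _ in range(steps):
        i, j = rng.choice(pop_size, size=2, replace=False)
        # Tournament: the loser of the pair is partly overwritten ...
        win, lose = (i, j) if fitness(pop[i]) >= fitness(pop[j]) else (j, i)
        cross = rng.random(genome_len) < cross_rate
        pop[lose, cross] = pop[win, cross]
        # ... and then mutated, which preserves population diversity.
        mut = rng.random(genome_len) < mut_rate
        pop[lose, mut] = rng.random(int(mut.sum()))
    return max(pop, key=fitness)

# Toy usage: maximize closeness to the all-0.5 genome.
best = microbial_ga(lambda g: -float(np.sum((g - 0.5) ** 2)), genome_len=16)
```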
arXiv Detail & Related papers (2023-07-13T00:08:52Z)
- Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
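At heart, a model parsing network is a supervised classifier from adversarial inputs to victim-model attributes. The sketch below is a hypothetical minimal architecture; the layer sizes and the three attribute heads (e.g. architecture family, activation, kernel size) are invented for illustration and are not the paper's MPN:

```python
import torch
import torch.nn as nn

class ModelParsingNet(nn.Module):
    """Map an adversarial perturbation to per-attribute logits."""
    def __init__(self, in_dim: int, n_arch: int, n_act: int, n_ks: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(),
                                      nn.Linear(in_dim, 256), nn.ReLU())
        # One classification head per victim-model attribute.
        self.heads = nn.ModuleList(
            nn.Linear(256, n) for n in (n_arch, n_act, n_ks))

    def forward(self, perturbation: torch.Tensor) -> list[torch.Tensor]:
        h = self.backbone(perturbation)
        return [head(h) for head in self.heads]

mpn = ModelParsingNet(in_dim=3 * 32 * 32, n_arch=5, n_act=3, n_ks=2)
```

Training such a parser is ordinary multi-head supervised learning, with one cross-entropy loss per attribute summed over the heads.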
arXiv Detail & Related papers (2023-03-13T21:21:49Z)
- Query Efficient Cross-Dataset Transferable Black-Box Attack on Action Recognition [99.29804193431823]
Black-box adversarial attacks present a realistic threat to action recognition systems.
We propose a new attack on action recognition that addresses these shortcomings by generating query-efficient, transferable perturbations.
Our method achieves 8% and 12% higher deception rates than state-of-the-art query-based and transfer-based attacks, respectively.
arXiv Detail & Related papers (2022-11-23T17:47:49Z)
- How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective [74.47093382436823]
We address the problem of black-box defense: How to robustify a black-box model using just input queries and output feedback?
We propose a general notion of defensive operation that can be applied to black-box models, and design it through the lens of denoised smoothing (DS).
Our resulting method, ZO-AE-DS, integrates an autoencoder with DS to make zeroth-order optimization tractable.
We empirically show that ZO-AE-DS can achieve improved accuracy, certified robustness, and query complexity over existing baselines.
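The "zeroth-order" part refers to estimating gradients purely from function queries. A minimal randomized finite-difference estimator, the generic building block behind such defenses, might look as follows; `f`, `mu`, and the query budget are illustrative assumptions, and ZO-AE-DS's contribution of routing this through an autoencoder is not shown:

```python
import numpy as np

def zo_gradient(f, x: np.ndarray, n_queries: int = 20,
                mu: float = 1e-3, seed: int = 0) -> np.ndarray:
    # Randomized finite differences: probe f along random directions u and
    # average the u-weighted slope estimates. Needs only function values,
    # never autograd, which is the point in a black-box setting.
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x)
    fx = f(x)
    for _ in range(n_queries):
        u = rng.normal(size=x.shape)
        grad += (f(x + mu * u) - fx) / mu * u
    return grad / n_queries

# Toy check: for f(v) = sum(v^2), the true gradient at x is 2x.
g = zo_gradient(lambda v: float(np.sum(v ** 2)), np.ones(4))
```

The query cost of such estimators grows with input dimension, which is exactly why ZO-AE-DS compresses the problem through an autoencoder.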
arXiv Detail & Related papers (2022-03-27T03:23:32Z)
- Art-Attack: Black-Box Adversarial Attack via Evolutionary Art [5.760976250387322]
Deep neural networks (DNNs) have achieved state-of-the-art performance on many tasks but are extremely vulnerable to adversarial examples.
This paper proposes a gradient-free attack that uses concepts from evolutionary art to generate adversarial examples.
arXiv Detail & Related papers (2022-03-07T12:54:09Z)
- Detect and Defense Against Adversarial Examples in Deep Learning using Natural Scene Statistics and Adaptive Denoising [12.378017309516965]
We propose a framework for defending DNNs against adversarial examples (AEs).
The detector aims to detect AEs by characterizing them through natural scene statistics.
The proposed method outperforms the state-of-the-art defense techniques.
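As a toy illustration of the detection idea (assuming grayscale inputs and two invented summary statistics; real NSS features such as MSCN coefficients are far richer, and the paper's detector and thresholds differ):

```python
import numpy as np

def toy_nss_features(img: np.ndarray) -> np.ndarray:
    # Two crude "scene statistics" of a grayscale image: mean and spread of
    # gradient magnitude. Adversarial noise tends to disturb such statistics.
    gx, gy = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    return np.array([mag.mean(), mag.std()])

def is_adversarial(img, clean_mean, clean_cov_inv, threshold=9.0) -> bool:
    # Flag inputs whose features lie far (squared Mahalanobis distance) from
    # statistics fitted on a held-out set of clean images.
    d = toy_nss_features(img) - clean_mean
    return float(d @ clean_cov_inv @ d) > threshold
```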
arXiv Detail & Related papers (2021-07-12T23:45:44Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
We formulate the attack as a binary integer programming (BIP) problem and, by utilizing the latest technique in integer programming, equivalently reformulate it as a continuous optimization problem.
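A standard way to trade a binary program for a continuous one, shown purely as an illustration of the pattern (the paper's own reformulation may differ in its constraints and penalty), is to relax each bit to the box [0,1] and add a penalty that vanishes only at binary points:

```latex
\min_{\mathbf{b}\in\{0,1\}^{n}} \mathcal{L}(\mathbf{b})
\quad\longrightarrow\quad
\min_{\mathbf{b}\in[0,1]^{n}} \mathcal{L}(\mathbf{b})
  \;+\; \lambda \sum_{i=1}^{n} b_i\,(1-b_i)
```

For sufficiently large lambda the penalty forces minimizers toward binary points, so continuous solvers can be applied to the weight-bit selection.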
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- Improving Query Efficiency of Black-box Adversarial Attack [75.71530208862319]
We propose a Neural Process based black-box adversarial attack (NP-Attack).
NP-Attack could greatly decrease the query counts under the black-box setting.
arXiv Detail & Related papers (2020-09-24T06:22:56Z)
- Luring of transferable adversarial perturbations in the black-box paradigm [0.0]
We present a new approach to improve the robustness of a model against black-box transfer attacks.
A removable additional neural network is included in the target model, and is designed to induce the luring effect.
Our deception-based method only needs to have access to the predictions of the target model and does not require a labeled data set.
arXiv Detail & Related papers (2020-04-10T06:48:36Z)