Composite Adversarial Attacks
- URL: http://arxiv.org/abs/2012.05434v1
- Date: Thu, 10 Dec 2020 03:21:16 GMT
- Title: Composite Adversarial Attacks
- Authors: Xiaofeng Mao, Yuefeng Chen, Shuhui Wang, Hang Su, Yuan He, Hui Xue
- Abstract summary: Adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
- Score: 57.293211764569996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attack is a technique for deceiving Machine Learning (ML) models,
which provides a way to evaluate adversarial robustness. In practice,
attack algorithms are selected and tuned by human experts to break
an ML system. However, manual selection of attackers tends to be sub-optimal,
leading to a mistaken assessment of model security. In this paper, a new
procedure called Composite Adversarial Attack (CAA) is proposed for
automatically searching the best combination of attack algorithms and their
hyper-parameters from a candidate pool of \textbf{32 base attackers}. We design
a search space where an attack policy is represented as an attacking sequence,
i.e., the output of the previous attacker is used as the initialization input
for successors. The multi-objective NSGA-II genetic algorithm is adopted to
find the strongest attack policy with minimum complexity. Experimental
results show that CAA beats 10 top attackers on 11 diverse defenses in less
elapsed time (\textbf{6 $\times$ faster than AutoAttack}), and achieves a new
state-of-the-art on $l_{\infty}$, $l_{2}$ and unrestricted adversarial attacks.
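The attacking-sequence idea can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: the `BaseAttacker` interface, its `run` signature, and the two objective functions are assumptions, and the NSGA-II search loop itself is omitted.

```python
# Sketch of a composite attack policy as an attacking sequence (CAA-style).
# `BaseAttacker` and its `run` signature are hypothetical placeholders.
import torch


class BaseAttacker:
    """Hypothetical base-attacker interface; a real attacker (e.g. PGD, C&W)
    would perturb the input within its norm budget."""

    def __init__(self, name, **hyperparams):
        self.name = name
        self.hyperparams = hyperparams

    def run(self, model, x, y, x_init=None):
        # Placeholder: a real attacker would start from `x_init` when given
        # and return adversarial examples; here we just pass inputs through.
        return x_init if x_init is not None else x.clone()


def composite_attack(policy, model, x, y):
    """Apply a sequence of attackers; each attacker is initialized with the
    output of its predecessor, as described in the abstract."""
    x_adv = x.clone()
    for attacker in policy:
        x_adv = attacker.run(model, x, y, x_init=x_adv)
    return x_adv


def policy_objectives(policy, model, loader):
    """Two objectives a multi-objective search could trade off:
    robust accuracy under the policy (lower = stronger attack) and
    policy complexity (here, simply the sequence length)."""
    errors, total = 0, 0
    for x, y in loader:
        x_adv = composite_attack(policy, model, x, y)
        errors += (model(x_adv).argmax(dim=1) != y).sum().item()
        total += y.numel()
    robust_accuracy = 1.0 - errors / total
    complexity = len(policy)
    return robust_accuracy, complexity  # both to be minimized
```

A genetic search such as NSGA-II would then evolve candidate sequences and keep the Pareto front trading off these two objectives, which mirrors the "strongest attack policy with minimum complexity" criterion stated in the abstract.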
Related papers
- DeltaBound Attack: Efficient decision-based attack in low queries regime [0.4061135251278187]
Deep neural networks and other machine learning systems are vulnerable to adversarial attacks.
We propose a novel, powerful attack in the hard-label setting with $\ell$-norm bounded perturbations.
We find that the DeltaBound attack performs as well as, and sometimes better than, current state-of-the-art attacks.
arXiv Detail & Related papers (2022-10-01T14:45:18Z) - Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z) - Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z) - Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack [96.50202709922698]
A practical evaluation method should be convenient (i.e., parameter-free), efficient (i.e., fewer iterations) and reliable.
We propose a parameter-free Adaptive Auto Attack (A$^3$) evaluation method which addresses the efficiency and reliability in a test-time-training fashion.
arXiv Detail & Related papers (2022-03-10T04:53:54Z) - Generative Dynamic Patch Attack [6.1863763890100065]
We propose an end-to-end patch attack algorithm, Generative Dynamic Patch Attack (GDPA).
GDPA generates both patch pattern and patch location adversarially for each input image.
Experiments on VGGFace, Traffic Sign and ImageNet show that GDPA achieves higher attack success rates than state-of-the-art patch attacks.
arXiv Detail & Related papers (2021-11-08T04:15:34Z) - Adversarial Attacks on Gaussian Process Bandits [47.84198626686564]
We propose various adversarial attack methods with differing assumptions on the attacker's strength and prior information.
Our goal is to understand adversarial attacks on GP bandits from both a theoretical and practical perspective.
We demonstrate that adversarial attacks on GP bandits can succeed in forcing the algorithm towards $\mathcal{R}_{\rm target}$ even with a low attack budget.
arXiv Detail & Related papers (2021-10-16T02:39:10Z) - PDPGD: Primal-Dual Proximal Gradient Descent Adversarial Attack [92.94132883915876]
State-of-the-art deep neural networks are sensitive to small input perturbations.
Many defence methods have been proposed that attempt to improve robustness to adversarial noise.
However, evaluating adversarial robustness has proven to be extremely challenging.
arXiv Detail & Related papers (2021-06-03T01:45:48Z) - Action-Manipulation Attacks Against Stochastic Bandits: Attacks and Defense [45.408568528354216]
We introduce a new class of attack named action-manipulation attack.
In this attack, an adversary can change the action signal selected by the user.
To defend against this class of attacks, we introduce a novel algorithm that is robust to action-manipulation attacks.
arXiv Detail & Related papers (2020-02-19T04:09:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.