Automated Discovery of Adaptive Attacks on Adversarial Defenses
- URL: http://arxiv.org/abs/2102.11860v1
- Date: Tue, 23 Feb 2021 18:43:24 GMT
- Title: Automated Discovery of Adaptive Attacks on Adversarial Defenses
- Authors: Chengyuan Yao, Pavol Bielik, Petar Tsankov, Martin Vechev
- Abstract summary: We present a framework that automatically discovers an effective attack on a given model with an unknown defense.
We show it outperforms AutoAttack, the current state-of-the-art tool for reliable evaluation of adversarial defenses.
- Score: 14.633898825111826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reliable evaluation of adversarial defenses is a challenging task, currently
limited to an expert who manually crafts attacks that exploit the defense's
inner workings, or to approaches based on an ensemble of fixed attacks, none of
which may be effective for the specific defense at hand. Our key observation is
that custom attacks are composed from a set of reusable building blocks, such
as fine-tuning relevant attack parameters, network transformations, and custom
loss functions. Based on this observation, we present an extensible framework
that defines a search space over these reusable building blocks and
automatically discovers an effective attack on a given model with an unknown
defense by searching over suitable combinations of these blocks. We evaluated
our framework on 23 adversarial defenses and showed it outperforms AutoAttack,
the current state-of-the-art tool for reliable evaluation of adversarial
defenses: our discovered attacks are either stronger, producing 3.0%-50.8%
additional adversarial examples (10 cases), or are typically 2x faster while
enjoying similar adversarial robustness (13 cases).
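The abstract's core mechanism, composing an attack from reusable building blocks and searching over their combinations, can be illustrated with a minimal random-search loop. Everything below (the candidate block lists, the search strategy, all names) is an illustrative assumption, not the authors' actual search space or implementation:

```python
# Minimal sketch: compose an attack from candidate building blocks and keep
# the combination that drives robust accuracy lowest. All blocks here are
# hypothetical placeholders.
import itertools
import random

LOSSES = ["cross_entropy", "margin", "dlr"]      # candidate loss functions
TRANSFORMS = ["none", "bypass_preprocessing"]    # candidate network transformations
STEP_SIZES = [0.1, 0.01, 0.001]                  # attack parameters to fine-tune
RESTARTS = [1, 5]

def discover_attack(evaluate, budget=20, seed=0):
    """`evaluate` maps one (loss, transform, step_size, restarts) tuple to the
    robust accuracy the composed attack achieves; lower means stronger."""
    rng = random.Random(seed)
    space = list(itertools.product(LOSSES, TRANSFORMS, STEP_SIZES, RESTARTS))
    best, best_acc = None, float("inf")
    for candidate in rng.sample(space, min(budget, len(space))):
        acc = evaluate(candidate)
        if acc < best_acc:
            best, best_acc = candidate, acc
    return best, best_acc
```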
Related papers
- Position: Towards Resilience Against Adversarial Examples [42.09231029292568]
We provide a definition of adversarial resilience and outline considerations for designing an adversarially resilient defense.
We then introduce a subproblem of adversarial resilience which we call continual adaptive robustness.
We demonstrate the connection between continual adaptive robustness and previously studied problems of multiattack robustness and unforeseen attack robustness.
arXiv Detail & Related papers (2024-05-02T14:58:44Z)
- Improving the Robustness of Object Detection and Classification AI models against Adversarial Patch Attacks [2.963101656293054]
We analyze attack techniques and propose a robust defense approach.
We successfully reduce model confidence by over 20% using adversarial patch attacks that exploit object shape, texture and position.
Our inpainting defense approach significantly enhances model resilience, achieving high accuracy and reliable localization despite the adversarial attacks.
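A minimal sketch of an inpainting-style defense, assuming the patch region has already been localized by some detector (the localization step and the parameters below are assumptions, not the paper's exact pipeline):

```python
import cv2
import numpy as np

def inpaint_patch(image_bgr: np.ndarray, patch_mask: np.ndarray) -> np.ndarray:
    """Replace suspected adversarial-patch pixels with content reconstructed
    from the surrounding image.

    image_bgr:  HxWx3 uint8 image.
    patch_mask: HxW array, nonzero where a patch is suspected.
    """
    mask = (patch_mask > 0).astype(np.uint8) * 255
    # Telea inpainting fills the masked region from its boundary inward.
    return cv2.inpaint(image_bgr, mask, 5, cv2.INPAINT_TELEA)
```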
arXiv Detail & Related papers (2024-03-04T13:32:48Z)
- Closing the Gap: Achieving Better Accuracy-Robustness Tradeoffs against Query-Based Attacks [1.54994260281059]
We show how to efficiently establish, at test-time, a solid tradeoff between robustness and accuracy when mitigating query-based attacks.
Our approach is independent of training and supported by theory.
arXiv Detail & Related papers (2023-12-15T17:02:19Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals a threat in this practical scenario: backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
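For orientation only, a generic sketch of image-side backdoor poisoning for contrastive training data; the trigger, its placement, and the target caption are hypothetical, and far simpler than the dual-embedding-guided optimization this paper actually performs:

```python
import numpy as np

TARGET_CAPTION = "a photo of a banana"  # hypothetical attacker-chosen text

def poison_sample(image: np.ndarray, trigger: np.ndarray):
    """Paste a small trigger patch into the bottom-right corner and pair the
    image with the attacker's target caption."""
    poisoned = image.copy()
    th, tw = trigger.shape[:2]
    poisoned[-th:, -tw:] = trigger
    return poisoned, TARGET_CAPTION
```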
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning [10.368343314144553]
We provide a game-theoretic framework for ensemble adversarial attacks and defenses.
We propose three new attack algorithms, specifically designed to target defenses with randomized transformations, multi-model voting schemes, and adversarial detector architectures.
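A minimal sketch of the game-theoretic ingredient: given a payoff matrix of attacks versus defenses, fictitious play recovers mixed strategies for both sides. The 3x3 payoffs below are made up, and the paper's framework and solver are more involved:

```python
import numpy as np

def fictitious_play(payoff, iters=10_000):
    """payoff[i, j] = attacker's gain when attack i meets defense j
    (zero-sum: the defender loses what the attacker gains)."""
    n_att, n_def = payoff.shape
    att_counts, def_counts = np.zeros(n_att), np.zeros(n_def)
    att_counts[0] = def_counts[0] = 1.0
    for _ in range(iters):
        # Each player best-responds to the opponent's empirical mixture.
        att_counts[np.argmax(payoff @ def_counts)] += 1
        def_counts[np.argmin(att_counts @ payoff)] += 1
    return att_counts / att_counts.sum(), def_counts / def_counts.sum()

# Rows = attacks, columns = randomized defenses (illustrative numbers).
payoff = np.array([[0.6, 0.2, 0.4],
                   [0.3, 0.5, 0.1],
                   [0.2, 0.4, 0.5]])
attack_mix, defense_mix = fictitious_play(payoff)
```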
arXiv Detail & Related papers (2022-11-26T21:35:01Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
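A minimal sketch of the learned-optimizer idea: a recurrent cell maps the current input gradient to an update direction, replacing PGD's fixed sign step. The architecture, sizes, and step rule are illustrative assumptions:

```python
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    """Coordinate-wise LSTM that turns a gradient into an update direction."""
    def __init__(self, hidden=20):
        super().__init__()
        self.rnn = nn.LSTMCell(1, hidden)  # one coordinate per batch row
        self.out = nn.Linear(hidden, 1)

    def forward(self, grad, state=None):
        g = grad.reshape(-1, 1)
        h, c = self.rnn(g, state)
        return self.out(h).reshape(grad.shape), (h, c)

def attack_step(x_adv, x_clean, grad, opt, state, alpha=0.01, eps=8 / 255):
    step, state = opt(grad, state)
    x_adv = x_adv + alpha * torch.tanh(step)            # learned direction
    x_adv = torch.max(torch.min(x_adv, x_clean + eps),  # project back into
                      x_clean - eps).clamp(0, 1)        # the epsilon ball
    return x_adv.detach(), state
```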
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Adversarial Attack and Defense in Deep Ranking [100.17641539999055]
We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks.
Our adversarial ranking attacks and defenses are evaluated on MNIST, Fashion-MNIST, CUB200-2011, CARS196 and Stanford Online Products datasets.
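A minimal sketch of a rank-raising attack under common assumptions (white-box embedding model, L-infinity budget): perturb a candidate image so its embedding moves toward the query's, pushing the candidate up the retrieval ranking. The PGD loop below is generic, not the paper's exact attack:

```python
import torch

def raise_rank(model, candidate, query, eps=8 / 255, alpha=2 / 255, steps=10):
    """`model` maps an image batch to embeddings; a smaller distance to the
    query means a higher retrieval rank for the candidate."""
    with torch.no_grad():
        q_emb = model(query)                    # fixed query embedding
    x = candidate.clone()
    for _ in range(steps):
        x.requires_grad_(True)
        loss = torch.norm(model(x) - q_emb)     # distance to the query
        loss.backward()
        with torch.no_grad():
            x = x - alpha * x.grad.sign()       # descend: move closer
            x = candidate + (x - candidate).clamp(-eps, eps)
            x = x.clamp(0, 1)
    return x.detach()
```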
arXiv Detail & Related papers (2021-06-07T13:41:45Z)
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
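A sketch of a GAMA-style objective: a margin term on the adversarial input plus an L2 relaxation that ties its output to the clean input's function mapping, with the relaxation weight decayed over attack iterations. The exact margin formulation and schedule are simplified here:

```python
import torch

def gama_loss(logits_adv, probs_clean, y, lam):
    """Loss the attacker maximizes; `probs_clean` is softmax(model(x_clean))."""
    probs_adv = torch.softmax(logits_adv, dim=1)
    p_true = probs_adv.gather(1, y[:, None]).squeeze(1)
    masked = probs_adv.clone()
    masked.scatter_(1, y[:, None], float("-inf"))   # drop the true class
    margin = masked.max(dim=1).values - p_true      # > 0 once fooled
    relax = ((probs_adv - probs_clean) ** 2).sum(dim=1)
    return (margin + lam * relax).mean()

# lam typically starts high and decays linearly to 0 over the attack:
# lam_t = lam_0 * max(0.0, 1 - t / decay_steps)
```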
arXiv Detail & Related papers (2020-11-30T16:39:39Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
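A generic sketch of input-space purification: a denoiser is applied to every input before the classifier so that (unseen) adversarial noise is removed at test time. The purifier's architecture and training are placeholders, not the paper's exact self-supervised scheme:

```python
import torch.nn as nn

class PurifiedClassifier(nn.Module):
    """Wraps any classifier with an input-space purifier."""
    def __init__(self, purifier: nn.Module, classifier: nn.Module):
        super().__init__()
        self.purifier = purifier      # e.g., trained to undo perturbations
        self.classifier = classifier  # unchanged downstream model

    def forward(self, x):
        return self.classifier(self.purifier(x))
```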
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD attack that overcome failures due to suboptimal step size and problems of the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
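One of the concrete fixes this ensemble (AutoAttack) uses is the scale-invariant Difference-of-Logits-Ratio (DLR) loss, which avoids the gradient problems cross-entropy exhibits on some defenses. A sketch written from the paper's formula (the small constant in the denominator is an added numerical guard):

```python
import torch

def dlr_loss(logits, y):
    """DLR(x, y) = -(z_y - max_{i != y} z_i) / (z_pi1 - z_pi3),
    where z_pi1 >= z_pi2 >= ... are the sorted logits (needs >= 3 classes)."""
    z_sorted, _ = logits.sort(dim=1, descending=True)
    z_y = logits.gather(1, y[:, None]).squeeze(1)
    # Largest logit that is not the true class:
    max_other = torch.where(z_sorted[:, 0] == z_y, z_sorted[:, 1], z_sorted[:, 0])
    denom = z_sorted[:, 0] - z_sorted[:, 2] + 1e-12
    return -(z_y - max_other) / denom
```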
arXiv Detail & Related papers (2020-03-03T18:15:55Z)