Generalizing Adversarial Examples by AdaBelief Optimizer
- URL: http://arxiv.org/abs/2101.09930v1
- Date: Mon, 25 Jan 2021 07:39:16 GMT
- Title: Generalizing Adversarial Examples by AdaBelief Optimizer
- Authors: Yixiang Wang, Jiqiang Liu, Xiaolin Chang
- Abstract summary: We propose an AdaBelief iterative Fast Gradient Sign Method to generalize adversarial examples.
Compared with state-of-the-art attack methods, our proposed method can generate adversarial examples effectively in the white-box setting.
The transfer rate is 7%-21% higher than that of the latest attack methods.
- Score: 6.243028964381449
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has proved that deep neural networks (DNNs) are vulnerable to
adversarial examples: legitimate inputs combined with imperceptible, well-designed
perturbations can easily fool DNNs at test time. However, most existing adversarial
attacks struggle to fool adversarially trained models. To address this issue, we propose
the AdaBelief Iterative Fast Gradient Sign Method (AB-FGSM) to generalize adversarial
examples. By integrating the AdaBelief optimization algorithm into I-FGSM, we expect the
generalization of adversarial examples to improve, relying on the strong generalization
ability of the AdaBelief optimizer. To validate the effectiveness and transferability of
the adversarial examples generated by AB-FGSM, we conduct white-box and black-box attacks
on various single models and ensemble models. Compared with state-of-the-art attack
methods, our proposed method generates adversarial examples effectively in the white-box
setting, and its transfer rate is 7%-21% higher than that of the latest attack methods.
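The abstract describes AB-FGSM only at a high level: run I-FGSM, but feed the loss gradient through AdaBelief's moment estimates before each sign step. For reference, the published AdaBelief rule (Zhuang et al., 2020) keeps a first moment m_t = b1*m_{t-1} + (1-b1)*g_t and a "belief" s_t = b2*s_{t-1} + (1-b2)*(g_t - m_t)^2 + eps, then steps along the bias-corrected m_t / (sqrt(s_t) + eps). The sketch below illustrates that combination under assumed defaults (uniform step size eps/steps, images in [0,1], standard beta values); it is not the authors' released implementation, and `model`, `x`, `y` are placeholders.

```python
import torch
import torch.nn.functional as F

def ab_fgsm(model, x, y, eps=8/255, steps=10, beta1=0.9, beta2=0.999, delta=1e-8):
    """Illustrative AdaBelief-style iterative FGSM (a sketch, not the paper's code).

    Each step feeds the loss gradient through AdaBelief moment estimates
    (first moment m and a "belief" s tracking (g - m)^2), then takes an
    I-FGSM-style sign step, projected back into the L-infinity eps-ball.
    """
    alpha = eps / steps                      # per-step size
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)                  # first moment of the gradient
    s = torch.zeros_like(x)                  # belief: variance of g around m

    for t in range(1, steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        g = torch.autograd.grad(loss, x_adv)[0]

        # AdaBelief moment updates with bias correction
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * (g - m) ** 2 + delta
        m_hat = m / (1 - beta1 ** t)
        s_hat = s / (1 - beta2 ** t)
        update = m_hat / (s_hat.sqrt() + delta)

        # I-FGSM-style step on the adapted gradient, then project and clip
        x_adv = x_adv.detach() + alpha * update.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = x_adv.clamp(0, 1)            # assumes pixel values in [0, 1]

    return x_adv.detach()
```

Compared with plain I-FGSM, the only change in this sketch is that the sign is taken of the AdaBelief-adapted gradient rather than the raw gradient; the abstract attributes the improved generalization of the resulting adversarial examples to this optimizer.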
Related papers
- Efficient Generation of Targeted and Transferable Adversarial Examples for Vision-Language Models Via Diffusion Models [17.958154849014576]
Adversarial attacks can be used to assess the robustness of large vision-language models (VLMs).
Previous transfer-based adversarial attacks incur high costs due to high iteration counts and complex method structure.
We propose AdvDiffVLM, which uses diffusion models to generate natural, unrestricted and targeted adversarial examples.
arXiv Detail & Related papers (2024-04-16T07:19:52Z)
- Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples [89.85593878754571]
The transferability of adversarial examples across deep neural networks is the crux of many black-box attacks.
We advocate attacking a Bayesian model to achieve desirable transferability.
Our method outperforms recent state-of-the-art approaches by large margins.
arXiv Detail & Related papers (2023-02-10T07:08:13Z)
- Hessian-Free Second-Order Adversarial Examples for Adversarial Learning [6.835470949075655]
Adversarial learning with elaborately designed adversarial examples is one of the most effective methods to defend against such an attack.
Most existing adversarial examples generation methods are based on first-order gradients, which can hardly further improve models' robustness.
We propose an approximation method that transforms the problem into an optimization in the Krylov subspace, which remarkably reduces the computational complexity and speeds up the training procedure (a generic Hessian-vector-product sketch, the primitive underlying such Krylov-subspace methods, appears after this list).
arXiv Detail & Related papers (2022-07-04T13:29:27Z)
- Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training, which injects adversarial examples into model training, is proved to be the most effective defense strategy.
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining.
arXiv Detail & Related papers (2022-06-08T07:40:55Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Towards Defending against Adversarial Examples via Attack-Invariant Features [147.85346057241605]
Deep neural networks (DNNs) are vulnerable to adversarial noise.
Adversarial robustness can be improved by exploiting adversarial examples.
Models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.
arXiv Detail & Related papers (2021-06-09T12:49:54Z)
- Direction-Aggregated Attack for Transferable Adversarial Examples [10.208465711975242]
A deep neural network is vulnerable to adversarial examples crafted by imposing imperceptible changes to the inputs.
Adversarial examples are most successful in white-box settings where the model and its parameters are available.
We propose the Direction-Aggregated adversarial attacks that deliver transferable adversarial examples.
arXiv Detail & Related papers (2021-04-19T09:54:56Z)
- Adversarial example generation with AdaBelief Optimizer and Crop Invariance [8.404340557720436]
Adversarial attacks can be an important method to evaluate and select robust models in safety-critical applications.
We propose AdaBelief Iterative Fast Gradient Method (ABI-FGM) and Crop-Invariant attack Method (CIM) to improve the transferability of adversarial examples.
Our method has higher success rates than state-of-the-art gradient-based attack methods.
arXiv Detail & Related papers (2021-02-07T06:00:36Z)
- Yet Another Intermediate-Level Attack [31.055720988792416]
The transferability of adversarial examples across deep neural network (DNN) models is the crux of a spectrum of black-box attacks.
We propose a novel method to enhance the black-box transferability of baseline adversarial examples.
arXiv Detail & Related papers (2020-08-20T09:14:04Z)
- Adversarial Example Games [51.92698856933169]
Adversarial Example Games (AEG) is a framework that models the crafting of adversarial examples.
AEG provides a new way to design adversarial examples by adversarially training a generator and a classifier from a given hypothesis class.
We demonstrate the efficacy of AEG on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2020-07-01T19:47:23Z)
- Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
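The Hessian-Free Second-Order entry above only names its Krylov-subspace approximation. The primitive such methods are built on is the Hessian-vector product, which double backpropagation computes without ever materializing the Hessian. The sketch below is a generic illustration of that primitive, not code from the cited paper; the toy model and variable names are hypothetical.

```python
import torch

def hessian_vector_product(loss, params, vec):
    """Compute H @ vec for the Hessian H of `loss` w.r.t. `params`
    without forming H (Pearlmutter's double-backprop trick).
    `vec` is a flat 1-D tensor matching the total parameter count."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    grad_dot_vec = torch.dot(flat_grad, vec)         # scalar g^T v
    hvp = torch.autograd.grad(grad_dot_vec, params)  # d(g^T v)/dtheta = H v
    return torch.cat([h.reshape(-1) for h in hvp])

# Illustrative usage on a toy classifier (names are hypothetical):
model = torch.nn.Linear(4, 2)
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
loss = torch.nn.functional.cross_entropy(model(x), y)
params = list(model.parameters())
v = torch.randn(sum(p.numel() for p in params))
print(hessian_vector_product(loss, params, v).shape)  # torch.Size([10])
```

Products like this can be fed to a conjugate-gradient or other Krylov-subspace solver to use curvature information while only ever paying for extra backward passes, which is the general idea the cited entry refers to.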
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.