Adversarial examples attack based on random warm restart mechanism and
improved Nesterov momentum
- URL: http://arxiv.org/abs/2105.05029v1
- Date: Mon, 10 May 2021 07:24:25 GMT
- Title: Adversarial examples attack based on random warm restart mechanism and
improved Nesterov momentum
- Authors: Tiangang Li
- Abstract summary: Some studies have pointed out that deep learning models are vulnerable to adversarial example attacks and make false decisions.
We propose the RWR-NM-PGD attack algorithm, based on a random warm restart mechanism and improved Nesterov momentum.
Our method achieves an average attack success rate of 46.3077%, which is 27.19% higher than I-FGSM and 9.27% higher than PGD.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning algorithms have achieved great success in the field of computer vision, but some studies have pointed out that deep learning models are vulnerable to adversarial example attacks and make false decisions. This challenges the further development of deep learning and urges researchers to pay more attention to the relationship between adversarial example attacks and deep learning security. This work focuses on adversarial examples and optimizes their generation from the view of adversarial robustness, taking the perturbations added to adversarial examples as the optimization parameter. We propose the RWR-NM-PGD attack algorithm, based on a random warm restart mechanism and improved Nesterov momentum, from the view of gradient optimization. The algorithm introduces improved Nesterov momentum, using its ability to accelerate convergence and improve the gradient update direction to speed up the generation of adversarial examples. In addition, a random warm restart mechanism is used during optimization, and projected gradient descent is used to limit the range of the generated perturbations within each warm restart, which yields a stronger attack. Experiments on two public datasets show that the proposed algorithm improves the success rate of attacks on deep learning models without extra time cost. Compared with the benchmark attack methods, the proposed algorithm achieves better attack success rates on both normally trained models and defense models. Our method has an average attack success rate of 46.3077%, which is 27.19% higher than I-FGSM and 9.27% higher than PGD. Attack results on 13 defense models show that the proposed attack algorithm is superior to the benchmark algorithms in attack universality and transferability.
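The abstract gives no pseudocode, so the following PyTorch sketch only illustrates the recipe it describes: projected gradient descent (PGD) steps driven by a Nesterov-momentum look-ahead, restarted several times from random points inside the perturbation ball, keeping the strongest restart per sample. The function name, the step size alpha, the momentum factor mu, the L1 gradient normalization, and the restart schedule are illustrative assumptions, not the authors' published RWR-NM-PGD implementation.

    import torch
    import torch.nn.functional as F


    def nm_pgd_random_warm_restarts(model, x, y, eps=8 / 255, alpha=2 / 255,
                                    steps_per_restart=10, restarts=5, mu=1.0):
        """Hypothetical L_inf attack sketch: Nesterov-momentum gradient steps,
        projected into the eps-ball after every update, re-initialized from a
        random point at each warm restart. x is an image batch (N, C, H, W)
        scaled to [0, 1]; the model is assumed to return logits."""
        model.eval()
        best_adv = x.clone()
        best_loss = torch.full((x.size(0),), -float("inf"), device=x.device)

        for _ in range(restarts):
            # Random warm restart: draw a fresh starting point inside the eps-ball.
            x_adv = torch.clamp(x + torch.empty_like(x).uniform_(-eps, eps), 0, 1)
            g = torch.zeros_like(x)  # accumulated momentum

            for _ in range(steps_per_restart):
                # Nesterov look-ahead: evaluate the gradient at the anticipated point.
                x_nes = torch.clamp(x_adv + alpha * mu * g, 0, 1).detach().requires_grad_(True)
                loss = F.cross_entropy(model(x_nes), y)
                grad = torch.autograd.grad(loss, x_nes)[0]

                # Momentum accumulation with an L1-normalized gradient (MI/NI-FGSM style).
                g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)

                # Signed gradient step, then PGD projection back into the eps-ball.
                x_adv = x_adv + alpha * g.sign()
                x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1).detach()

            # Keep, per sample, the restart that achieves the highest loss.
            with torch.no_grad():
                loss_now = F.cross_entropy(model(x_adv), y, reduction="none")
                better = loss_now > best_loss
                best_loss = torch.where(better, loss_now, best_loss)
                best_adv[better] = x_adv[better]

        return best_adv

In this reading, each warm restart re-samples the starting perturbation inside the eps-ball, the momentum look-ahead decides where the gradient is evaluated, and the projection after every step keeps the accumulated perturbation within the allowed range, matching the roles the abstract assigns to the three components.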
Related papers
- MARS: Unleashing the Power of Variance Reduction for Training Large Models [56.47014540413659]
Adaptive gradient algorithms like Adam, AdamW, and their variants have been central to the development of this type of training.
We propose a framework that reconciles preconditioned gradient optimization methods with variance reduction via a scaled momentum technique.
arXiv Detail & Related papers (2024-11-15T18:57:39Z) - GE-AdvGAN: Improving the transferability of adversarial samples by
gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data.
In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z) - Defense Against Model Extraction Attacks on Recommender Systems [53.127820987326295]
We introduce Gradient-based Ranking Optimization (GRO) to defend against model extraction attacks on recommender systems.
GRO aims to minimize the loss of the protected target model while maximizing the loss of the attacker's surrogate model.
Results show GRO's superior effectiveness in defending against model extraction attacks.
arXiv Detail & Related papers (2023-10-25T03:30:42Z) - Enhancing Adversarial Attacks: The Similar Target Method [6.293148047652131]
Deep neural networks are vulnerable to adversarial examples, posing a threat to the models' applications and raising security concerns.
We propose a similar targeted attack method named Similar Target (ST).
arXiv Detail & Related papers (2023-08-21T14:16:36Z) - Enhancing Adversarial Robustness via Score-Based Optimization [22.87882885963586]
Adversarial attacks have the potential to mislead deep neural network classifiers by introducing slight perturbations.
We introduce a novel adversarial defense scheme named ScoreOpt, which optimizes adversarial samples at test time.
Our experimental results demonstrate that our approach outperforms existing adversarial defenses in terms of both robustness and inference speed.
arXiv Detail & Related papers (2023-07-10T03:59:42Z) - A Multi-objective Memetic Algorithm for Auto Adversarial Attack
Optimization Design [1.9100854225243937]
Well-designed adversarial defense strategies can improve the robustness of deep learning models against adversarial examples.
Given the defended model, an efficient adversarial attack with less computational burden that drives robust accuracy lower still needs to be explored.
We propose a multi-objective memetic algorithm for auto adversarial attack optimization design, which realizes an automatic search for a near-optimal adversarial attack against defended models.
arXiv Detail & Related papers (2022-08-15T03:03:05Z) - Hessian-Free Second-Order Adversarial Examples for Adversarial Learning [6.835470949075655]
Adversarial learning with elaborately designed adversarial examples is one of the most effective methods to defend against such attacks.
Most existing adversarial example generation methods are based on first-order gradients, which can hardly further improve models' robustness.
We propose an approximation method that transforms the problem into an optimization in the Krylov subspace, which remarkably reduces the computational complexity and speeds up the training procedure.
arXiv Detail & Related papers (2022-07-04T13:29:27Z) - Revisiting and Advancing Fast Adversarial Training Through The Lens of
Bi-Level Optimization [60.72410937614299]
We propose a new tractable bi-level optimization problem, and design and analyze a new set of algorithms termed Fast Bi-level AT (FAST-BAT).
FAST-BAT is capable of defending against sign-based projected gradient descent (PGD) attacks without calling any gradient sign method or explicit robust regularization.
arXiv Detail & Related papers (2021-12-23T06:25:36Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine it with an additional l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity and the perturbation bound in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z)