TEAM: An Taylor Expansion-Based Method for Generating Adversarial
Examples
- URL: http://arxiv.org/abs/2001.08389v2
- Date: Wed, 25 Mar 2020 15:08:20 GMT
- Title: TEAM: An Taylor Expansion-Based Method for Generating Adversarial
Examples
- Authors: Ya-guan Qian, Xi-Ming Zhang, Wassim Swaileh, Li Wei, Bin Wang,
Jian-Hai Chen, Wu-Jie Zhou, and Jing-Sheng Lei
- Abstract summary: Deep Neural Networks (DNNs) have achieved successful applications in many fields.
Adversarial training is one of the most effective methods to improve the robustness of DNNs.
With powerful adversarial examples, DNNs can be effectively regularized and the defects of the model can be improved.
- Score: 20.589548370628535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although Deep Neural Networks (DNNs) have achieved successful applications in
many fields, they are vulnerable to adversarial examples. Adversarial training
is one of the most effective methods to improve the robustness of DNNs, and it
is generally formulated as a saddle point problem that minimizes risk while
maximizing perturbation. Therefore, powerful adversarial examples can
effectively realize the perturbation-maximization step of the saddle point
problem. The method proposed in this paper approximates the output of DNNs in
the input neighborhood with a Taylor expansion and then optimizes it with the
Lagrange multiplier method to generate adversarial examples. When used for
adversarial training, it effectively regularizes the DNN and mitigates the
model's defects (a minimal sketch of the idea follows below).
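At its simplest, the idea can be sketched at first order: approximate the loss as L(x + d) ~ L(x) + g.d and maximize it under the L2 constraint ||d||_2 = eps. The Lagrangian g.d - lam * (||d||^2 - eps^2) is stationary at d* = eps * g / ||g||_2. The PyTorch sketch below shows only this lowest-order case; the paper's actual formulation expands the network output to higher order, and `model`, `eps`, and the cross-entropy loss are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def taylor_lagrange_example(model, x, y, eps=0.5):
    """Minimal first-order sketch: approximate the loss by its Taylor
    expansion L(x + d) ~ L(x) + g.d and maximize it subject to
    ||d||_2 = eps via a Lagrange multiplier, which gives the closed
    form d* = eps * g / ||g||_2. TEAM itself uses a richer expansion
    of the network output; this is only the lowest-order case."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (g,) = torch.autograd.grad(loss, x)
    # Per-example L2 norm of the gradient, broadcast back to g's shape.
    g_norm = g.flatten(1).norm(dim=1).clamp_min(1e-12)
    d = eps * g / g_norm.view(-1, *([1] * (g.dim() - 1)))
    return (x + d).detach()
```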
Related papers
- QUCE: The Minimisation and Quantification of Path-Based Uncertainty for Generative Counterfactual Explanations [1.649938899766112]
Quantified Uncertainty Counterfactual Explanations (QUCE) is a method designed to minimize path uncertainty.
We show that QUCE quantifies uncertainty when presenting explanations and generates more certain counterfactual examples.
We showcase the performance of the QUCE method by comparing it with competing methods for both path-based explanations and generative counterfactual examples.
arXiv Detail & Related papers (2024-02-27T14:00:08Z)
- Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples [89.85593878754571]
The transferability of adversarial examples across deep neural networks is the crux of many black-box attacks.
We advocate attacking a Bayesian model to achieve desirable transferability (a gradient-averaging sketch follows this entry).
Our method outperforms recent state-of-the-art approaches by large margins.
arXiv Detail & Related papers (2023-02-10T07:08:13Z)
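A crude stand-in for the Bayesian idea is to average input gradients over several weight-perturbed copies of the substitute model, as if sampling from an isotropic Gaussian posterior around the trained weights. The paper's posterior construction is more principled; `n_samples` and `weight_std` below are illustrative knobs, not the paper's settings.

```python
import copy
import torch
import torch.nn.functional as F

def posterior_averaged_grad(model, x, y, n_samples=5, weight_std=0.01):
    """Average the input gradient over weight-noised model copies,
    approximating an expectation over a (hypothetical) weight posterior.
    The result can be fed into FGSM/PGD for transfer attacks."""
    x = x.clone().detach().requires_grad_(True)
    avg_grad = torch.zeros_like(x)
    for _ in range(n_samples):
        m = copy.deepcopy(model)
        with torch.no_grad():
            for p in m.parameters():
                p.add_(weight_std * torch.randn_like(p))  # sample weights
        loss = F.cross_entropy(m(x), y)
        (g,) = torch.autograd.grad(loss, x)
        avg_grad += g
    return avg_grad / n_samples
```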
- Hessian-Free Second-Order Adversarial Examples for Adversarial Learning [6.835470949075655]
Adversarial learning with elaborately designed adversarial examples is one of the most effective methods to defend against such attacks.
Most existing adversarial example generation methods are based on first-order gradients, which can hardly improve models' robustness further.
We propose an approximation method that transforms the problem into an optimization in the Krylov subspace, which remarkably reduces the computational complexity and speeds up the training procedure (a Hessian-vector-product sketch follows this entry).
arXiv Detail & Related papers (2022-07-04T13:29:27Z)
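The primitive that Krylov-subspace methods build on is the Hessian-vector product, computable by double backpropagation without ever forming the Hessian. This is a generic sketch of that trick, not the paper's exact solver; `loss_fn` is assumed to map an input tensor to a scalar loss.

```python
import torch

def hessian_vector_product(loss_fn, x, v):
    """Compute H @ v via double backprop: differentiate the loss once
    with create_graph=True, then differentiate the inner product g.v.
    The full Hessian is never materialized ('Hessian-free')."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(x)
    (g,) = torch.autograd.grad(loss, x, create_graph=True)
    (hv,) = torch.autograd.grad((g * v).sum(), x)
    return hv
```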
- Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training has proven to be the most effective strategy for injecting adversarial examples into model training.
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining.
arXiv Detail & Related papers (2022-06-08T07:40:55Z)
- On the Convergence and Robustness of Adversarial Training [134.25999006326916]
Adversarial training with Projected Gradient Descent (PGD) is amongst the most effective.
We propose a dynamic training strategy to increase the convergence quality of the generated adversarial examples (a textbook PGD sketch follows this entry).
Our theoretical and empirical results show the effectiveness of the proposed method.
arXiv Detail & Related papers (2021-12-15T17:54:08Z)
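For reference, the baseline loop the paper's dynamic schedule sits on top of is standard L_inf PGD: random start, repeated signed-gradient ascent, and projection back into the eps-ball. The sketch below is the textbook loop, not the paper's schedule; `eps`, `alpha`, and `steps` are the usual illustrative defaults.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Textbook L_inf PGD: perturb within a random start, ascend along
    the gradient sign, and project back into the eps-ball and the
    valid pixel range after every step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        (g,) = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * g.sign()
        # Project onto the eps-ball around x, then onto [0, 1].
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv.detach()
```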
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), that aims to generate a sequence of adversarial examples (a generic HMC sketch follows this entry).
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
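The generic machinery here is HMC over the perturbation: treat delta as the position and U(delta) = -loss(x + delta) as the potential, so the chain drifts toward high-loss regions and yields a sequence of adversarial examples. The leapfrog sketch below omits the Metropolis correction and is not HMCAM's accumulated-momentum variant; `loss_fn`, `step`, and `n_leapfrog` are illustrative assumptions.

```python
import torch

def hmc_perturbation_step(loss_fn, x, delta, step=0.01, n_leapfrog=5):
    """One leapfrog HMC proposal over the perturbation delta, with
    potential U(delta) = -loss(x + delta) so high-loss (adversarial)
    regions have low energy. loss_fn(x + delta) must be a scalar."""
    def grad_U(d):
        d = d.clone().detach().requires_grad_(True)
        (g,) = torch.autograd.grad(-loss_fn(x + d), d)
        return g

    d = delta.clone().detach()
    p = torch.randn_like(d)             # fresh momentum each proposal
    p = p - 0.5 * step * grad_U(d)      # initial half momentum step
    for i in range(n_leapfrog):
        d = d + step * p                # full position step
        if i < n_leapfrog - 1:
            p = p - step * grad_U(d)    # full momentum step
    p = p - 0.5 * step * grad_U(d)      # final half step (for acceptance)
    return d.detach()
```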
- TEAM: We Need More Powerful Adversarial Examples for DNNs [6.7943676146532885]
Adversarial examples can lead to misclassification of deep neural networks (DNNs).
We propose a novel method to generate more powerful adversarial examples than previous methods.
Our method can reliably produce adversarial examples with a 100% attack success rate (ASR) using only smaller perturbations.
arXiv Detail & Related papers (2020-07-31T04:11:02Z)
- Towards an Efficient and General Framework of Robust Training for Graph Neural Networks [96.93500886136532]
Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks.
Despite GNNs' impressive performance, it has been observed that carefully crafted perturbations on graph structures lead them to make wrong predictions.
We propose a general framework that leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs (a zeroth-order gradient sketch follows this entry).
arXiv Detail & Related papers (2020-02-25T15:17:58Z)
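The zeroth-order primitive such frameworks rely on when the graph objective is non-differentiable is a gradient estimate from loss queries alone. The sketch below is the generic two-point estimator over random directions, not the paper's exact scheme; `mu` and `n_samples` are illustrative.

```python
import torch

def zeroth_order_grad(loss_fn, theta, mu=1e-3, n_samples=20):
    """Two-point zeroth-order gradient estimate: average of
    (L(theta + mu*u) - L(theta - mu*u)) / (2*mu) * u over random
    unit directions u. Needs only loss evaluations, no gradients."""
    est = torch.zeros_like(theta)
    for _ in range(n_samples):
        u = torch.randn_like(theta)
        u = u / u.norm().clamp_min(1e-12)
        diff = loss_fn(theta + mu * u) - loss_fn(theta - mu * u)
        est += (diff / (2 * mu)) * u
    return est / n_samples
```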
- Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.