Improving Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator
- URL: http://arxiv.org/abs/2109.05820v1
- Date: Mon, 13 Sep 2021 09:43:17 GMT
- Title: Improving Robustness of Adversarial Attacks Using an Affine-Invariant Gradient Estimator
- Authors: Wenzhao Xiang, Hang Su, Chang Liu, Yandong Guo, Shibao Zheng
- Abstract summary: Adversarial examples can deceive a deep neural network (DNN) by significantly altering its response with imperceptible perturbations.
Most existing adversarial examples lose their malicious functionality when an affine transformation is applied to them.
We propose an affine-invariant adversarial attack that consistently constructs adversarial examples robust over a distribution of affine transformations.
- Score: 15.863109283735625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples can deceive a deep neural network (DNN) by
significantly altering its response with imperceptible perturbations, which
poses new potential vulnerabilities as DNNs become increasingly ubiquitous.
However, most existing adversarial examples cannot maintain their malicious
functionality when an affine transformation is applied to them, and robustness
to such transformations is an important measure of the practical risk posed by
adversarial attacks. To address this issue, we propose an affine-invariant
adversarial attack that consistently constructs adversarial examples robust
over a distribution of affine transformations. To further improve efficiency,
we disentangle the affine transformation into rotations, translations, and
magnifications, and reformulate the transformation in polar space. We then
construct an affine-invariant gradient estimator by convolving the gradient at
the original image with derived kernels, which can be integrated with any
gradient-based attack method. Extensive experiments on ImageNet demonstrate
that our method consistently produces adversarial examples that remain robust
under significant affine transformations and, as a byproduct, improves the
transferability of adversarial examples compared with state-of-the-art
alternatives.
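To make the kernel-smoothed gradient idea concrete, below is a minimal sketch (assuming PyTorch) of a single sign-gradient step in which the input gradient is convolved with a smoothing kernel before the update. The Gaussian kernel here is only an illustrative stand-in for the paper's kernels derived from the distribution over rotations, translations, and magnifications in polar space; the function names are hypothetical.

```python
# Minimal sketch of a kernel-smoothed gradient step, assuming PyTorch.
# The Gaussian kernel is an illustrative stand-in for the paper's derived
# affine-invariant kernels; names and defaults are hypothetical.
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int = 7, sigma: float = 3.0) -> torch.Tensor:
    """Return a (1, 1, size, size) Gaussian kernel used to smooth the gradient."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
    g = torch.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

def smoothed_sign_step(model, x, y, eps, kernel):
    """One FGSM-style step where the input gradient is convolved with a
    smoothing kernel before the sign update; the paper's estimator would
    plug its derived kernels in here instead of the Gaussian."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]           # shape (B, C, H, W)
    c = grad.shape[1]
    k = kernel.to(grad.device).repeat(c, 1, 1, 1)    # depthwise kernel
    grad = F.conv2d(grad, k, padding=kernel.shape[-1] // 2, groups=c)
    x_adv = x + eps * grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Because the smoothing is applied to the gradient rather than to the image, the same drop-in replacement can be used inside iterative or momentum-based attacks, consistent with the abstract's claim that the estimator integrates with any gradient-based attack method.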
Related papers
- Improving the Transferability of Adversarial Examples by Feature Augmentation [6.600860987969305]
We propose a simple but effective feature augmentation attack (FAUG) method, which improves adversarial transferability without introducing extra computational cost.
Specifically, we inject random noise into the intermediate features of the model to increase the diversity of the attack gradient.
Our method can be combined with existing gradient attacks to augment their performance further.
arXiv Detail & Related papers (2024-07-09T09:41:40Z)
- TranSegPGD: Improving Transferability of Adversarial Examples on Semantic Segmentation [62.954089681629206]
We propose an effective two-stage adversarial attack strategy to improve the transferability of adversarial examples on semantic segmentation.
The proposed adversarial attack method can achieve state-of-the-art performance.
arXiv Detail & Related papers (2023-12-03T00:48:33Z)
- Boosting Adversarial Transferability by Achieving Flat Local Maxima [23.91315978193527]
Recently, various adversarial attacks have emerged to boost adversarial transferability from different perspectives.
In this work, we assume and empirically validate that adversarial examples at a flat local region tend to have good transferability.
We propose an approximation optimization method to simplify the gradient update of the objective function.
arXiv Detail & Related papers (2023-06-08T14:21:02Z)
- Fuzziness-tuned: Improving the Transferability of Adversarial Examples [18.880398046794138]
Adversarial examples have been widely used to enhance the robustness of deep neural networks during training.
The attack success rate of transfer-based attacks on the surrogate model is much higher than that on the victim model under low attack strength.
A fuzziness-tuned method is proposed to ensure the generated adversarial examples can effectively escape the fuzzy domain.
arXiv Detail & Related papers (2023-03-17T16:00:18Z)
- Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples [89.85593878754571]
The transferability of adversarial examples across deep neural networks is the crux of many black-box attacks.
We advocate attacking a Bayesian model to achieve desirable transferability.
Our method outperforms recent state-of-the-art methods by large margins.
arXiv Detail & Related papers (2023-02-10T07:08:13Z)
- Improving Adversarial Transferability with Scheduled Step Size and Dual Example [33.00528131208799]
We show that the transferability of adversarial examples generated by the iterative fast gradient sign method decreases as the number of iterations increases.
We propose a novel strategy, which uses the Scheduled step size and the Dual example (SD) to fully utilize the adversarial information near the benign sample.
Our proposed strategy can be easily integrated with existing adversarial attack methods for better adversarial transferability.
arXiv Detail & Related papers (2023-01-30T15:13:46Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness assumes a framework for defending against samples crafted by minimally perturbing a clean input.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- Comment on Transferability and Input Transformation with Additive Noise [6.168976174718275]
We analyze the relationship between transferability and input transformation with additive noise.
By adding small perturbations to a benign example, adversarial attacks successfully generate adversarial examples that lead to misclassification by deep learning models.
arXiv Detail & Related papers (2022-06-18T00:52:27Z)
- Adaptive Perturbation for Adversarial Attack [50.77612889697216]
We propose a new gradient-based attack method for adversarial examples.
We use the exact gradient direction with a scaling factor for generating adversarial perturbations.
Our method exhibits higher transferability and outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-11-27T07:57:41Z)
- Boosting Adversarial Transferability through Enhanced Momentum [50.248076722464184]
Deep learning models are vulnerable to adversarial examples crafted by adding human-imperceptible perturbations on benign images.
Various momentum iterative gradient-based methods are shown to be effective to improve the adversarial transferability.
We propose an enhanced momentum iterative gradient-based method to further improve adversarial transferability (a minimal sketch of the underlying momentum update appears after this list).
arXiv Detail & Related papers (2021-03-19T03:10:32Z)
- Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples.
arXiv Detail & Related papers (2021-01-23T07:55:02Z)
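Several of the entries above build on the momentum iterative gradient-based baseline referenced by the enhanced-momentum paper. For orientation, here is a minimal MI-FGSM-style loop in PyTorch; it is the generic baseline update, not that paper's enhanced variant, and the function and argument names are hypothetical.

```python
# Minimal sketch of the momentum iterative gradient-based baseline
# (MI-FGSM style). This is the generic update only; the enhanced-momentum
# paper above proposes a refinement of it. Names/defaults are hypothetical.
import torch
import torch.nn.functional as F

def momentum_iterative_attack(model, x, y, eps=8 / 255, steps=10, mu=1.0):
    x = x.clone().detach()
    alpha = eps / steps                  # per-step budget
    g = torch.zeros_like(x)              # accumulated momentum
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Normalize the gradient by its mean absolute value before
        # accumulating it into the momentum term.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps),
                            0.0, 1.0)
    return x_adv.detach()
```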
This list is automatically generated from the titles and abstracts of the papers on this site.