Adversarial Attack via Dual-Stage Network Erosion
- URL: http://arxiv.org/abs/2201.00097v1
- Date: Sat, 1 Jan 2022 02:38:09 GMT
- Title: Adversarial Attack via Dual-Stage Network Erosion
- Authors: Yexin Duan, Junhua Zou, Xingyu Zhou, Wu Zhang, Jin Zhang, Zhisong Pan
- Abstract summary: Deep neural networks are vulnerable to adversarial examples, which can fool deep models by adding subtle perturbations.
This paper proposes to improve the transferability of adversarial examples by applying dual-stage feature-level perturbations to an existing model to implicitly create a set of diverse models.
We conduct comprehensive experiments on both non-residual and residual networks, and obtain more transferable adversarial examples with a computational cost similar to that of the state-of-the-art method.
- Score: 7.28871533402894
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks are vulnerable to adversarial examples, which can fool
deep models by adding subtle perturbations. Although existing attacks have
achieved promising results, there is still a long way to go in generating
transferable adversarial examples under the black-box setting. To this end,
this paper proposes to improve the transferability of adversarial examples by
applying dual-stage feature-level perturbations to an existing model to
implicitly create a set of diverse models. Then these models are fused by the
longitudinal ensemble during the iterations. The proposed method is termed
Dual-Stage Network Erosion (DSNE). We conduct comprehensive experiments on both
non-residual and residual networks, and obtain more transferable adversarial
examples with a computational cost similar to that of the state-of-the-art method. In
particular, for the residual networks, the transferability of the adversarial
examples can be significantly improved by biasing the residual block
information to the skip connections. Our work provides new insights into the
architectural vulnerability of neural networks and presents new challenges to
the robustness of neural networks.
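To make the skip-connection bias described above more concrete, here is a minimal, hedged sketch (not the authors' released code): it shrinks the residual branch of every BasicBlock in a torchvision ResNet-18 surrogate by a factor gamma < 1, so the skip connection carries relatively more of each block's output, and then runs a plain iterative L-infinity attack on the modified model. The ResNet-18 surrogate, the single uniform gamma, and the attack hyper-parameters are assumptions for illustration; the paper's dual-stage feature-level perturbations and longitudinal ensemble are not reproduced here.

```python
# Illustrative sketch only: bias ResNet residual blocks toward their skip
# connections (gamma < 1 shrinks the residual branch), then craft adversarial
# examples on the modified surrogate with a basic iterative L_inf attack.
# The ResNet-18 surrogate, gamma, and the attack hyper-parameters are
# assumptions; DSNE's dual-stage perturbations and longitudinal ensemble are
# not reproduced here.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18
from torchvision.models.resnet import BasicBlock


def bias_toward_skip(model, gamma=0.8):
    """Rescale the residual branch of every BasicBlock by `gamma`."""
    def make_forward(block):
        def forward(x):
            identity = x
            out = block.conv1(x)
            out = block.bn1(out)
            out = block.relu(out)
            out = block.conv2(out)
            out = block.bn2(out)
            if block.downsample is not None:
                identity = block.downsample(x)
            # gamma < 1 shifts the block's output toward the skip connection.
            return block.relu(gamma * out + identity)
        return forward

    for m in model.modules():
        if isinstance(m, BasicBlock):
            m.forward = make_forward(m)
    return model


def iterative_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Plain iterative L_inf attack against the (eroded) surrogate model."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv


if __name__ == "__main__":
    surrogate = bias_toward_skip(resnet18(weights=None).eval(), gamma=0.8)
    x = torch.rand(1, 3, 224, 224)   # dummy image in [0, 1]
    y = torch.tensor([0])            # dummy label
    x_adv = iterative_attack(surrogate, x, y)
    print((x_adv - x).abs().max())   # perturbation stays within eps
```

As the abstract describes, the method itself varies the surrogate's perturbation across iterations so that successive attack steps are effectively taken against an ensemble of eroded models; the fixed gamma above is only a static stand-in for that behavior.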
Related papers
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks [15.55615069378845]
Recent work has claimed that adversarial pruning methods can produce sparse networks while also preserving robustness to adversarial examples.
In this work, we first re-evaluate three state-of-the-art adversarial pruning methods, showing that their robustness was indeed overestimated.
We conclude by discussing how these findings may inform the design of more effective adversarial pruning methods in future work.
arXiv Detail & Related papers (2023-10-12T06:50:43Z)
- Pruning in the Face of Adversaries [0.0]
We evaluate the impact of neural network pruning on the adversarial robustness against L-0, L-2 and L-infinity attacks.
Our results confirm that neural network pruning and adversarial robustness are not mutually exclusive.
We extend our analysis to situations that incorporate additional assumptions on the adversarial scenario and show that depending on the situation, different strategies are optimal.
arXiv Detail & Related papers (2021-08-19T09:06:16Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge that limits the widespread adoption of deep learning is its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting [71.57324258813674]
Convolutional neural networks (CNNs) have been shown to reach super-human performance in visual recognition tasks.
CNNs can easily be fooled by adversarial examples, i.e., maliciously-crafted images that force the networks to predict an incorrect output.
This paper extensively explores the detection of adversarial examples via image transformations and proposes a novel methodology.
arXiv Detail & Related papers (2021-01-27T14:50:41Z)
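As a rough, generic illustration of the transformation-and-voting idea summarized in the detection entry above (not the paper's actual pipeline), the sketch below classifies a few mildly transformed copies of an input and flags the input as adversarial when those predictions disagree with the prediction on the original image. The particular transformations, the agreement threshold, and the helper name `vote_detector` are assumptions.

```python
# Illustrative sketch only (not the paper's detector): flag an input as
# adversarial when the model's predictions on mildly transformed copies
# disagree with its prediction on the original image. The transformation set
# and the threshold are arbitrary choices for demonstration.
import torch
import torchvision.transforms.functional as TF


def vote_detector(model, x, threshold=0.5):
    """Return a boolean mask over the batch `x` (N x 3 x H x W)."""
    model.eval()
    transforms = [
        lambda t: TF.hflip(t),
        lambda t: TF.rotate(t, angle=5.0),
        lambda t: TF.gaussian_blur(t, kernel_size=3),
        lambda t: TF.adjust_brightness(t, brightness_factor=1.1),
    ]
    with torch.no_grad():
        base_pred = model(x).argmax(dim=1)
        agree = torch.zeros(x.size(0))
        for transform in transforms:
            pred = model(transform(x)).argmax(dim=1)
            agree += (pred == base_pred).float()
    # Clean inputs tend to keep their label under mild transformations;
    # low agreement is treated as a sign of adversarial manipulation.
    return (agree / len(transforms)) < threshold
```

A practical detector would calibrate the threshold on held-out clean and adversarial data and, as the entry's title suggests, could additionally apply defense perturbations before voting.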
- On the Transferability of Adversarial Attacks against Neural Text Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
arXiv Detail & Related papers (2020-11-17T10:45:05Z)
- Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks [33.61168219427157]
Convolutional and recurrent neural networks have been widely employed to achieve state-of-the-art performance on classification tasks.
It has also been noted that these networks can be manipulated adversarially with relative ease, by carefully crafted additive perturbations to the input.
We provide theoretical guarantees on the existence of adversarial examples and robustness margins of the network to such examples.
arXiv Detail & Related papers (2020-09-07T03:37:03Z)
- Improving Adversarial Robustness by Enforcing Local and Global Compactness [19.8818435601131]
Adversarial training is the most successful method that consistently resists a wide range of attacks.
We propose the Adversary Divergence Reduction Network, which enforces local/global compactness and the clustering assumption.
The experimental results demonstrate that augmenting adversarial training with our proposed components can further improve the robustness of the network.
arXiv Detail & Related papers (2020-07-10T00:43:06Z)
- Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
arXiv Detail & Related papers (2020-04-30T19:12:50Z)
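To give the mode connectivity entry above a concrete flavor, the deliberately simplified sketch below probes robust accuracy at points along a straight line between the weights of two trained models with identical architecture. The mode connectivity literature learns a low-loss curve rather than using linear interpolation, so this is only a crude stand-in; `model_a`, `model_b`, `attack`, and `loader` are assumed to be supplied by the caller (the iterative attack sketched earlier would serve as `attack`).

```python
# Simplified sketch: evaluate robust accuracy at points theta(t) on the
# straight line between two models' weights. Real mode connectivity analysis
# uses a learned low-loss curve instead of linear interpolation.
import copy
import torch


def interpolate_state(sd_a, sd_b, t):
    """theta(t) = (1 - t) * theta_a + t * theta_b for floating-point entries;
    non-float buffers (e.g. BatchNorm step counters) are taken from sd_a."""
    return {
        k: v if not torch.is_floating_point(v) else (1.0 - t) * v + t * sd_b[k]
        for k, v in sd_a.items()
    }


def robust_accuracy_along_path(model_a, model_b, attack, loader, steps=11):
    """`attack(model, x, y)` must return adversarial inputs for the batch."""
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    probe = copy.deepcopy(model_a)
    results = []
    for i in range(steps):
        t = i / (steps - 1)
        probe.load_state_dict(interpolate_state(sd_a, sd_b, t))
        probe.eval()
        correct, total = 0, 0
        for x, y in loader:
            x_adv = attack(probe, x, y)
            with torch.no_grad():
                correct += (probe(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
        results.append((t, correct / total))
    return results
```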