Backpropagating Linearly Improves Transferability of Adversarial
Examples
- URL: http://arxiv.org/abs/2012.03528v1
- Date: Mon, 7 Dec 2020 08:40:56 GMT
- Title: Backpropagating Linearly Improves Transferability of Adversarial
Examples
- Authors: Yiwen Guo, Qizhang Li, Hao Chen
- Abstract summary: Vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community.
In this paper, we study the transferability of such examples, which lays the foundation of many black-box attacks on DNNs.
We introduce linear backpropagation (LinBP), a method that performs backpropagation in a more linear fashion using off-the-shelf attacks that exploit gradients.
- Score: 31.808770437120536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The vulnerability of deep neural networks (DNNs) to adversarial examples has
drawn great attention from the community. In this paper, we study the
transferability of such examples, which lays the foundation of many black-box
attacks on DNNs. We revisit a not-so-new but noteworthy hypothesis of
Goodfellow et al. and show that transferability can be enhanced by
improving the linearity of DNNs in an appropriate manner. We introduce linear
backpropagation (LinBP), a method that performs backpropagation in a more
linear fashion using off-the-shelf attacks that exploit gradients. More
specifically, it calculates forward as normal but backpropagates loss as if
some nonlinear activations are not encountered in the forward pass.
Experimental results demonstrate that this simple yet effective method
clearly outperforms the current state of the art in crafting transferable
adversarial examples on CIFAR-10 and ImageNet, leading to more effective
attacks on a variety of DNNs.
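
To make the mechanism concrete, the sketch below shows one way the "forward as normal, backpropagate as if linear" idea can be expressed for ReLU activations in PyTorch. It is a minimal illustration of the core idea under the assumption of a ReLU-based surrogate model, not the authors' reference implementation; the names LinBPReLU and linbp_relu are hypothetical.

```python
import torch


class LinBPReLU(torch.autograd.Function):
    """ReLU computed normally in the forward pass but treated as the
    identity (i.e., linear) during backpropagation. Illustrative sketch
    of the linear-backpropagation idea, not the authors' reference code."""

    @staticmethod
    def forward(ctx, x):
        # Forward pass: standard ReLU.
        return x.clamp(min=0.0)

    @staticmethod
    def backward(ctx, grad_output):
        # Backward pass: skip the ReLU mask and pass the gradient through
        # unchanged, as if the nonlinearity had not been encountered.
        return grad_output


def linbp_relu(x):
    # Drop-in replacement for torch.relu when computing attack gradients
    # on a white-box surrogate model.
    return LinBPReLU.apply(x)
```

In use, such an activation would stand in for (some of) the surrogate model's ReLUs only when computing the gradients consumed by an off-the-shelf attack such as I-FGSM or PGD; because the forward computation is identical to a standard ReLU, the model's predictions are unchanged.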
Related papers
- Forward Learning of Graph Neural Networks [17.79590285482424]
Backpropagation (BP) is the de facto standard for training deep neural networks (NNs).
BP imposes several constraints, which are not only biologically implausible, but also limit the scalability, parallelism, and flexibility in learning NNs.
We propose ForwardGNN, which avoids the constraints imposed by BP via an effective layer-wise local forward training.
arXiv Detail & Related papers (2024-03-16T19:40:35Z)
- Towards Evaluating Transfer-based Attacks Systematically, Practically, and Fairly [79.07074710460012]
The adversarial vulnerability of deep neural networks (DNNs) has drawn great attention.
An increasing number of transfer-based methods have been developed to fool black-box DNN models.
We establish a transfer-based attack benchmark (TA-Bench) which implements 30+ methods.
arXiv Detail & Related papers (2023-11-02T15:35:58Z)
- Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples [89.85593878754571]
The transferability of adversarial examples across deep neural networks is the crux of many black-box attacks.
We advocate attacking a Bayesian model to achieve desirable transferability.
Our method outperforms recent state-of-the-art methods by large margins.
arXiv Detail & Related papers (2023-02-10T07:08:13Z)
- What Does the Gradient Tell When Attacking the Graph Structure [44.44204591087092]
We present a theoretical demonstration revealing that attackers tend to increase inter-class edges due to the message passing mechanism of GNNs.
By connecting dissimilar nodes, attackers can more effectively corrupt node features, making such attacks more advantageous.
We propose an innovative attack loss that balances attack effectiveness and imperceptibility, sacrificing some attack effectiveness to attain greater imperceptibility.
arXiv Detail & Related papers (2022-08-26T15:45:20Z)
- Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness [172.61581010141978]
Certifiable robustness is a desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios.
We propose a novel solution to strategically manipulate neurons, by "grafting" appropriate levels of linearity.
arXiv Detail & Related papers (2022-06-15T22:42:29Z)
- Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training, which injects adversarial examples into model training, has proven to be the most effective strategy.
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining.
arXiv Detail & Related papers (2022-06-08T07:40:55Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
- Yet Another Intermediate-Level Attack [31.055720988792416]
The transferability of adversarial examples across deep neural network (DNN) models is the crux of a spectrum of black-box attacks.
We propose a novel method to enhance the black-box transferability of baseline adversarial examples.
arXiv Detail & Related papers (2020-08-20T09:14:04Z)
- Network Moments: Extensions and Sparse-Smooth Attacks [59.24080620535988]
We derive exact analytic expressions for the first and second moments of a small piecewise linear (PL) network (Affine, ReLU, Affine) subject to Gaussian input.
We show that the new variance expression can be efficiently approximated leading to much tighter variance estimates.
arXiv Detail & Related papers (2020-06-21T11:36:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.