Skip Connections Matter: On the Transferability of Adversarial Examples
Generated with ResNets
- URL: http://arxiv.org/abs/2002.05990v1
- Date: Fri, 14 Feb 2020 12:09:21 GMT
- Title: Skip Connections Matter: On the Transferability of Adversarial Examples
Generated with ResNets
- Authors: Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma
- Abstract summary: Skip connections are an essential component of current state-of-the-art deep neural networks (DNNs). Use of skip connections allows easier generation of highly transferable adversarial examples. We conduct comprehensive transfer attacks against state-of-the-art DNNs including ResNets, DenseNets, Inceptions, Inception-ResNet, Squeeze-and-Excitation Network (SENet), and robustly trained DNNs.
- Score: 83.12737997548645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Skip connections are an essential component of current state-of-the-art deep
neural networks (DNNs) such as ResNet, WideResNet, DenseNet, and ResNeXt.
Despite their huge success in building deeper and more powerful DNNs, we
identify a surprising security weakness of skip connections in this paper. Use
of skip connections allows easier generation of highly transferable adversarial
examples. Specifically, in ResNet-like (with skip connections) neural networks,
gradients can backpropagate through either skip connections or residual
modules. We find that using more gradients from the skip connections rather
than the residual modules according to a decay factor, allows one to craft
adversarial examples with high transferability. Our method is termed Skip
Gradient Method (SGM). We conduct comprehensive transfer attacks against
state-of-the-art DNNs including ResNets, DenseNets, Inceptions,
Inception-ResNet, Squeeze-and-Excitation Network (SENet) and robustly trained
DNNs. We show that employing SGM on the gradient flow can greatly improve the
transferability of crafted attacks in almost all cases. Furthermore, SGM can be
easily combined with existing black-box attack techniques, and obtain high
improvements over state-of-the-art transferability methods. Our findings not
only motivate new research into the architectural vulnerability of DNNs, but
also open up further challenges for the design of secure DNN architectures.
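The core idea above can be illustrated on a toy scalar "residual block" y = x + f(x): standard backprop returns the gradient 1 + f'(x), while SGM multiplies only the residual-module term f'(x) by a decay factor, leaving the skip-connection gradient intact. The sketch below is our own minimal illustration of that decomposition, not the paper's implementation; the names `gamma`, `sgm_grad`, and the linear choice f(x) = w * x are ours.

```python
# Toy illustration of the Skip Gradient Method (SGM) on a scalar
# residual block y = x + f(x), with the simple choice f(x) = w * x.
# Names (gamma, sgm_grad) and the linear f are illustrative assumptions.

def residual_grad(w: float) -> float:
    # Standard backprop through y = x + f(x): dy/dx = 1 + f'(x) = 1 + w.
    # The "1" is the gradient flowing through the skip connection.
    return 1.0 + w

def sgm_grad(w: float, gamma: float) -> float:
    # SGM scales only the residual-module gradient by a decay factor
    # gamma in (0, 1], keeping the skip-connection gradient untouched,
    # so the attack direction leans on the skip-connection gradient.
    return 1.0 + gamma * w

print(residual_grad(0.5))    # 1.5
print(sgm_grad(0.5, 0.2))    # 1.1
```

In a real ResNet, the same effect could be obtained by registering a backward hook on each residual branch that multiplies its incoming gradient by the decay factor before the addition with the skip path.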
Related papers
- Rethinking Residual Connection in Training Large-Scale Spiking Neural
Networks [10.286425749417216]
The Spiking Neural Network (SNN) is among the most well-known brain-inspired models.
Non-differentiable spiking mechanism makes it hard to train large-scale SNNs.
arXiv Detail & Related papers (2023-11-09T06:48:29Z)
- Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization [57.87950229651958]
Quantized neural networks (QNNs) have received increasing attention in resource-constrained scenarios due to their exceptional generalizability.
Previous studies claim that transferability is difficult to achieve across QNNs with different bitwidths.
We propose quantization aware attack (QAA), which fine-tunes a QNN substitute model with a multiple-bitwidth training objective.
arXiv Detail & Related papers (2023-05-10T03:46:53Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Deep Learning without Shortcuts: Shaping the Kernel with Tailored Rectifiers [83.74380713308605]
We develop a new type of transformation that is fully compatible with a variant of ReLUs -- Leaky ReLUs.
We show in experiments that our method, which introduces negligible extra computational cost, achieves validation accuracies with deep vanilla networks that are competitive with ResNets.
arXiv Detail & Related papers (2022-03-15T17:49:08Z)
- Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves a near-optimal resilience gain on multiple graphs while balancing the utility, with a large margin compared to existing approaches.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)
- Pruning of Deep Spiking Neural Networks through Gradient Rewiring [41.64961999525415]
Spiking Neural Networks (SNNs) have attracted great attention due to their biological plausibility and high energy efficiency on neuromorphic chips.
Most existing methods directly apply pruning approaches in artificial neural networks (ANNs) to SNNs, which ignore the difference between ANNs and SNNs.
We propose gradient rewiring (Grad R), a joint learning algorithm of connectivity and weight for SNNs, which enables us to seamlessly optimize network structure without retraining.
arXiv Detail & Related papers (2021-05-11T10:05:53Z)
- Deep Residual Learning in Spiking Neural Networks [36.16846259899793]
Spiking Neural Networks (SNNs) present optimization difficulties for gradient-based approaches.
Considering the huge success of ResNet in deep learning, it would be natural to train deep SNNs with residual learning.
We propose spike-element-wise (SEW) ResNet to realize residual learning in deep SNNs.
arXiv Detail & Related papers (2021-02-08T12:22:33Z)
- Skip-Connected Self-Recurrent Spiking Neural Networks with Joint Intrinsic Parameter and Synaptic Weight Training [14.992756670960008]
We propose a new type of RSNN called Skip-Connected Self-Recurrent SNNs (ScSr-SNNs).
ScSr-SNNs can boost performance by up to 2.55% compared with other types of RSNNs trained by state-of-the-art BP methods.
arXiv Detail & Related papers (2020-10-23T22:27:13Z)
- An Integrated Approach to Produce Robust Models with High Efficiency [9.476463361600828]
Quantization and structure simplification are promising ways to adapt Deep Neural Networks (DNNs) to mobile devices.
In this work, we aim to obtain both features by applying a convergent relaxation quantization algorithm, Binary-Relax (BR), to a robust adversarially trained model, ResNets Ensemble.
We design a trade-off loss function that helps DNNs preserve their natural accuracy and improve the channel sparsity.
arXiv Detail & Related papers (2020-08-31T00:44:59Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.