Skip Connections Matter: On the Transferability of Adversarial Examples
Generated with ResNets
- URL: http://arxiv.org/abs/2002.05990v1
- Date: Fri, 14 Feb 2020 12:09:21 GMT
- Title: Skip Connections Matter: On the Transferability of Adversarial Examples
Generated with ResNets
- Authors: Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma
- Abstract summary: Skip connections are an essential component of current state-of-the-art deep neural networks (DNNs)
Use of skip connections allows easier generation of highly transferable adversarial examples.
We conduct comprehensive transfer attacks against state-of-the-art DNNs including ResNets, DenseNets, Inceptions, Inception-ResNet, Squeeze-and-Excitation Network (SENet)
- Score: 83.12737997548645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Skip connections are an essential component of current state-of-the-art deep
neural networks (DNNs) such as ResNet, WideResNet, DenseNet, and ResNeXt.
Despite their huge success in building deeper and more powerful DNNs, we
identify a surprising security weakness of skip connections in this paper. Use
of skip connections allows easier generation of highly transferable adversarial
examples. Specifically, in ResNet-like (with skip connections) neural networks,
gradients can backpropagate through either skip connections or residual
modules. We find that weighting gradients from the skip connections more
heavily than those from the residual modules, according to a decay factor,
allows one to craft adversarial examples with high transferability. Our method
is termed the Skip Gradient Method (SGM). We conduct comprehensive transfer attacks against
state-of-the-art DNNs including ResNets, DenseNets, Inceptions,
Inception-ResNet, Squeeze-and-Excitation Network (SENet) and robustly trained
DNNs. We show that employing SGM on the gradient flow can greatly improve the
transferability of crafted attacks in almost all cases. Furthermore, SGM can be
easily combined with existing black-box attack techniques, and obtain high
improvements over state-of-the-art transferability methods. Our findings not
only motivate new research into the architectural vulnerability of DNNs, but
also open up further challenges for the design of secure DNN architectures.
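The mechanism behind SGM can be sketched with a linearized toy model (a hypothetical illustration, not the authors' released code): for a residual block z_{l+1} = z_l + f(z_l), standard backpropagation multiplies the gradient by (I + ∂f/∂z_l) per block, and SGM rescales the residual term by a decay factor gamma while leaving the skip-connection term intact.

```python
import numpy as np

def sgm_input_gradient(ws, gamma):
    """Gradient of sum(output) w.r.t. the input for a stack of toy linear
    residual blocks z_{l+1} = z_l + W_l @ z_l.

    Standard backprop multiplies (I + W_l^T) per block; the Skip Gradient
    Method instead uses (I + gamma * W_l^T), damping the residual-module
    contribution while keeping the skip-connection contribution whole.
    Illustrative sketch only -- in practice the decay is applied via
    backward hooks on a trained ResNet's residual branches.
    """
    dim = ws[0].shape[0]
    g = np.ones(dim)  # d(sum of output) / d(output)
    for w in reversed(ws):
        # skip-connection term (g) + decayed residual-module term
        g = g + gamma * (w.T @ g)
    return g

# Example: one 1-D block with W = [[2.0]].
# gamma = 1.0 recovers the plain gradient 1 + 2 = 3.0;
# gamma = 0.5 gives 1 + 0.5 * 2 = 2.0, shifting the gradient's
# weight toward the skip connection.
```

The resulting input gradient is then used as usual in an attack step (e.g. the sign of the gradient in an FGSM-style update), which is why SGM composes easily with existing black-box attack techniques.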
Related papers
- A Hard-Label Cryptanalytic Extraction of Non-Fully Connected Deep Neural Networks using Side-Channel Attacks [0.7499722271664147]
Protection of the intellectual property of Deep Neural Networks (DNNs) is still an issue and an emerging research field.
Recent works have successfully extracted fully-connected DNNs using cryptanalytic methods in hard-label settings.
We introduce a new end-to-end attack framework designed for model extraction of embedded DNNs with high fidelity.
arXiv Detail & Related papers (2024-11-15T13:19:59Z) - On the Adversarial Transferability of Generalized "Skip Connections" [83.71752155227888]
Skip connection is an essential ingredient for modern deep models to be deeper and more powerful.
We find that using more gradients from the skip connections rather than the residual modules during backpropagation allows one to craft adversarial examples with high transferability.
We conduct comprehensive transfer attacks against various models including ResNets, Transformers, Inceptions, Neural Architecture Search, and Large Language Models.
arXiv Detail & Related papers (2024-10-11T16:17:47Z) - Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization [57.87950229651958]
Quantized neural networks (QNNs) have received increasing attention in resource-constrained scenarios due to their exceptional generalizability.
Previous studies claim that transferability is difficult to achieve across QNNs with different bitwidths.
We propose quantization aware attack (QAA), which fine-tunes a QNN substitute model with a multiple-bitwidth training objective.
arXiv Detail & Related papers (2023-05-10T03:46:53Z) - Deep Architecture Connectivity Matters for Its Convergence: A
Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z) - Deep Learning without Shortcuts: Shaping the Kernel with Tailored
Rectifiers [83.74380713308605]
We develop a new type of transformation that is fully compatible with a variant of ReLUs -- Leaky ReLUs.
We show in experiments that our method, which introduces negligible extra computational cost, achieves validation accuracies with deep vanilla networks that are competitive with ResNets.
arXiv Detail & Related papers (2022-03-15T17:49:08Z) - Pruning of Deep Spiking Neural Networks through Gradient Rewiring [41.64961999525415]
Spiking Neural Networks (SNNs) have attracted great attention due to their biological plausibility and high energy efficiency on neuromorphic chips.
Most existing methods directly apply pruning approaches in artificial neural networks (ANNs) to SNNs, which ignore the difference between ANNs and SNNs.
We propose gradient rewiring (Grad R), a joint learning algorithm of connectivity and weight for SNNs, that enables us to seamlessly optimize network structure without retraining.
arXiv Detail & Related papers (2021-05-11T10:05:53Z) - Deep Residual Learning in Spiking Neural Networks [36.16846259899793]
Spiking Neural Networks (SNNs) present optimization difficulties for gradient-based approaches.
Considering the huge success of ResNet in deep learning, it would be natural to train deep SNNs with residual learning.
We propose spike-element-wise (SEW) ResNet to realize residual learning in deep SNNs.
arXiv Detail & Related papers (2021-02-08T12:22:33Z) - An Integrated Approach to Produce Robust Models with High Efficiency [9.476463361600828]
Quantization and structure simplification are promising ways to adapt Deep Neural Networks (DNNs) to mobile devices.
In this work, we try to obtain both features by applying a convergent relaxation quantization algorithm, Binary-Relax (BR), to a robust adversarial-trained model, ResNets Ensemble.
We design a trade-off loss function that helps DNNs preserve their natural accuracy and improve the channel sparsity.
arXiv Detail & Related papers (2020-08-31T00:44:59Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.