Understanding Adversarial Robustness from Feature Maps of Convolutional Layers
- URL: http://arxiv.org/abs/2202.12435v2
- Date: Mon, 29 Jan 2024 12:54:30 GMT
- Title: Understanding Adversarial Robustness from Feature Maps of Convolutional Layers
- Authors: Cong Xu, Wei Zhang, Jun Wang and Min Yang
- Abstract summary: The adversarial robustness of a neural network mainly relies on two factors: model capacity and anti-perturbation ability.
We study the anti-perturbation ability of the network from the feature maps of convolutional layers.
Non-trivial improvements in terms of both natural accuracy and adversarial robustness can be achieved under various attack and defense mechanisms.
- Score: 23.42376264664302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The adversarial robustness of a neural network mainly relies on two factors:
model capacity and anti-perturbation ability. In this paper, we study the
anti-perturbation ability of the network from the feature maps of convolutional
layers. Our theoretical analysis discovers that larger convolutional feature
maps before average pooling can contribute to better resistance to
perturbations, but the conclusion is not true for max pooling. It brings new
inspiration to the design of robust neural networks and urges us to apply these
findings to improve existing architectures. The proposed modifications are very
simple and only require upsampling the inputs or slightly modifying the stride
configurations of downsampling operators. We verify our approaches on several
benchmark neural network architectures, including AlexNet, VGG, ResNet18, and
PreActResNet18. Non-trivial improvements in terms of both natural accuracy and
adversarial robustness can be achieved under various attack and defense
mechanisms. The code is available at https://github.com/MTandHJ/rcm.
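For concreteness, here is a minimal PyTorch sketch of the two kinds of modification the abstract describes, plus a toy illustration of the average-vs-max pooling contrast. This is not the authors' implementation (see the repository above): `UpsampledInput`, `relax_first_stride`, and `pooling_attenuation_demo` are hypothetical names, and applying the stride relaxation to torchvision's ResNet-18 `conv1` is an assumption about which downsampling operator to modify.

```python
# Minimal sketch, not the authors' code: enlarge convolutional feature
# maps by (1) upsampling inputs or (2) relaxing a downsampling stride.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class UpsampledInput(nn.Module):
    """Modification 1 (assumed form): upsample the input so every
    downstream feature map, including the one fed to average pooling,
    becomes larger."""

    def __init__(self, backbone: nn.Module, scale: int = 2):
        super().__init__()
        self.backbone = backbone
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.interpolate(x, scale_factor=self.scale,
                          mode="bilinear", align_corners=False)
        return self.backbone(x)


def relax_first_stride(model: nn.Module) -> nn.Module:
    """Modification 2 (assumed form): change an early downsampling
    stride from 2 to 1, doubling all subsequent feature-map sizes;
    shown here on torchvision's ResNet-18."""
    model.conv1.stride = (1, 1)
    return model


def pooling_attenuation_demo() -> None:
    """Toy version of the average-vs-max contrast: global average
    pooling spreads a single-pixel perturbation over the whole map,
    while max pooling passes its peak through unchanged."""
    x = torch.zeros(1, 1, 8, 8)
    x[0, 0, 3, 3] = 1.0                # one perturbed pixel
    print(F.avg_pool2d(x, 8).item())   # 1/64 = 0.015625, attenuated
    print(F.max_pool2d(x, 8).item())   # 1.0, unattenuated


if __name__ == "__main__":
    model = UpsampledInput(relax_first_stride(resnet18(num_classes=10)))
    batch = torch.randn(4, 3, 32, 32)  # CIFAR-10-sized inputs
    print(model(batch).shape)          # torch.Size([4, 10])
    pooling_attenuation_demo()
```

Either change enlarges the feature maps that reach the final average pooling, which is the quantity the paper's analysis links to perturbation resistance; the toy demo illustrates why the same enlargement would not help under max pooling.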
Related papers
- Wavelets Beat Monkeys at Adversarial Robustness [0.8702432681310401]
We show how physically inspired structures yield new insights into robustness that were previously only thought possible by meticulously mimicking the human cortex.
arXiv Detail & Related papers (2023-04-19T03:41:30Z)
- Towards Practical Control of Singular Values of Convolutional Layers [65.25070864775793]
Convolutional neural networks (CNNs) are easy to train, but their essential properties, such as generalization error and adversarial robustness, are hard to control.
Recent research demonstrated that singular values of convolutional layers significantly affect such elusive properties.
We offer a principled approach to alleviating constraints of the prior art at the expense of an insignificant reduction in layer expressivity.
arXiv Detail & Related papers (2022-11-24T19:09:44Z)
- Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with dynamics-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Predify: Augmenting deep neural networks with brain-inspired predictive coding dynamics [0.5284812806199193]
We take inspiration from a popular framework in neuroscience: 'predictive coding'.
We show that implementing this strategy into two popular networks, VGG16 and EfficientNetB0, improves their robustness against various corruptions.
arXiv Detail & Related papers (2021-06-04T22:48:13Z)
- On the Adversarial Robustness of Quantized Neural Networks [2.0625936401496237]
It is unclear how model compression techniques may affect the robustness of AI algorithms against adversarial attacks.
This paper explores the effect of quantization, one of the most common compression techniques, on the adversarial robustness of neural networks.
arXiv Detail & Related papers (2021-05-01T11:46:35Z)
- Tiny Adversarial Mulit-Objective Oneshot Neural Architecture Search [35.362883630015354]
Most neural network models deployed on mobile devices are tiny. However, tiny neural networks are commonly very vulnerable to attacks.
Our work focuses on how to improve the robustness of tiny neural networks without seriously deteriorating clean accuracy under mobile-level resources.
arXiv Detail & Related papers (2021-02-28T00:54:09Z)
- Improving Neural Network Robustness through Neighborhood Preserving Layers [0.751016548830037]
We demonstrate a novel neural network architecture that can incorporate neighborhood preserving layers and can be trained efficiently.
We empirically show that our designed network architecture is more robust against state-of-the-art gradient-descent-based attacks.
arXiv Detail & Related papers (2021-01-28T01:26:35Z)
- Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
arXiv Detail & Related papers (2020-04-30T19:12:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.