Exploring Architectural Ingredients of Adversarially Robust Deep Neural
Networks
- URL: http://arxiv.org/abs/2110.03825v1
- Date: Thu, 7 Oct 2021 23:13:33 GMT
- Title: Exploring Architectural Ingredients of Adversarially Robust Deep Neural
Networks
- Authors: Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James
Bailey, Xingjun Ma
- Abstract summary: Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
- Score: 98.21130211336964
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) are known to be vulnerable to adversarial
attacks. A range of defense methods have been proposed to train adversarially
robust DNNs, among which adversarial training has demonstrated promising
results. However, despite preliminary understandings developed for adversarial
training, it is still not clear, from the architectural perspective, what
configurations can lead to more robust DNNs. In this paper, we address this gap
via a comprehensive investigation on the impact of network width and depth on
the robustness of adversarially trained DNNs. Specifically, we make the
following key observations: 1) more parameters (higher model capacity) do not
necessarily help adversarial robustness; 2) reducing capacity at the last stage
(the last group of blocks) of the network can actually improve adversarial
robustness; and 3) under the same parameter budget, there exists an optimal
architectural configuration for adversarial robustness. We also provide a
theoretical analysis explaining why such network configurations can help
robustness. These architectural insights can help design adversarially robust
DNNs. Code is available at \url{https://github.com/HanxunH/RobustWRN}.
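To make observation 2) concrete, below is a minimal PyTorch sketch, assuming a WideResNet-style design; the ConfigurableWRN class and its widths knob are illustrative and are not the authors' RobustWRN code (see the linked repository for that):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Pre-activation residual block, as used in WideResNet-style models."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.shortcut = (nn.Conv2d(in_ch, out_ch, 1, stride, bias=False)
                         if stride != 1 or in_ch != out_ch else nn.Identity())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(x))
        out = self.conv2(self.relu(self.bn2(self.conv1(out))))
        return out + self.shortcut(x)

class ConfigurableWRN(nn.Module):
    """WRN-style network whose per-stage widths are free knobs.

    `widths` is a hypothetical configuration parameter: a standard
    WRN-34-10 uses roughly (160, 320, 640); shrinking the last entry
    mimics the paper's observation that reducing last-stage capacity
    can improve adversarial robustness.
    """
    def __init__(self, widths=(160, 320, 320), blocks_per_stage=5, num_classes=10):
        super().__init__()
        layers = [nn.Conv2d(3, 16, 3, 1, 1, bias=False)]
        in_ch = 16
        for i, w in enumerate(widths):
            for j in range(blocks_per_stage):
                stride = 2 if (i > 0 and j == 0) else 1  # downsample at stage entry
                layers.append(BasicBlock(in_ch, w, stride))
                in_ch = w
        layers += [nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
                   nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                   nn.Linear(in_ch, num_classes)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Compare a standard-width model against one with a narrower last stage.
standard = ConfigurableWRN(widths=(160, 320, 640))
reduced = ConfigurableWRN(widths=(160, 320, 320))
x = torch.randn(2, 3, 32, 32)
print(standard(x).shape, reduced(x).shape)  # both: torch.Size([2, 10])
print(sum(p.numel() for p in standard.parameters()),
      sum(p.numel() for p in reduced.parameters()))
```

Training both variants under the same adversarial training recipe and comparing their robust accuracy is the style of comparison the paper performs.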
Related papers
- A Theoretical Perspective on Subnetwork Contributions to Adversarial
Robustness [2.064612766965483]
This paper investigates how the adversarial robustness of a subnetwork contributes to the robustness of the entire network.
Experiments demonstrate that a robust subnetwork can promote full-network robustness and reveal the layer-wise dependencies required for this robustness to be achieved.
arXiv Detail & Related papers (2023-07-07T19:16:59Z)
- Adversarially Robust Neural Architecture Search for Graph Neural Networks [45.548352741415556]
Graph Neural Networks (GNNs) are prone to adversarial attacks, which pose serious threats to applying GNNs in risk-sensitive domains.
Existing defense methods neither guarantee performance when facing new data/tasks or adversarial attacks nor provide insights into GNN robustness from an architectural perspective.
We propose a novel Robust Neural Architecture search framework for GNNs (G-RNA).
We show that G-RNA significantly outperforms manually designed robust GNNs and vanilla graph NAS baselines by 12.1% to 23.4% under adversarial attacks.
arXiv Detail & Related papers (2023-04-09T06:00:50Z)
- Revisiting Residual Networks for Adversarial Robustness: An Architectural Perspective [22.59262601575886]
We focus on residual networks and consider architecture design at the block level, i.e., topology, kernel size, activation, and normalization.
We present a portfolio of adversarially robust residual networks, RobustResNets, spanning a broad spectrum of model capacities.
arXiv Detail & Related papers (2022-12-21T13:19:25Z)
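The block-level design space named in the RobustResNets entry above (kernel size, activation, normalization) can be made concrete with a short sketch; the SearchableResBlock class and its option tables are hypothetical stand-ins, not the paper's actual search space:

```python
import torch
import torch.nn as nn

class SearchableResBlock(nn.Module):
    """Residual block whose kernel size, activation, and normalization are
    exposed as design choices, illustrating a block-level design space
    (names and defaults here are illustrative, not the paper's)."""
    ACTS = {"relu": nn.ReLU, "silu": nn.SiLU, "gelu": nn.GELU}
    NORMS = {"batch": nn.BatchNorm2d, "instance": nn.InstanceNorm2d}

    def __init__(self, channels, kernel_size=3, act="relu", norm="batch"):
        super().__init__()
        pad = kernel_size // 2  # odd kernels keep spatial dims unchanged
        Act, Norm = self.ACTS[act], self.NORMS[norm]
        self.body = nn.Sequential(
            Norm(channels), Act(),
            nn.Conv2d(channels, channels, kernel_size, padding=pad, bias=False),
            Norm(channels), Act(),
            nn.Conv2d(channels, channels, kernel_size, padding=pad, bias=False),
        )

    def forward(self, x):
        return x + self.body(x)

# Enumerate a few candidate block configurations from this small design space.
x = torch.randn(1, 32, 16, 16)
for k in (3, 5):
    for act in ("relu", "silu"):
        block = SearchableResBlock(32, kernel_size=k, act=act)
        assert block(x).shape == x.shape
```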
- On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential for handling time-dependent and event-driven data.
Yet there has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves into the intrinsic structures of SNNs, elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
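As a rough illustration of the mechanism in the entry above, here is a minimal Bayes-by-backprop-style layer in PyTorch, where each forward pass samples weights from a learned Gaussian posterior; VariationalLinear is a generic sketch, not the BNN-DenseNet implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalLinear(nn.Module):
    """Linear layer with a Gaussian weight posterior. Each forward pass
    samples fresh weights, which is the source of the randomness that
    BNN-style defenses rely on. A simplified sketch only."""
    def __init__(self, in_f, out_f):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(out_f, in_f))
        self.rho = nn.Parameter(torch.full((out_f, in_f), -5.0))  # softplus(rho) = sigma
        nn.init.kaiming_normal_(self.mu)

    def forward(self, x):
        sigma = F.softplus(self.rho)
        # Reparameterization trick: w = mu + sigma * eps, eps ~ N(0, I)
        w = self.mu + sigma * torch.randn_like(sigma)
        return F.linear(x, w)

    def kl(self):
        # KL divergence to a standard normal prior, added to the training loss.
        sigma = F.softplus(self.rho)
        return (sigma.pow(2) + self.mu.pow(2) - 1 - 2 * sigma.log()).sum() / 2

layer = VariationalLinear(64, 10)
x = torch.randn(4, 64)
print(layer(x).shape)     # torch.Size([4, 10]); values differ across calls
print(layer.kl().item())  # KL regularizer term
```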
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of backbone CNNs that already have satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Improving Neural Network Robustness through Neighborhood Preserving Layers [0.751016548830037]
We demonstrate a novel neural network architecture that can incorporate such neighborhood preserving layers and can be trained efficiently.
We empirically show that our designed network architecture is more robust against state-of-the-art gradient-descent-based attacks.
arXiv Detail & Related papers (2021-01-28T01:26:35Z)
- Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters [11.665517294899724]
This paper explores the security enhancement of Spiking Neural Networks (SNNs) through internal structural parameters.
To the best of our knowledge, this is the first work that investigates the impact of structural parameters on the robustness of SNNs to adversarial attacks.
arXiv Detail & Related papers (2020-12-09T21:09:03Z)
- HYDRA: Pruning Adversarially Robust Neural Networks [58.061681100058316]
Deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size.
We propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune.
We demonstrate that our approach, titled HYDRA, achieves compressed networks with state-of-the-art benign and robust accuracy simultaneously.
arXiv Detail & Related papers (2020-02-24T19:54:53Z)
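A hedged sketch of the HYDRA idea above: learnable importance scores are trained against an adversarial loss (simulated here with a random perturbation) and a top-k mask keeps the highest-scoring connections; MaskedLinear and all names are illustrative, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Linear layer with learnable importance scores per weight. Keeping the
    top-k scores yields a pruning mask; optimizing the scores against an
    adversarial loss makes the pruning robustness-aware. Sketch only."""
    def __init__(self, in_f, out_f, sparsity=0.9):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_f, in_f))
        nn.init.kaiming_normal_(self.weight)
        self.weight.requires_grad_(False)  # freeze weights; learn scores instead
        self.scores = nn.Parameter(self.weight.detach().abs().clone())
        self.k = max(1, int((1 - sparsity) * self.weight.numel()))

    def forward(self, x):
        # Straight-through top-k mask: binary in the forward pass,
        # gradients flow to the scores in the backward pass.
        thresh = self.scores.flatten().topk(self.k).values.min()
        mask = (self.scores >= thresh).float()
        mask = mask + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask)

layer = MaskedLinear(784, 10, sparsity=0.9)
x = torch.randn(8, 784)
x_adv = x + 0.03 * torch.randn_like(x).sign()  # stand-in for a PGD example
loss = F.cross_entropy(layer(x_adv), torch.randint(0, 10, (8,)))
loss.backward()  # gradients land on layer.scores, guiding what to prune
print(layer.scores.grad.abs().sum() > 0)
```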