Neural Architecture Dilation for Adversarial Robustness
- URL: http://arxiv.org/abs/2108.06885v1
- Date: Mon, 16 Aug 2021 03:58:00 GMT
- Title: Neural Architecture Dilation for Adversarial Robustness
- Authors: Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu
- Abstract summary: A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of backbone CNNs that already have satisfactory accuracy.
Under minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
- Score: 56.18555072877193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the tremendous advances in the architecture and scale of convolutional
neural networks (CNNs) over the past few decades, they can easily reach or even
exceed the performance of humans in certain tasks. However, a recently
discovered shortcoming of CNNs is that they are vulnerable to adversarial
attacks. Although the adversarial robustness of CNNs can be improved by
adversarial training, there is a trade-off between standard accuracy and
adversarial robustness. From the neural architecture perspective, this paper
aims to improve the adversarial robustness of backbone CNNs that already have
satisfactory accuracy. Under minimal computational overhead, the introduced
dilation architecture is expected to preserve the standard performance of the
backbone CNN while pursuing adversarial robustness.
Theoretical analyses on the standard and adversarial error bounds naturally
motivate the proposed neural architecture dilation algorithm. Experimental
results on real-world datasets and benchmark neural networks demonstrate the
effectiveness of the proposed algorithm to balance the accuracy and adversarial
robustness.
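To make the idea concrete, below is a minimal PyTorch sketch of one way such a dilation architecture could attach to a backbone: a pretrained block is frozen to preserve standard accuracy, and only a parallel dilated-convolution branch is trained with a loss mixing clean and PGD-adversarial cross-entropy. The `DilatedBranch` module, the mixing weight `beta`, and the PGD hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch, assuming a frozen pretrained backbone block and a small
# parallel dilated-convolution branch; DilatedBranch, beta, and the PGD
# settings are illustrative, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedBranch(nn.Module):
    """Lightweight branch built from a dilated convolution."""
    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)

    def forward(self, x):
        return F.relu(self.conv(x))

class DilatedBlock(nn.Module):
    """A backbone block augmented with a residual dilation branch."""
    def __init__(self, block: nn.Module, channels: int):
        super().__init__()
        self.block = block
        self.branch = DilatedBranch(channels)
        for p in self.block.parameters():
            p.requires_grad = False  # freeze backbone to keep standard accuracy

    def forward(self, x):
        return self.block(x) + self.branch(x)

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft adversarial examples with projected gradient descent (PGD)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def train_step(model, optimizer, x, y, beta=0.5):
    """Update only the dilation branch; the loss balances standard
    accuracy (clean term) and adversarial robustness (PGD term)."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = ((1 - beta) * F.cross_entropy(model(x), y)
            + beta * F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The optimizer would be built over the branch parameters only, e.g. `torch.optim.SGD(model.branch.parameters(), lr=0.1)`. Note that the paper's actual algorithm selects the dilation architecture through search guided by its standard and adversarial error-bound analyses, rather than fixing a single hand-picked branch as in this sketch.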
Related papers
- Impact of White-Box Adversarial Attacks on Convolutional Neural Networks [0.6138671548064356]
We investigate the susceptibility of Convolutional Neural Networks (CNNs) to white-box adversarial attacks.
Our study provides insights into the robustness of CNNs against adversarial threats.
arXiv Detail & Related papers (2024-10-02T21:24:08Z) - A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve the new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z) - Differentiable Search of Accurate and Robust Architectures [22.435774101990752]
Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks.
Adversarial training has been drawing increasing attention because of its simplicity and effectiveness.
We propose DSARA to automatically search for the neural architectures that are accurate and robust after adversarial training.
arXiv Detail & Related papers (2022-12-28T08:36:36Z) - Understanding Adversarial Robustness from Feature Maps of Convolutional Layers [23.42376264664302]
The adversarial robustness of a neural network mainly relies on two factors: model capacity and anti-perturbation ability.
We study the anti-perturbation ability of the network from the feature maps of convolutional layers.
Non-trivial improvements in terms of both natural accuracy and adversarial robustness can be achieved under various attack and defense mechanisms.
arXiv Detail & Related papers (2022-02-25T00:14:59Z) - Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z) - AdvRush: Searching for Adversarially Robust Neural Architectures [17.86463546971522]
We propose AdvRush, a novel adversarial robustness-aware neural architecture search algorithm.
Through a regularizer that favors a candidate architecture with a smoother input loss landscape, AdvRush successfully discovers an adversarially robust neural architecture.
arXiv Detail & Related papers (2021-08-03T04:27:33Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - Extreme Value Preserving Networks [65.2037926048262]
Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures, making them non-robust to adversarial perturbations over textures.
This paper aims to leverage good properties of SIFT to renovate CNN architectures towards better accuracy and robustness.
arXiv Detail & Related papers (2020-11-17T02:06:52Z) - Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this design on convolutional neural networks (CNNs).
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.