RobArch: Designing Robust Architectures against Adversarial Attacks
- URL: http://arxiv.org/abs/2301.03110v1
- Date: Sun, 8 Jan 2023 21:19:52 GMT
- Title: RobArch: Designing Robust Architectures against Adversarial Attacks
- Authors: ShengYun Peng, Weilin Xu, Cory Cornelius, Kevin Li, Rahul Duggal, Duen
Horng Chau and Jason Martin
- Abstract summary: Adversarial Training is the most effective approach for improving the robustness of Deep Neural Networks (DNNs).
We present the first large-scale systematic study on the robustness of DNN architecture components under fixed parameter budgets.
We distill 18 actionable robust network design guidelines that empower model developers to gain deep insights.
- Score: 7.7720465119590845
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Adversarial Training is the most effective approach for improving the
robustness of Deep Neural Networks (DNNs). However, compared to the large body
of research in optimizing the adversarial training process, there are few
investigations into how architecture components affect robustness, and they
rarely constrain model capacity. Thus, it is unclear where robustness precisely
comes from. In this work, we present the first large-scale systematic study on
the robustness of DNN architecture components under fixed parameter budgets.
Through our investigation, we distill 18 actionable robust network design
guidelines that empower model developers to gain deep insights. We demonstrate
these guidelines' effectiveness by introducing the novel Robust Architecture
(RobArch) model that instantiates the guidelines to build a family of
top-performing models across parameter capacities against strong adversarial
attacks. RobArch achieves the new state-of-the-art AutoAttack accuracy on the
RobustBench ImageNet leaderboard. The code is available at
$\href{https://github.com/ShengYun-Peng/RobArch}{\text{this url}}$.
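The adversarial training that the abstract refers to solves a min-max problem: an inner loop crafts worst-case perturbations (commonly with Projected Gradient Descent, PGD) while an outer loop updates the model on those perturbed inputs. The sketch below illustrates that loop on a tiny logistic-regression model in NumPy; it is a minimal illustration under assumed hyperparameters (`eps`, `alpha`, `steps`), not the RobArch training recipe or its architecture.

```python
# Minimal sketch of PGD-based adversarial training on a toy
# logistic-regression model. Hyperparameters are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(w, b, x, y, eps=0.3, alpha=0.1, steps=10):
    """Inner maximization: gradient-ascent steps on the loss,
    projected back into an L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)
        grad_x = np.outer(p - y, w)          # d(BCE loss)/dx
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection
    return x_adv

def adversarial_train(x, y, lr=0.5, epochs=200, seed=0):
    """Outer minimization: fit the model on adversarial examples."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=x.shape[1])
    b = 0.0
    for _ in range(epochs):
        x_adv = pgd_attack(w, b, x, y)       # inner max
        p = sigmoid(x_adv @ w + b)           # outer min on x_adv
        w -= lr * (x_adv.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Toy linearly separable data with margin larger than eps
x = np.array([[0., 0.], [0., 1.], [2., 2.], [2., 3.]])
y = np.array([0., 0., 1., 1.])
w, b = adversarial_train(x, y)
preds = (sigmoid(x @ w + b) > 0.5).astype(float)
```

A DNN version follows the same two-loop structure; only the gradient computation changes. Benchmarks such as AutoAttack, mentioned above, evaluate the resulting model with a stronger ensemble of attacks than this single PGD adversary.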
Related papers
- Learn from the Past: A Proxy Guided Adversarial Defense Framework with
Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, relying on direct iterative updates for target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy-guided defense framework, LAST (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
- Neural Architecture Design and Robustness: A Dataset [11.83842808044211]
We introduce a database on neural architecture design and robustness evaluations.
We evaluate all these networks on a range of common adversarial attacks and corruption types.
We find that carefully crafting the topology of a network can have substantial impact on its robustness.
arXiv Detail & Related papers (2023-06-11T16:02:14Z)
- Revisiting Residual Networks for Adversarial Robustness: An
Architectural Perspective [22.59262601575886]
We focus on residual networks and consider architecture design at the block level, i.e., topology, kernel size, activation, and normalization.
We present a portfolio of adversarially robust residual networks, RobustResNets, spanning a broad spectrum of model capacities.
arXiv Detail & Related papers (2022-12-21T13:19:25Z)
- RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact
DNN [28.94653593443991]
Recently, backdoor attacks have become an emerging threat to the security of deep neural network (DNN) models.
In this paper, we propose to study and develop Robust and Imperceptible Backdoor Attack against Compact DNN models (RIBAC).
arXiv Detail & Related papers (2022-08-22T21:27:09Z)
- Neural Architecture Search for Speech Emotion Recognition [72.1966266171951]
We propose to apply neural architecture search (NAS) techniques to automatically configure the SER models.
We show that NAS can improve SER performance (54.89% to 56.28%) while maintaining model parameter sizes.
arXiv Detail & Related papers (2022-03-31T10:16:10Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural
Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- RobustART: Benchmarking Robustness on Architecture Design and Training
Techniques [170.3297213957074]
Deep neural networks (DNNs) are vulnerable to adversarial noises.
There are no comprehensive studies of how architecture design and training techniques affect robustness.
We propose the first comprehensive investigation benchmark on ImageNet.
arXiv Detail & Related papers (2021-09-11T08:01:14Z)
- A Design Space Study for LISTA and Beyond [79.76740811464597]
In recent years, great success has been witnessed in building problem-specific deep networks from unrolling iterative algorithms.
This paper revisits the role of unrolling as a design approach for deep networks, asking to what extent the resulting special architecture is superior, and whether we can find better ones.
Using LISTA for sparse recovery as a representative example, we conduct the first thorough design space study for the unrolled models.
arXiv Detail & Related papers (2021-04-08T23:01:52Z)
- On Adversarial Robustness: A Neural Architecture Search perspective [20.478741635006113]
This work is the first large-scale study to understand adversarial robustness purely from an architectural perspective.
We show that random sampling in the search space of DARTS with simple ensembling can improve the robustness to PGD attack by nearly 12%.
We show that NAS, which is popular for achieving SoTA accuracy, can provide adversarial accuracy as a free add-on without any form of adversarial training.
arXiv Detail & Related papers (2020-07-16T16:07:10Z)
- Improved Adversarial Training via Learned Optimizer [101.38877975769198]
We propose a framework to improve the robustness of adversarial training models.
By co-training the optimizer's parameters with the model's weights, the proposed framework consistently improves robustness and adapts the steps for update directions.
arXiv Detail & Related papers (2020-04-25T20:15:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.