On Adversarial Robustness: A Neural Architecture Search perspective
- URL: http://arxiv.org/abs/2007.08428v4
- Date: Thu, 26 Aug 2021 09:01:16 GMT
- Title: On Adversarial Robustness: A Neural Architecture Search perspective
- Authors: Chaitanya Devaguptapu, Devansh Agarwal, Gaurav Mittal, Pulkit
Gopalani, Vineeth N Balasubramanian
- Abstract summary: This work is the first large-scale study to understand adversarial robustness purely from an architectural perspective.
We show that random sampling in the search space of DARTS with simple ensembling can improve the robustness to PGD attack by nearly 12%.
We show that NAS, which is popular for achieving SoTA accuracy, can provide adversarial accuracy as a free add-on without any form of adversarial training.
- Score: 20.478741635006113
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial robustness of deep learning models has gained much traction in
the last few years. Various attacks and defenses are proposed to improve the
adversarial robustness of modern-day deep learning architectures. While all
these approaches help improve the robustness, one promising direction for
improving adversarial robustness is unexplored, i.e., the complex topology of
the neural network architecture. In this work, we address the following
question: can the complex topology of a neural network confer adversarial
robustness without any form of adversarial training? We answer this
empirically by experimenting with different hand-crafted and NAS-based
architectures. Our findings show that, for small-scale attacks, NAS-based
architectures are more robust for small-scale datasets and simple tasks than
hand-crafted architectures. However, as the size of the dataset or the
complexity of task increases, hand-crafted architectures are more robust than
NAS-based architectures. Our work is the first large-scale study to understand
adversarial robustness purely from an architectural perspective. Our study
shows that random sampling in the search space of DARTS (a popular NAS method)
with simple ensembling can improve the robustness to PGD attack by nearly 12%.
We show that NAS, which is popular for achieving SoTA accuracy, can provide
adversarial accuracy as a free add-on without any form of adversarial training.
Our results show that leveraging the search space of NAS methods with methods
like ensembles can be an excellent way to achieve adversarial robustness
without any form of adversarial training. We also introduce a metric that can
be used to calculate the trade-off between clean accuracy and adversarial
robustness. Code and pre-trained models will be made available at
https://github.com/tdchaitanya/nas-robustness
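The abstract's recipe, train several independently initialised models, ensemble them, and compare clean accuracy against accuracy under a PGD attack, can be illustrated end to end on a toy problem. The sketch below is a minimal stand-in, not the paper's implementation: logistic-regression models replace randomly sampled DARTS architectures, and the harmonic-mean trade-off score at the end is a hypothetical example, since the paper's actual metric is defined only in the full text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2-D.
n = 200
X = np.vstack([rng.normal(-1.5, 1.0, (n, 2)), rng.normal(1.5, 1.0, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, steps=300, lr=0.5, seed=0):
    """Plain gradient-descent logistic regression (no adversarial training)."""
    r = np.random.default_rng(seed)
    w = r.normal(0, 0.1, X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        g = p - y                              # dL/dlogit for cross-entropy
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def pgd_attack(X, y, w, b, eps=0.5, alpha=0.1, steps=10):
    """L-infinity PGD against a single logistic-regression model."""
    X_adv = X.copy()
    for _ in range(steps):
        p = sigmoid(X_adv @ w + b)
        grad = np.outer(p - y, w)              # input gradient of the loss
        X_adv = X_adv + alpha * np.sign(grad)  # ascent step
        X_adv = np.clip(X_adv, X - eps, X + eps)  # project into the eps-ball
    return X_adv

def ensemble_accuracy(models, X, y):
    """Majority-vote ensemble accuracy ('simple ensembling')."""
    votes = np.mean([(sigmoid(X @ w + b) > 0.5) for w, b in models], axis=0)
    return np.mean((votes > 0.5) == y)

# An ensemble of differently initialised models stands in for randomly
# sampled DARTS architectures, which would need a full NAS code base.
models = [train_logreg(X, y, seed=s) for s in range(5)]

clean_acc = ensemble_accuracy(models, X, y)
# Attack crafted against the first model, evaluated on the whole ensemble.
X_adv = pgd_attack(X, y, *models[0])
robust_acc = ensemble_accuracy(models, X_adv, y)

# Hypothetical harmonic-mean trade-off score; the paper's own metric
# is not specified in this abstract.
tradeoff = 2 * clean_acc * robust_acc / (clean_acc + robust_acc + 1e-12)
print(f"clean={clean_acc:.2f} robust={robust_acc:.2f} tradeoff={tradeoff:.2f}")
```

A single trade-off number makes models comparable along both axes at once: a model with high clean accuracy but near-zero robust accuracy scores poorly, just as a robust but inaccurate one does.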
Related papers
- Reinforced Compressive Neural Architecture Search for Versatile Adversarial Robustness [32.914986455418]
We propose a Reinforced Compressive Neural Architecture Search (RC-NAS) for Versatile Adversarial Robustness.
Specifically, we define task settings that compose datasets, adversarial attacks, and teacher network information.
Experiments show that our framework could achieve adaptive compression towards different initial teacher networks, datasets, and adversarial attacks.
arXiv Detail & Related papers (2024-06-10T20:59:52Z)
- Towards Accurate and Robust Architectures via Neural Architecture Search [3.4014222238829497]
Adversarial training improves accuracy and robustness by adjusting the weight connections within the architecture.
We propose ARNAS to search for accurate and robust architectures for adversarial training.
arXiv Detail & Related papers (2024-05-09T02:16:50Z)
- Robust NAS under adversarial training: benchmark, theory, and beyond [55.51199265630444]
We release a comprehensive data set that encompasses both clean accuracy and robust accuracy for a vast array of adversarially trained networks.
We also establish a generalization theory for searching architecture in terms of clean accuracy and robust accuracy under multi-objective adversarial training.
arXiv Detail & Related papers (2024-03-19T20:10:23Z)
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- Generalizable Lightweight Proxy for Robust NAS against Diverse Perturbations [59.683234126055694]
Recent neural architecture search (NAS) frameworks have been successful in finding optimal architectures for given conditions.
We propose a novel lightweight robust zero-cost proxy that considers the consistency across features, parameters, and gradients of both clean and perturbed images.
Our approach facilitates an efficient and rapid search for neural architectures capable of learning generalizable features that exhibit robustness across diverse perturbations.
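The proxy idea described above, scoring an architecture by how consistent its features and gradients remain between clean and perturbed inputs, can be sketched on a toy linear model. This is an illustrative consistency score under assumed details (squared-error loss, a single weight matrix, cosine similarity), not the paper's actual zero-cost proxy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny one-layer "network" so features and gradients are cheap to compute.
W = rng.normal(0, 0.5, (4, 3))               # weights: 4 inputs -> 3 outputs
x_clean = rng.normal(size=4)
x_pert = x_clean + 0.1 * rng.normal(size=4)  # perturbed copy of the input

def forward_and_grad(x, W, t):
    """Squared-error loss against target t; returns features and dL/dW."""
    f = W.T @ x                    # features
    g = np.outer(x, f - t)         # gradient of 0.5*||f - t||^2 w.r.t. W
    return f, g

t = np.zeros(3)
f_c, g_c = forward_and_grad(x_clean, W, t)
f_p, g_p = forward_and_grad(x_pert, W, t)

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A crude consistency score: average similarity of features and gradients
# across the clean and perturbed inputs. Higher suggests the model's
# behaviour is stable under the perturbation.
proxy = 0.5 * (cosine(f_c, f_p) + cosine(g_c, g_p))
print(round(proxy, 3))
```

Because no training is required, such a score can rank many candidate architectures quickly, which is what makes zero-cost proxies attractive for robust NAS.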
arXiv Detail & Related papers (2023-06-08T08:34:26Z)
- Efficient Search of Comprehensively Robust Neural Architectures via Multi-fidelity Evaluation [1.9100854225243937]
We propose a novel efficient search of comprehensively robust neural architectures via multi-fidelity evaluation (ES-CRNA-ME).
Specifically, we first search for comprehensively robust architectures under multiple types of evaluations using the weight-sharing-based NAS method.
We reduce the number of robustness evaluations by the correlation analysis, which can incorporate similar evaluations and decrease the evaluation cost.
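The cost-saving step above, using correlation analysis to merge redundant robustness evaluations, can be sketched as follows. The data and threshold are made up for illustration; the paper's actual procedure may differ.

```python
import numpy as np

# Scores of 50 candidate architectures under three robustness evaluations
# (rows: architectures, columns: evaluation types). Synthetic data in which
# evals A and B are near-duplicates while eval C is independent.
rng = np.random.default_rng(1)
base = rng.normal(size=(50, 1))
scores = np.hstack([
    base + 0.05 * rng.normal(size=(50, 1)),  # eval A
    base + 0.05 * rng.normal(size=(50, 1)),  # eval B, nearly identical to A
    rng.normal(size=(50, 1)),                # eval C, independent
])

def prune_evaluations(scores, threshold=0.9):
    """Keep one representative from each group of highly correlated evals."""
    corr = np.corrcoef(scores, rowvar=False)  # columns are variables
    keep = []
    for j in range(scores.shape[1]):
        # Keep eval j only if it is not redundant with an already-kept eval.
        if all(abs(corr[j, k]) < threshold for k in keep):
            keep.append(j)
    return keep

print(prune_evaluations(scores))  # eval B is dropped as redundant with A
```

Each dropped evaluation is one fewer robustness measurement per candidate architecture, which is where the cost reduction comes from.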
arXiv Detail & Related papers (2023-05-12T08:28:58Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
Robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve the new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- DSRNA: Differentiable Search of Robust Neural Architectures [11.232234265070753]
In deep learning applications, the architectures of deep neural networks are crucial in achieving high accuracy.
We propose methods to perform differentiable search of robust neural architectures.
Our methods are more robust to various norm-bound attacks than several robust NAS baselines.
arXiv Detail & Related papers (2020-12-11T04:52:54Z)
- Disturbance-immune Weight Sharing for Neural Architecture Search [96.93812980299428]
We propose a disturbance-immune update strategy for model updating.
We theoretically analyze the effectiveness of our strategy in alleviating the performance disturbance risk.
arXiv Detail & Related papers (2020-03-29T17:54:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.