RobustART: Benchmarking Robustness on Architecture Design and Training
Techniques
- URL: http://arxiv.org/abs/2109.05211v2
- Date: Wed, 15 Sep 2021 08:15:57 GMT
- Title: RobustART: Benchmarking Robustness on Architecture Design and Training
Techniques
- Authors: Shiyu Tang and Ruihao Gong and Yan Wang and Aishan Liu and Jiakai Wang
and Xinyun Chen and Fengwei Yu and Xianglong Liu and Dawn Song and Alan
Yuille and Philip H.S. Torr and Dacheng Tao
- Abstract summary: Deep neural networks (DNNs) are vulnerable to adversarial noises.
There are no comprehensive studies of how architecture design and training techniques affect robustness.
We propose RobustART, the first comprehensive robustness investigation benchmark on ImageNet.
- Score: 170.3297213957074
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are vulnerable to adversarial noises, which
motivates benchmarking model robustness. Existing benchmarks mainly focus
on evaluating the defenses, but there are no comprehensive studies of how
architecture design and general training techniques affect robustness.
Comprehensively benchmarking their relationships will be highly beneficial for
better understanding and developing robust DNNs. Thus, we propose RobustART,
the first comprehensive Robustness investigation benchmark on ImageNet
(including open-source toolkit, pre-trained model zoo, datasets, and analyses)
regarding ARchitecture design (44 human-designed off-the-shelf architectures
and 1200+ networks from neural architecture search) and Training techniques
(10+ general techniques, e.g., data augmentation) towards diverse noises
(adversarial, natural, and system noises). Extensive experiments revealed and
substantiated several insights for the first time, for example: (1) adversarial
training largely improves the clean accuracy and all types of robustness for
Transformers and MLP-Mixers; (2) with comparable sizes, CNNs > Transformers >
MLP-Mixers on robustness against natural and system noises; Transformers >
MLP-Mixers > CNNs on adversarial robustness; (3) for some light-weight
architectures (e.g., EfficientNet, MobileNetV2, and MobileNetV3), increasing
model sizes or using extra training data cannot improve robustness. Our
benchmark http://robust.art/ : (1) presents an open-source platform for
conducting comprehensive evaluation on diverse robustness types; (2) provides a
variety of pre-trained models with different training techniques to facilitate
robustness evaluation; (3) proposes a new view to better understand the
mechanism towards designing robust DNN architectures, backed up by the
analysis. We will continuously contribute to building this ecosystem for the
community.
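As a rough illustration of the per-noise evaluation described above, here is a minimal, hedged sketch in plain PyTorch (not the RobustART toolkit's API) that measures top-1 accuracy of a pre-trained ImageNet classifier under single-step FGSM adversarial noise. The epsilon budget and the choice of ResNet-50 are illustrative assumptions.

```python
# Minimal sketch, NOT the RobustART toolkit API: FGSM robust-accuracy
# evaluation of a pre-trained ImageNet classifier in plain PyTorch.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model = model.to(device).eval()

# Normalization is applied inside the forward pass so the perturbation
# is crafted directly in raw [0, 1] pixel space.
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

def fgsm(x, y, eps=4 / 255):  # eps is an assumed budget
    """Single-step FGSM: move x along the sign of the loss gradient."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(normalize(x)), y)
    (grad,) = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def robust_accuracy(loader, eps=4 / 255):
    """Top-1 accuracy on FGSM-perturbed inputs; loader yields [0, 1] images."""
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm(x, y, eps)
        with torch.no_grad():
            correct += (model(normalize(x_adv)).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```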
Related papers
- Neural Architecture Design and Robustness: A Dataset [11.83842808044211]
We introduce a database of neural architecture designs and their robustness evaluations.
We evaluate all of these networks against a range of common adversarial attacks and corruption types (a corruption-evaluation sketch follows this entry).
We find that carefully crafting the topology of a network can have a substantial impact on its robustness.
arXiv Detail & Related papers (2023-06-11T16:02:14Z)
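Corruption evaluations of this kind are typically ImageNet-C style: the same network is scored across corruption types and severities. Below is a hedged sketch, not the paper's released code; the Gaussian-noise corruption and its severity scale are illustrative assumptions.

```python
# Hedged sketch, not the paper's released code: ImageNet-C-style scoring
# of one model across severities of one corruption type.
import torch

def gaussian_noise(x, severity):
    """Pixel-space Gaussian noise; the sigma-per-severity scale is assumed."""
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]
    return (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)

@torch.no_grad()
def corruption_accuracy(model, loader, corrupt=gaussian_noise,
                        severities=(1, 2, 3, 4, 5)):
    """Mean top-1 accuracy over all severities of a single corruption."""
    model.eval()
    accs = []
    for s in severities:
        correct = total = 0
        for x, y in loader:            # [0, 1] images, integer class labels
            pred = model(corrupt(x, s)).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
        accs.append(correct / total)
    return sum(accs) / len(accs)       # average over severities
```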
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve a new state of the art in adversarial robustness (a minimal adversarial-training sketch follows this entry).
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
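As a reference point for the adversarial-training results above, here is a minimal, hedged sketch of a standard PGD adversarial training loop (in the style of Madry et al.), not ARES-Bench's actual training settings; the epsilon, step size, and step count are illustrative assumptions.

```python
# Minimal sketch of standard PGD adversarial training, not ARES-Bench's
# exact recipe; eps/alpha/steps below are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Projected gradient ascent on the loss, inside an L-inf ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_train_epoch(model, loader, optimizer):
    """One epoch of training on the worst-case (PGD) examples only."""
    model.train()
    for x, y in loader:
        x_adv = pgd(model, x, y)                 # inner maximization
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)  # outer minimization
        loss.backward()
        optimizer.step()
```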
- Differentiable Search of Accurate and Robust Architectures [22.435774101990752]
Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks.
Adversarial training has been drawing increasing attention because of its simplicity and effectiveness.
We propose DSARA to automatically search for neural architectures that are accurate and robust after adversarial training (a sketch of the differentiable-search ingredient follows this entry).
arXiv Detail & Related papers (2022-12-28T08:36:36Z)
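The "differentiable search" DSARA builds on is a DARTS-style relaxation: each edge computes a softmax-weighted mixture of candidate operations, and the mixture weights are optimized by gradient descent. The sketch below shows only that generic ingredient; the candidate operations are assumptions, not the paper's actual robustness-oriented search space.

```python
# Generic DARTS-style mixed operation, the differentiable-search ingredient
# DSARA builds on; the candidate ops are illustrative, not the paper's space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted sum of candidate ops; `alpha` holds the architecture
    parameters, trained by gradient descent alongside the network weights."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Conv2d(channels, channels, kernel_size=5, padding=2),
            nn.Identity(),                      # skip connection
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# After search, the op with the largest alpha on each edge is kept to form
# the discrete architecture; DSARA additionally selects architectures that
# remain accurate and robust after adversarial training.
```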
- Towards Robust Dataset Learning [90.2590325441068]
We propose a principled tri-level optimization formulation of the robust dataset learning problem.
Under an abstraction model that characterizes robust vs. non-robust features, the proposed method provably learns a robust dataset.
arXiv Detail & Related papers (2022-11-19T17:06:10Z)
- A Battle of Network Structures: An Empirical Study of CNN, Transformer, and MLP [121.35904748477421]
Convolutional neural networks (CNNs) are the dominant deep neural network (DNN) architecture for computer vision.
Transformer- and multi-layer perceptron (MLP)-based models, such as the Vision Transformer and MLP-Mixer, have started to lead new trends.
In this paper, we conduct empirical studies on these DNN structures and try to understand their respective pros and cons.
arXiv Detail & Related papers (2021-08-30T06:09:02Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of backbone CNNs that already achieve satisfactory accuracy.
Under a minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- DSRNA: Differentiable Search of Robust Neural Architectures [11.232234265070753]
In deep learning applications, the architectures of deep neural networks are crucial in achieving high accuracy.
We propose methods to perform differentiable search of robust neural architectures.
Our methods are more robust to various norm-bounded attacks than several robust NAS baselines.
arXiv Detail & Related papers (2020-12-11T04:52:54Z)
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architectures and training recipes jointly, guiding both sample selection and ranking (a predictor sketch follows this entry).
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 makes up a family of state-of-the-art compact neural networks that outperform both automatically and manually designed competitors.
arXiv Detail & Related papers (2020-06-03T05:20:21Z)
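For the predictor-based search above, here is a minimal, hedged sketch: an MLP that scores an (architecture encoding, training-recipe encoding) pair so that evolutionary search can rank candidates without training them. The encoding dimensions and network shape are assumptions, not the paper's implementation.

```python
# Hedged sketch of a joint architecture-recipe accuracy predictor in the
# spirit of FBNetV3; encoding sizes and MLP shape are assumptions.
import torch
import torch.nn as nn

class AccuracyPredictor(nn.Module):
    """Maps a concatenated (architecture, recipe) encoding to a scalar score."""
    def __init__(self, arch_dim=32, recipe_dim=8, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(arch_dim + recipe_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, arch, recipe):
        return self.mlp(torch.cat([arch, recipe], dim=-1)).squeeze(-1)

# During evolutionary search, candidate pairs are ranked by predicted
# accuracy instead of being trained, which is what lets the search run
# in CPU minutes.
predictor = AccuracyPredictor()
scores = predictor(torch.rand(4, 32), torch.rand(4, 8))  # 4 candidate pairs
```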