Differentiable Search of Accurate and Robust Architectures
- URL: http://arxiv.org/abs/2212.14049v1
- Date: Wed, 28 Dec 2022 08:36:36 GMT
- Title: Differentiable Search of Accurate and Robust Architectures
- Authors: Yuwei Ou, Xiangning Xie, Shangce Gao, Yanan Sun, Kay Chen Tan,
Jiancheng Lv
- Abstract summary: Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks.
Among the proposed defenses, adversarial training has been drawing increasing attention because of its simplicity and effectiveness.
We propose DSARA to automatically search for neural architectures that are accurate and robust after adversarial training.
- Score: 22.435774101990752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are found to be vulnerable to adversarial
attacks, and various methods have been proposed for the defense. Among these
methods, adversarial training has been drawing increasing attention because of
its simplicity and effectiveness. However, the performance of adversarial
training is greatly limited by the architectures of the target DNNs, which often
leaves the resulting DNNs with poor accuracy and unsatisfactory robustness. To
address this problem, we propose DSARA to automatically search for the neural
architectures that are accurate and robust after adversarial training. In
particular, we design a novel cell-based search space specifically for adversarial
training, which improves the accuracy and the robustness upper bound of the
searched architectures by carefully designing the placement of the cells and
the proportional relationship of the filter numbers. Then we propose a
two-stage search strategy to search for both accurate and robust neural
architectures. At the first stage, the architecture parameters are optimized to
minimize the adversarial loss, which makes full use of the effectiveness of the
adversarial training in enhancing the robustness. At the second stage, the
architecture parameters are optimized to minimize both the natural loss and the
adversarial loss utilizing the proposed multi-objective adversarial training
method, so that the searched neural architectures are both accurate and robust.
We evaluate the proposed algorithm under natural data and various adversarial
attacks, and the results reveal the superiority of the proposed method in terms
of both accuracy and robustness. We also conclude that accurate and robust
neural architectures tend to deploy very different structures near the input
and the output, which has great practical significance for both hand-crafting
and automatically designing accurate and robust neural architectures.
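To make the two-stage search strategy concrete, the following is a minimal PyTorch-style sketch of a single search step. It is an illustration under stated assumptions, not the paper's exact formulation: the PGD attack settings, the equal weighting of the natural and adversarial losses in the second stage, and the supernet interface (separate optimizers for the network weights and the architecture parameters) are all assumptions introduced here.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, step_size=2 / 255, steps=7):
    """Craft adversarial examples with a basic L-infinity PGD attack."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + step_size * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv.detach()


def search_step(supernet, weight_opt, arch_opt, x, y, stage):
    """One joint update of the supernet weights and the architecture parameters.

    Stage 1: the architecture parameters follow the adversarial loss only.
    Stage 2: they follow a weighted sum of the natural and adversarial losses
    (the multi-objective formulation; equal weights are an assumption here).
    """
    x_adv = pgd_attack(supernet, x, y)

    # Adversarially train the supernet weights (standard adversarial training).
    weight_opt.zero_grad()
    F.cross_entropy(supernet(x_adv), y).backward()
    weight_opt.step()

    # Update the architecture parameters according to the current stage.
    arch_opt.zero_grad()
    adv_loss = F.cross_entropy(supernet(x_adv), y)
    if stage == 1:
        arch_loss = adv_loss
    else:
        nat_loss = F.cross_entropy(supernet(x), y)
        arch_loss = 0.5 * nat_loss + 0.5 * adv_loss
    arch_loss.backward()
    arch_opt.step()
    return arch_loss.item()
```

In a DARTS-style setup, `weight_opt` would hold the convolutional filters and `arch_opt` the cell mixing weights; running stage 1 first exploits adversarial training for robustness, and stage 2 then balances the natural and adversarial objectives.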
Related papers
- Reinforced Compressive Neural Architecture Search for Versatile Adversarial Robustness [32.914986455418]
We propose a Reinforced Compressive Neural Architecture Search (RC-NAS) for Versatile Adversarial Robustness.
Specifically, we define task settings that compose datasets, adversarial attacks, and teacher network information.
Experiments show that our framework could achieve adaptive compression towards different initial teacher networks, datasets, and adversarial attacks.
arXiv Detail & Related papers (2024-06-10T20:59:52Z) - Towards Accurate and Robust Architectures via Neural Architecture Search [3.4014222238829497]
Adversarial training improves accuracy and robustness by adjusting the weight connections affiliated with the architecture.
We propose ARNAS to search for accurate and robust architectures for adversarial training.
arXiv Detail & Related papers (2024-05-09T02:16:50Z) - A Comprehensive Study on Robustness of Image Classification Models:
Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve the new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z) - Bi-fidelity Evolutionary Multiobjective Search for Adversarially Robust
Deep Neural Architectures [19.173285459139592]
This paper proposes a bi-fidelity multiobjective neural architecture search approach.
In addition to a low-fidelity performance predictor, we leverage an auxiliary objective, the value of which is the output of a surrogate model trained with high-fidelity evaluations.
The effectiveness of the proposed approach is confirmed by extensive experiments conducted on CIFAR-10, CIFAR-100 and SVHN datasets.
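The bi-fidelity idea above can be pictured with a small sketch: each candidate architecture receives two objective values, one from a cheap low-fidelity evaluation and one from a surrogate trained on high-fidelity evaluations, and the search keeps the non-dominated candidates. The numbers and the simple Pareto filter below are illustrative assumptions, not the cited method.

```python
from typing import List, Tuple


def pareto_front(candidates: List[Tuple[float, float]]) -> List[int]:
    """Return the indices of non-dominated candidates (both objectives maximized)."""
    front = []
    for i, (a_i, b_i) in enumerate(candidates):
        dominated = any(
            a_j >= a_i and b_j >= b_i and (a_j > a_i or b_j > b_i)
            for j, (a_j, b_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append(i)
    return front


# Each pair: (low-fidelity robust accuracy, surrogate-predicted robust accuracy).
print(pareto_front([(0.41, 0.43), (0.39, 0.47), (0.35, 0.40)]))  # -> [0, 1]
```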
arXiv Detail & Related papers (2022-07-12T05:26:09Z) - Neural Architecture Search for Speech Emotion Recognition [72.1966266171951]
We propose to apply neural architecture search (NAS) techniques to automatically configure the SER models.
We show that NAS can improve SER performance (54.89% to 56.28%) while maintaining model parameter sizes.
arXiv Detail & Related papers (2022-03-31T10:16:10Z) - Exploring Architectural Ingredients of Adversarially Robust Deep Neural
Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z) - RobustART: Benchmarking Robustness on Architecture Design and Training
Techniques [170.3297213957074]
Deep neural networks (DNNs) are vulnerable to adversarial noises.
There are no comprehensive studies of how architecture design and training techniques affect robustness.
We propose the first comprehensive investigation benchmark on ImageNet.
arXiv Detail & Related papers (2021-09-11T08:01:14Z) - Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z) - A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via
Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
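As a rough illustration of a "slow start, fast decay" learning rate schedule: the warmup length and decay rate below are assumptions for the sketch, not values from the cited work.

```python
def slow_start_fast_decay(step, base_lr=0.01, warmup_steps=500, decay=0.99):
    """Ramp the learning rate up slowly, then decay it quickly afterwards."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps   # linear warmup (slow start)
    return base_lr * decay ** (step - warmup_steps)  # exponential decay (fast decay)


# Early steps use a small fraction of base_lr; after warmup the rate drops quickly.
print(slow_start_fast_decay(0), slow_start_fast_decay(499), slow_start_fast_decay(700))
```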
arXiv Detail & Related papers (2020-12-25T20:50:15Z) - DSRNA: Differentiable Search of Robust Neural Architectures [11.232234265070753]
In deep learning applications, the architectures of deep neural networks are crucial in achieving high accuracy.
We propose methods to perform differentiable search of robust neural architectures.
Our methods are more robust to various norm-bound attacks than several robust NAS baselines.
arXiv Detail & Related papers (2020-12-11T04:52:54Z)