DSRNA: Differentiable Search of Robust Neural Architectures
- URL: http://arxiv.org/abs/2012.06122v1
- Date: Fri, 11 Dec 2020 04:52:54 GMT
- Title: DSRNA: Differentiable Search of Robust Neural Architectures
- Authors: Ramtin Hosseini, Xingyi Yang and Pengtao Xie
- Abstract summary: In deep learning applications, the architectures of deep neural networks are crucial in achieving high accuracy.
We propose methods to perform differentiable search of robust neural architectures.
Our methods are more robust to various norm-bound attacks than several robust NAS baselines.
- Score: 11.232234265070753
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In deep learning applications, the architectures of deep neural networks are
crucial in achieving high accuracy. Many methods have been proposed to search
for high-performance neural architectures automatically. However, these
searched architectures are prone to adversarial attacks. A small perturbation
of the input data can cause such an architecture's prediction outcomes to
change significantly. To address this problem, we propose methods to perform
differentiable search of robust neural architectures. In our methods, two
differentiable metrics are defined to measure architectures' robustness, based
on certified lower bound and Jacobian norm bound. Then we search for robust
architectures by maximizing the robustness metrics. Unlike previous
approaches, which improve architectures' robustness only implicitly by
performing adversarial training or injecting random noise, our methods
explicitly and directly maximize the robustness metrics to harvest robust
architectures. On CIFAR-10, ImageNet, and MNIST, we perform game-based
evaluation and verification-based evaluation on the robustness of our methods.
The experimental results show that our methods 1) are more robust to various
norm-bound attacks than several robust NAS baselines; 2) are more accurate than
baselines when there are no attacks; 3) have significantly higher certified
lower bounds than baselines.
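The Jacobian-norm metric mentioned in the abstract can be illustrated with a minimal sketch (not the paper's implementation): in a DARTS-style mixed operation, the input Jacobian is differentiable in the architecture parameters, so gradient descent on its norm steers the search toward smoother, more robust operations. All names and the toy linear ops below are illustrative assumptions.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Two candidate ops as linear maps; for a linear op, its Jacobian is
# the matrix itself. Op 0 is "smoother" (smaller Jacobian norm).
W = np.stack([0.5 * np.eye(3), 3.0 * np.eye(3)])  # shape (2, 3, 3)

alpha = np.zeros(2)   # DARTS-style architecture mixing logits
lr = 0.5

for _ in range(100):
    p = softmax(alpha)
    J = np.tensordot(p, W, axes=1)   # mixed Jacobian: sum_k p_k * W_k
    # Robustness penalty: squared Frobenius norm of the input Jacobian.
    # d||J||_F^2 / dp_k = 2 <J, W_k>
    g_p = 2.0 * np.array([np.sum(J * W[k]) for k in range(2)])
    # Backprop through softmax: g_a_j = p_j (g_p_j - sum_i p_i g_p_i)
    g_a = p * (g_p - np.dot(p, g_p))
    alpha -= lr * g_a                # minimize the Jacobian norm

p = softmax(alpha)                   # weight shifts to the smoother op
```

After the descent, nearly all mixing weight lands on the low-Jacobian-norm op, which is the qualitative behavior of searching by maximizing a differentiable robustness metric.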
Related papers
- Neural Architecture Design and Robustness: A Dataset [11.83842808044211]
We introduce a database on neural architecture design and robustness evaluations.
We evaluate all these networks on a range of common adversarial attacks and corruption types.
We find that carefully crafting the topology of a network can have substantial impact on its robustness.
arXiv Detail & Related papers (2023-06-11T16:02:14Z)
- Efficient Search of Comprehensively Robust Neural Architectures via Multi-fidelity Evaluation [1.9100854225243937]
We propose a novel efficient search of comprehensively robust neural architectures via multi-fidelity evaluation (ES-CRNA-ME)
Specifically, we first search for comprehensively robust architectures under multiple types of evaluations using the weight-sharing-based NAS method.
We reduce the number of robustness evaluations by the correlation analysis, which can incorporate similar evaluations and decrease the evaluation cost.
arXiv Detail & Related papers (2023-05-12T08:28:58Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve the new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
- Differentiable Search of Accurate and Robust Architectures [22.435774101990752]
Adversarial training has been drawing increasing attention because of its simplicity and effectiveness.
Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks.
We propose DSARA to automatically search for the neural architectures that are accurate and robust after adversarial training.
arXiv Detail & Related papers (2022-12-28T08:36:36Z)
- Searching for Robust Neural Architectures via Comprehensive and Reliable Evaluation [6.612134996737988]
We propose a novel framework, called Auto Adversarial Attack and Defense (AAAD), where we employ neural architecture search methods.
We consider four types of robustness evaluations, including adversarial noise, natural noise, system noise and quantified metrics.
The empirical results on the CIFAR10 dataset show that the searched efficient attack could help find more robust architectures.
arXiv Detail & Related papers (2022-03-07T04:45:05Z)
- RobustART: Benchmarking Robustness on Architecture Design and Training Techniques [170.3297213957074]
Deep neural networks (DNNs) are vulnerable to adversarial noises.
There are no comprehensive studies of how architecture design and training techniques affect robustness.
We propose the first comprehensiveness investigation benchmark on ImageNet.
arXiv Detail & Related papers (2021-09-11T08:01:14Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
Under a minimal computational overhead, the dilated architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Rethinking Architecture Selection in Differentiable NAS [74.61723678821049]
Differentiable Neural Architecture Search is one of the most popular NAS methods for its search efficiency and simplicity.
We propose an alternative perturbation-based architecture selection that directly measures each operation's influence on the supernet.
We find that several failure modes of DARTS can be greatly alleviated with the proposed selection method.
arXiv Detail & Related papers (2021-08-10T00:53:39Z)
- iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients [75.41173109807735]
Differentiable ARchiTecture Search (DARTS) has recently become the mainstream approach to neural architecture search (NAS).
We tackle the hypergradient computation in DARTS based on the implicit function theorem.
We show that the architecture optimisation with the proposed method, named iDARTS, is expected to converge to a stationary point.
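The implicit-gradient idea summarized above can be sketched on a toy bilevel problem (an illustrative example, not the iDARTS algorithm): the implicit function theorem gives the derivative of the inner solution with respect to the outer (architecture) variable without unrolling the inner optimization.

```python
# Toy bilevel problem:
#   inner: w*(a) = argmin_w f(w, a),  f(w, a) = 0.5 * (w - 3a)^2
#   outer: L(a)  = 0.5 * w*(a)^2
# Closed form: w*(a) = 3a, so dL/da = w*(a) * 3 = 9a.
a = 2.0
w = 3.0 * a                      # exact inner solution for this toy f

# Implicit function theorem: dw*/da = -(d2f/dw2)^(-1) * d2f/(dw da)
d2f_dw2 = 1.0                    # second derivative of f in w
d2f_dwda = -3.0                  # mixed second derivative of f
dw_da = -d2f_dwda / d2f_dw2      # = 3.0

# Hypergradient via the chain rule: dL/da = (dL/dw*) * (dw*/da)
hypergrad = w * dw_da            # matches the closed form 9a
```

The same formula applies when the inner problem is network-weight training and the outer variables are architecture parameters; in practice the inverse Hessian-vector product is only approximated.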
arXiv Detail & Related papers (2021-06-21T00:44:11Z)
- Adversarially Robust Neural Architectures [43.74185132684662]
This paper aims to improve the adversarial robustness of the network from the architecture perspective with NAS framework.
We explore the relationship among adversarial robustness, Lipschitz constant, and architecture parameters.
Our algorithm empirically achieves the best performance among all the models under various attacks on different datasets.
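The link between adversarial robustness and the Lipschitz constant noted above can be illustrated with a minimal sketch (setup and matrices are illustrative assumptions, not from the paper): for a linear network, the product of per-layer spectral norms upper-bounds the network's Lipschitz constant, so bounding layer norms bounds how much a small input perturbation can move the output.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-layer linear network f(x) = W2 @ W1 @ x
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

# Upper bound on the l2 Lipschitz constant: product of spectral norms,
# since ||W2 W1 z|| <= ||W2||_2 * ||W1||_2 * ||z||.
bound = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

# Empirical check: ||f(x) - f(y)|| <= bound * ||x - y||
x, y = rng.normal(size=3), rng.normal(size=3)
ratio = np.linalg.norm(W2 @ W1 @ (x - y)) / np.linalg.norm(x - y)
assert ratio <= bound + 1e-9
```

A smaller Lipschitz bound certifies that bounded input perturbations produce bounded output changes, which is the mechanism robustness-aware NAS methods exploit.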
arXiv Detail & Related papers (2020-09-02T08:52:15Z)
- Off-Policy Reinforcement Learning for Efficient and Effective GAN Architecture Search [50.40004966087121]
We introduce a new reinforcement learning based neural architecture search (NAS) methodology for generative adversarial network (GAN) architecture search.
The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling.
We exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies.
arXiv Detail & Related papers (2020-07-17T18:29:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.