Tiny Adversarial Multi-Objective Oneshot Neural Architecture Search
- URL: http://arxiv.org/abs/2103.00363v1
- Date: Sun, 28 Feb 2021 00:54:09 GMT
- Title: Tiny Adversarial Multi-Objective Oneshot Neural Architecture Search
- Authors: Guoyang Xie, Jinbao Wang, Guo Yu, Feng Zheng, Yaochu Jin
- Abstract summary: Most neural network models deployed in mobile devices are tiny. However, tiny neural networks are commonly very vulnerable to attacks.
Our work focuses on how to improve the robustness of tiny neural networks without seriously degrading clean accuracy under mobile-level resources.
- Score: 35.362883630015354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to limited computational cost and energy consumption, most neural network
models deployed in mobile devices are tiny. However, tiny neural networks are
commonly very vulnerable to attacks. Prior work has shown that larger
models tend to be more robust, but little research focuses on how to
enhance the robustness of tiny neural networks. Our work focuses on how to
improve the robustness of tiny neural networks without seriously degrading
clean accuracy under mobile-level resource constraints. To this end, we propose a
multi-objective one-shot neural architecture search (NAS) algorithm to obtain
the best trade-off networks in terms of adversarial accuracy, clean
accuracy, and model size. Specifically, we design a novel search space based
on new tiny blocks and channels to balance model size and adversarial
performance. Moreover, since the supernet significantly affects the performance
of subnets in our NAS algorithm, we provide insights into how the supernet
helps to obtain the best subnet under white-box adversarial attacks.
Concretely, we explore a new adversarial training paradigm by analyzing
adversarial transferability, the width of the supernet, and the difference
between training the subnets from scratch and fine-tuning them. Finally, we
perform a statistical analysis of the layer-wise combinations of blocks and
channels on the first non-dominated front, which can serve as a guideline for
designing tiny neural network architectures that are resilient to adversarial
perturbations.
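As a rough illustration of the selection step, the sketch below scores a population of candidate subnets on the three objectives and extracts the first non-dominated front. The evaluation function is a random stand-in, not the paper's supernet-based evaluator; all names here are hypothetical.

```python
import random

# Hypothetical stand-in for supernet-based evaluation: each candidate
# subnet is scored on the paper's three objectives. In the real method
# the scores would come from weight-sharing subnets evaluated under
# white-box attacks, not from a random generator.
def evaluate_subnet(seed):
    rng = random.Random(seed)
    clean_acc = rng.uniform(0.6, 0.9)   # clean accuracy: maximize
    adv_acc = rng.uniform(0.2, 0.5)     # accuracy under attack: maximize
    size_mb = rng.uniform(1.0, 8.0)     # model size: minimize (mobile budget)
    return clean_acc, adv_acc, size_mb

def dominates(a, b):
    """True if candidate a is no worse than b on every objective and
    strictly better on at least one."""
    no_worse = a[0] >= b[0] and a[1] >= b[1] and a[2] <= b[2]
    strictly = a[0] > b[0] or a[1] > b[1] or a[2] < b[2]
    return no_worse and strictly

def first_front(points):
    """Candidates no other candidate dominates: the first
    non-dominated (Pareto) front analyzed in the paper."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

candidates = [evaluate_subnet(s) for s in range(200)]
for clean, adv, size in sorted(first_front(candidates), key=lambda p: p[2]):
    print(f"clean={clean:.3f}  adv={adv:.3f}  size={size:.2f} MB")
```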
Related papers
- Understanding Adversarial Robustness from Feature Maps of Convolutional Layers [23.42376264664302]
The anti-perturbation ability of a neural network mainly relies on two factors: its model capacity and the anti-perturbation ability of its feature maps.
We study the anti-perturbation ability of the network from the feature maps of convolutional layers.
Non-trivial improvements in terms of both natural accuracy and adversarial robustness can be achieved under various attack and defense mechanisms.
arXiv Detail & Related papers (2022-02-25T00:14:59Z)
- A Layer-wise Adversarial-aware Quantization Optimization for Improving Robustness [4.794745827538956]
We find that adversarially-trained neural networks are more vulnerable to quantization loss than plain models.
We propose a layer-wise adversarial-aware quantization method, using the Lipschitz constant to choose the best quantization parameter settings for a neural network.
Experimental results show that our method effectively and efficiently improves the robustness of quantized adversarially-trained neural networks.
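The abstract does not give the exact selection rule, so the following is only a plausible sketch: it uses the spectral norm of each weight matrix as the layer's Lipschitz estimate and a hypothetical rule that keeps more bits in high-Lipschitz layers. `assign_bits` and its thresholding are assumptions, not the paper's method.

```python
import numpy as np

def lipschitz_estimate(weight):
    """Spectral norm of the weight matrix: an upper bound on a linear
    layer's Lipschitz constant under the L2 norm."""
    return np.linalg.norm(weight, ord=2)

def assign_bits(weights, low=4, high=8):
    """Hypothetical allocation rule: layers whose Lipschitz estimate is
    above the median amplify perturbations more, so they keep more bits."""
    lips = [lipschitz_estimate(w) for w in weights]
    median = np.median(lips)
    return [high if l > median else low for l in lips]

def quantize(weight, bits):
    """Uniform symmetric quantization to the given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(weight).max() / qmax
    return np.round(weight / scale) * scale

rng = np.random.default_rng(0)
layers = [rng.normal(size=(64, 64)) * s for s in (0.05, 0.2, 0.05)]
for w, b in zip(layers, assign_bits(layers)):
    err = np.abs(w - quantize(w, b)).mean()
    print(f"bits={b}  lipschitz={lipschitz_estimate(w):.2f}  mean|err|={err:.5f}")
```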
arXiv Detail & Related papers (2021-10-23T22:11:30Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
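As a minimal sketch of the compression idea (not the thesis's exact construction), the class below constrains a dense layer to a Toeplitz matrix, whose entries are constant along each diagonal, cutting an n x n layer from n^2 parameters to 2n - 1.

```python
import numpy as np

class ToeplitzLinear:
    """Dense n x n layer constrained to a Toeplitz matrix: entry (i, j)
    depends only on the offset i - j, so only 2n - 1 parameters are
    stored instead of n*n. This shows the compactness argument only;
    the robustness guarantees in the thesis rely on further structure
    not sketched here."""

    def __init__(self, n, rng):
        self.n = n
        self.params = rng.normal(scale=1.0 / np.sqrt(n), size=2 * n - 1)

    def matrix(self):
        n = self.n
        idx = np.arange(n)
        # Index the parameter vector by the diagonal offset i - j.
        return self.params[idx[:, None] - idx[None, :] + n - 1]

    def __call__(self, x):
        return self.matrix() @ x

rng = np.random.default_rng(0)
layer = ToeplitzLinear(512, rng)
print("parameters:", layer.params.size, "vs dense:", 512 * 512)
y = layer(rng.normal(size=512))
```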
arXiv Detail & Related papers (2021-09-02T13:58:12Z)
- Pruning in the Face of Adversaries [0.0]
We evaluate the impact of neural network pruning on the adversarial robustness against L-0, L-2 and L-infinity attacks.
Our results confirm that neural network pruning and adversarial robustness are not mutually exclusive.
We extend our analysis to situations that incorporate additional assumptions on the adversarial scenario and show that depending on the situation, different strategies are optimal.
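A hedged sketch of one evaluation protocol consistent with this setup: global magnitude pruning at several sparsity levels, with a placeholder `robust_accuracy` standing in for the L-0/L-2/L-infinity attacks. The helper and its settings are hypothetical, not the paper's.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the globally smallest-magnitude fraction of weights."""
    flat = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.quantile(flat, sparsity)
    return [np.where(np.abs(w) >= threshold, w, 0.0) for w in weights]

def robust_accuracy(weights, norm, eps):
    """Hypothetical placeholder: in the study this would run an attack
    with the given norm against the pruned network."""
    raise NotImplementedError

rng = np.random.default_rng(0)
weights = [rng.normal(size=(128, 128)) for _ in range(4)]
for sparsity in (0.5, 0.8, 0.9):
    pruned = magnitude_prune(weights, sparsity)
    kept = sum(int((w != 0).sum()) for w in pruned)
    print(f"sparsity={sparsity:.0%}: {kept} weights kept")
    # for norm, eps in (("L0", 10), ("L2", 0.5), ("Linf", 8 / 255)):
    #     print(norm, robust_accuracy(pruned, norm, eps))
```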
arXiv Detail & Related papers (2021-08-19T09:06:16Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
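A much-simplified sketch of the growing step (not the full firefly selection procedure): new hidden neurons are added with tiny incoming weights and zero outgoing weights, so the network's function is preserved at the moment of growth and gradient descent then decides which new neurons are worth keeping.

```python
import numpy as np

def widen(W_in, W_out, n_new, eps, rng):
    """Grow a hidden layer by n_new neurons: small random incoming
    weights, zero outgoing weights, so the output is unchanged at birth.
    Simplified sketch, not the paper's candidate-selection step."""
    d_in, d_out = W_in.shape[0], W_out.shape[1]
    new_in = eps * rng.normal(size=(d_in, n_new))  # tiny perturbation
    new_out = np.zeros((n_new, d_out))             # exact output preservation
    W_in = np.concatenate([W_in, new_in], axis=1)
    W_out = np.concatenate([W_out, new_out], axis=0)
    return W_in, W_out

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 32)), rng.normal(size=(32, 10))
x = rng.normal(size=16)
before = np.tanh(x @ W1) @ W2
W1, W2 = widen(W1, W2, n_new=8, eps=1e-3, rng=rng)
after = np.tanh(x @ W1) @ W2
print("hidden width:", W1.shape[1],
      "max output change:", np.abs(after - before).max())
```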
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- Improving Neural Network Robustness through Neighborhood Preserving Layers [0.751016548830037]
We demonstrate a novel neural network architecture that can incorporate neighborhood preserving layers and can be trained efficiently.
We empirically show that the designed network architecture is more robust against state-of-the-art gradient-descent-based attacks.
arXiv Detail & Related papers (2021-01-28T01:26:35Z)
- Do Wider Neural Networks Really Help Adversarial Robustness? [92.8311752980399]
We show that the model robustness is closely related to the tradeoff between natural accuracy and perturbation stability.
We propose a new Width Adjusted Regularization (WAR) method that adaptively enlarges $\lambda$ on wide models.
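The summary does not state the scaling rule, so the sketch below only illustrates the idea under an assumed linear schedule: a TRADES-style objective whose stability weight lambda grows with the model's width multiplier.

```python
def war_loss(natural_loss, stability_loss, width_multiplier, base_lambda=6.0):
    """TRADES-style objective with Width Adjusted Regularization:
    total = natural term + lambda * perturbation-stability term,
    where lambda is enlarged on wider models. The linear scaling here
    is an illustrative assumption, not the paper's exact rule."""
    lam = base_lambda * width_multiplier
    return natural_loss + lam * stability_loss, lam

for width in (0.5, 1.0, 2.0):
    total, lam = war_loss(natural_loss=0.30, stability_loss=0.12,
                          width_multiplier=width)
    print(f"width x{width}: lambda={lam:.1f}, loss={total:.2f}")
```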
arXiv Detail & Related papers (2020-10-03T04:46:17Z)
- Firearm Detection and Segmentation Using an Ensemble of Semantic Neural Networks [62.997667081978825]
We present a weapon detection system based on an ensemble of semantic Convolutional Neural Networks.
A set of simpler neural networks dedicated to specific tasks requires less computational resources and can be trained in parallel.
The overall output of the system, given by aggregating the outputs of the individual networks, can be tuned by the user to trade off false positives against false negatives.
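A minimal sketch of such a tunable aggregation, assuming averaged per-network detection scores and a user-set threshold; the numbers are made up for illustration.

```python
import numpy as np

def aggregate(scores, threshold):
    """Average the per-network detection scores and flag a detection
    when the mean exceeds a user-chosen threshold: lowering it trades
    false negatives for false positives, raising it does the opposite."""
    return np.mean(scores, axis=0) >= threshold

# Hypothetical per-image scores from three task-specific networks.
scores = np.array([[0.9, 0.2, 0.6],
                   [0.8, 0.1, 0.4],
                   [0.7, 0.3, 0.5]])
for t in (0.3, 0.5, 0.7):
    print(f"threshold={t}: detections={aggregate(scores, t).astype(int)}")
```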
arXiv Detail & Related papers (2020-02-11T13:58:16Z)