Robust Neural Architecture Search
- URL: http://arxiv.org/abs/2304.02845v2
- Date: Mon, 10 Apr 2023 07:10:01 GMT
- Title: Robust Neural Architecture Search
- Authors: Xunyu Zhu, Jian Li, Yong Liu, Weiping Wang
- Abstract summary: We propose a novel NAS method, Robust Neural Architecture Search (RNAS)
By designing a regularization term that balances accuracy and robustness, RNAS generates architectures with both high accuracy and good robustness.
Experiments show that RNAS achieves state-of-the-art (SOTA) performance on image classification and under adversarial attacks.
- Score: 19.214462477848535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Architecture Search (NAS) has become increasingly popular in
recent years. However, NAS-generated models tend to be more vulnerable to
various malicious attacks. Many robust NAS methods leverage adversarial
training to enhance the robustness of NAS-generated models; however, they
neglect the natural accuracy of those models. In this paper, we propose a
novel NAS method, Robust Neural Architecture Search (RNAS). By designing a
regularization term that balances accuracy and robustness, RNAS generates
architectures with both high accuracy and good robustness. To reduce search
cost, we further propose to use noise examples instead of adversarial examples
as inputs for the architecture search. Extensive experiments show that RNAS
achieves state-of-the-art (SOTA) performance on image classification and under
adversarial attacks, which illustrates that the proposed RNAS achieves a good
trade-off between robustness and accuracy.
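A minimal sketch of the objective described above, assuming a PyTorch-style classifier: the Gaussian noise, the KL-divergence regularizer, and the weight lambda_reg are illustrative assumptions, not the exact formulation used in RNAS.

```python
import torch
import torch.nn.functional as F

def rnas_style_loss(model, x, y, sigma=0.1, lambda_reg=1.0):
    """Sketch of an accuracy/robustness-balanced search objective.

    Assumptions (not taken verbatim from the paper): Gaussian noise with
    standard deviation `sigma` stands in for adversarial examples, and a
    KL divergence between clean and noisy predictions plays the role of
    the robustness regularization term, weighted by `lambda_reg`.
    """
    logits_clean = model(x)
    acc_loss = F.cross_entropy(logits_clean, y)    # natural-accuracy term

    x_noisy = x + sigma * torch.randn_like(x)      # cheap noise example
    logits_noisy = model(x_noisy)

    # Regularization term: penalize prediction drift under perturbation.
    reg = F.kl_div(F.log_softmax(logits_noisy, dim=1),
                   F.softmax(logits_clean, dim=1),
                   reduction="batchmean")

    return acc_loss + lambda_reg * reg
```

In a differentiable architecture search, such a loss could replace plain cross-entropy when updating both the supernet weights and the architecture parameters, with lambda_reg controlling the accuracy/robustness trade-off.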
Related papers
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z) - FlatNAS: optimizing Flatness in Neural Architecture Search for
Out-of-Distribution Robustness [3.724847012963521]
This study introduces a novel NAS solution, called Flat Neural Architecture Search (FlatNAS)
It explores the interplay between a novel figure of merit based on robustness to weight perturbations and single NN optimization with Sharpness-Aware Minimization (SAM)
The OOD robustness of the NAS-designed models is evaluated by focusing on robustness to input data corruptions, using popular benchmark datasets in the literature.
arXiv Detail & Related papers (2024-02-29T12:33:14Z) - Generalizable Lightweight Proxy for Robust NAS against Diverse
Perturbations [59.683234126055694]
Recent neural architecture search (NAS) frameworks have been successful in finding optimal architectures for given conditions.
We propose a novel lightweight robust zero-cost proxy that considers the consistency across features, parameters, and gradients of both clean and perturbed images.
Our approach facilitates an efficient and rapid search for neural architectures capable of learning generalizable features that exhibit robustness across diverse perturbations (a minimal sketch of such a consistency-based proxy follows after this list).
arXiv Detail & Related papers (2023-06-08T08:34:26Z) - PRE-NAS: Predictor-assisted Evolutionary Neural Architecture Search [34.06028035262884]
We propose a novel evolution-based NAS strategy, Predictor-assisted E-NAS (PRE-NAS)
PRE-NAS leverages new evolutionary search strategies and integrates high-fidelity weight inheritance over generations.
Experiments on NAS-Bench-201 and DARTS search spaces show that PRE-NAS can outperform state-of-the-art NAS methods.
arXiv Detail & Related papers (2022-04-27T06:40:39Z) - Searching Efficient Model-guided Deep Network for Image Denoising [61.65776576769698]
We present a novel approach by connecting model-guided design with NAS (MoD-NAS)
MoD-NAS employs a highly reusable width search strategy and a densely connected search block to automatically select the operations of each layer.
Experimental results on several popular datasets show that our MoD-NAS has achieved even better PSNR performance than current state-of-the-art methods.
arXiv Detail & Related papers (2021-04-06T14:03:01Z) - AdvantageNAS: Efficient Neural Architecture Search with Credit
Assignment [23.988393741948485]
We propose a novel search strategy for one-shot and sparse propagation NAS, namely AdvantageNAS.
AdvantageNAS is a gradient-based approach that improves the search efficiency by introducing credit assignment in gradient estimation for architecture updates.
Experiments on the NAS-Bench-201 and PTB datasets show that AdvantageNAS discovers an architecture with higher performance under a limited time budget.
arXiv Detail & Related papers (2020-12-11T05:45:03Z) - Binarized Neural Architecture Search for Efficient Object Recognition [120.23378346337311]
Binarized neural architecture search (BNAS) produces extremely compressed models to reduce huge computational cost on embedded devices for edge computing.
An accuracy of 96.53% vs. 97.22% is achieved on the CIFAR-10 dataset, but with a significantly compressed model, and a 40% faster search than the state-of-the-art PC-DARTS.
arXiv Detail & Related papers (2020-09-08T15:51:23Z) - Hyperparameter Optimization in Neural Networks via Structured Sparse
Recovery [54.60327265077322]
We study two important problems in the automated design of neural networks through the lens of sparse recovery methods.
In the first part of this paper, we establish a novel connection between HPO and structured sparse recovery.
In the second part of this paper, we establish a connection between NAS and structured sparse recovery.
arXiv Detail & Related papers (2020-07-07T00:57:09Z) - Powering One-shot Topological NAS with Stabilized Share-parameter Proxy [65.09967910722932]
The one-shot NAS method has attracted much interest from the research community due to its remarkable training efficiency and capacity to discover high-performance models.
In this work, we try to enhance the one-shot NAS by exploring high-performing network architectures in our large-scale Topology Augmented Search Space.
The proposed method achieves state-of-the-art performance under Multiply-Adds (MAdds) constraint on ImageNet.
arXiv Detail & Related papers (2020-05-21T08:18:55Z)
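The consistency idea from the "Generalizable Lightweight Proxy for Robust NAS against Diverse Perturbations" entry above can be pictured with the following hypothetical sketch, which scores an untrained candidate network by how well its parameter gradients agree on clean versus noise-perturbed inputs; the Gaussian perturbation and the cosine-similarity score are assumptions for illustration only, not the proxy defined in that paper.

```python
import torch
import torch.nn.functional as F

def consistency_proxy(model, x, y, sigma=0.05):
    """Hypothetical zero-cost proxy: score an untrained candidate by how
    consistent its parameter gradients are on clean vs. perturbed inputs.

    The Gaussian perturbation and the cosine-similarity score are
    illustrative assumptions, not the proxy defined in the cited paper.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    def grad_vector(inputs):
        # Gradient of the loss w.r.t. all trainable parameters, flattened.
        loss = F.cross_entropy(model(inputs), y)
        grads = torch.autograd.grad(loss, params)
        return torch.cat([g.reshape(-1) for g in grads])

    g_clean = grad_vector(x)
    g_pert = grad_vector(x + sigma * torch.randn_like(x))

    # Higher similarity -> the gradients agree under input perturbation,
    # taken here as a cheap indicator of robustness.
    return F.cosine_similarity(g_clean, g_pert, dim=0).item()
```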
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.