Generalizable Lightweight Proxy for Robust NAS against Diverse
Perturbations
- URL: http://arxiv.org/abs/2306.05031v2
- Date: Fri, 20 Oct 2023 06:22:12 GMT
- Title: Generalizable Lightweight Proxy for Robust NAS against Diverse
Perturbations
- Authors: Hyeonjeong Ha, Minseon Kim, Sung Ju Hwang
- Abstract summary: Recent neural architecture search (NAS) frameworks have been successful in finding optimal architectures for given conditions.
We propose a novel lightweight robust zero-cost proxy that considers the consistency across features, parameters, and gradients of both clean and perturbed images.
Our approach facilitates an efficient and rapid search for neural architectures capable of learning generalizable features that exhibit robustness across diverse perturbations.
- Score: 59.683234126055694
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent neural architecture search (NAS) frameworks have been successful in
finding optimal architectures for given conditions (e.g., performance or
latency). However, they search for optimal architectures in terms of their
performance on clean images only, while robustness against various types of
perturbations or corruptions is crucial in practice. Several robust NAS
frameworks tackle this issue by integrating adversarial training into one-shot
NAS; however, they are limited in that they only consider robustness against
adversarial attacks and require significant computational resources to discover
optimal architectures for a single task, which makes them impractical in
real-world scenarios. To address these
challenges, we propose a novel lightweight robust zero-cost proxy that
considers the consistency across features, parameters, and gradients of both
clean and perturbed images at the initialization state. Our approach
facilitates an efficient and rapid search for neural architectures capable of
learning generalizable features that exhibit robustness across diverse
perturbations. The experimental results demonstrate that our proxy can rapidly
and efficiently search for neural architectures that are consistently robust
against various perturbations on multiple benchmark datasets and diverse search
spaces, largely outperforming existing clean zero-shot NAS and robust NAS with
reduced search cost.
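To make the idea concrete, here is a minimal PyTorch sketch of a consistency-style zero-cost proxy: it compares gradients of a randomly initialized network on clean versus randomly perturbed inputs. The function name, the random perturbation, and the cosine-similarity scoring are illustrative assumptions for exposition, not the paper's exact proxy definition.

```python
# Minimal sketch of a consistency-based zero-cost robustness proxy.
# All names and the exact scoring rule are illustrative assumptions;
# see the paper for the actual proxy definition.
import torch
import torch.nn as nn
import torch.nn.functional as F

def robust_proxy_score(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                       eps: float = 8 / 255) -> float:
    """Score an architecture at initialization by how consistent its
    gradients are between clean and randomly perturbed inputs."""
    criterion = nn.CrossEntropyLoss()

    def param_grads(inputs):
        model.zero_grad()
        loss = criterion(model(inputs), y)
        loss.backward()
        return torch.cat([p.grad.flatten() for p in model.parameters()
                          if p.grad is not None])

    g_clean = param_grads(x)
    # Random perturbation as a cheap stand-in for diverse corruptions.
    x_pert = (x + eps * torch.randn_like(x).sign()).clamp(0, 1)
    g_pert = param_grads(x_pert)

    # Higher cosine similarity = more consistent gradients under perturbation.
    return F.cosine_similarity(g_clean, g_pert, dim=0).item()

# Usage: score randomly initialized candidates on one mini-batch and keep
# the highest-scoring architecture, with no training required.
```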
Related papers
- Robust Neural Architecture Search [19.214462477848535]
We propose a novel NAS method, Robust Neural Architecture Search (RNAS).
By designing a regularization term that balances accuracy and robustness, RNAS generates architectures with both high accuracy and good robustness.
Experiments show that RNAS achieves state-of-the-art (SOTA) performance on image classification, both on clean data and under adversarial attack.
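The entry does not spell out RNAS's regularizer; as a hedged illustration of the accuracy-robustness balance such a term encodes, the sketch below uses a KL consistency penalty between clean and adversarial predictions (an assumed form, not RNAS's formula).

```python
# Generic accuracy + robustness training objective of the kind RNAS's
# regularizer balances; the KL consistency term is an illustrative choice,
# not the paper's exact formulation.
import torch.nn.functional as F

def regularized_loss(model, x_clean, x_adv, y, lam: float = 1.0):
    logits_clean = model(x_clean)
    logits_adv = model(x_adv)
    accuracy_term = F.cross_entropy(logits_clean, y)
    robustness_term = F.kl_div(F.log_softmax(logits_adv, dim=1),
                               F.softmax(logits_clean, dim=1),
                               reduction="batchmean")
    # lam trades off clean accuracy against robustness.
    return accuracy_term + lam * robustness_term
```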
arXiv Detail & Related papers (2023-04-06T03:21:24Z)
- $\beta$-DARTS++: Bi-level Regularization for Proxy-robust Differentiable Architecture Search [96.99525100285084]
A regularization method, Beta-Decay, is proposed to regularize the DARTS-based NAS search process (i.e., $\beta$-DARTS).
In-depth theoretical analyses of how and why it works are provided.
arXiv Detail & Related papers (2023-01-16T12:30:32Z)
- $\beta$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search [85.84110365657455]
We propose a simple but efficient regularization method, termed Beta-Decay, to regularize the DARTS-based NAS search process.
Experimental results on NAS-Bench-201 show that our proposed method can help stabilize the search process and make the searched network more transferable across different datasets.
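As a rough illustration of what a decay regularizer on DARTS architecture parameters can look like, here is a sketch using a logsumexp penalty over the per-edge operation logits; the paper's exact Beta-Decay form may differ in detail.

```python
# Sketch of a decay-style regularizer on DARTS architecture parameters;
# the exact Beta-Decay formulation in the paper may differ.
import torch

def beta_decay_reg(alphas: list[torch.Tensor]) -> torch.Tensor:
    """alphas: one tensor of operation logits per edge, shape (num_ops,)."""
    # Penalizing logsumexp of the logits discourages any single
    # operation's softmax weight (beta) from growing unchecked.
    return sum(torch.logsumexp(a, dim=-1) for a in alphas)

# Used as: loss = val_loss + lam * beta_decay_reg(model.arch_parameters())
```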
arXiv Detail & Related papers (2022-03-03T11:47:14Z)
- BaLeNAS: Differentiable Architecture Search via the Bayesian Learning Rule [95.56873042777316]
Differentiable Architecture Search (DARTS) has received massive attention in recent years, mainly because it significantly reduces the computational cost.
This paper formulates neural architecture search as a distribution learning problem by relaxing the architecture weights into Gaussian distributions.
We demonstrate how the differentiable NAS benefits from Bayesian principles, enhancing exploration and improving stability.
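A minimal sketch of that Gaussian relaxation, assuming a DARTS-like cell and using the standard reparameterization trick; the Bayesian-learning-rule update itself is not shown, and all shapes and names are illustrative.

```python
# Sketch of relaxing architecture weights into Gaussians via the
# reparameterization trick; shapes and names are illustrative assumptions.
import torch

num_edges, num_ops = 14, 8  # e.g., a DARTS-like cell
mu = torch.zeros(num_edges, num_ops, requires_grad=True)
log_sigma = torch.full((num_edges, num_ops), -3.0, requires_grad=True)

def sample_arch_weights():
    # alpha ~ N(mu, sigma^2), differentiable w.r.t. mu and log_sigma.
    eps = torch.randn(num_edges, num_ops)
    alpha = mu + log_sigma.exp() * eps
    return torch.softmax(alpha, dim=-1)  # per-edge operation mixing weights

# Each forward pass draws a fresh sample, so the search explores a
# distribution over architectures instead of a single point estimate.
```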
arXiv Detail & Related papers (2021-11-25T18:13:42Z)
- Elastic Architecture Search for Diverse Tasks with Different Resources [87.23061200971912]
We study the new and challenging problem of efficient deployment for diverse tasks with different resources, where the resource constraint and the task of interest (corresponding to a group of classes) are dynamically specified at test time.
Previous NAS approaches seek to design architectures for all classes simultaneously, which may not be optimal for some individual tasks.
We present a novel and general framework, called Elastic Architecture Search (EAS), permitting instant specializations at runtime for diverse tasks with various resource constraints.
arXiv Detail & Related papers (2021-08-03T00:54:27Z)
- Zero-Cost Proxies Meet Differentiable Architecture Search [20.957570100784988]
Differentiable neural architecture search (DARTS) has attracted significant attention in recent years.
Despite its success, DARTS lacks robustness in certain cases.
We propose a novel operation selection paradigm in the context of differentiable NAS.
arXiv Detail & Related papers (2021-06-12T15:33:36Z)
- Effective, Efficient and Robust Neural Architecture Search [4.273005643715522]
Recent advances in adversarial attacks show the vulnerability of deep neural networks searched by Neural Architecture Search (NAS).
We propose an Effective, Efficient, and Robust Neural Architecture Search (E2RNAS) method to search a neural network architecture by taking the performance, robustness, and resource constraint into consideration.
Experiments on benchmark datasets show that the proposed E2RNAS method can find adversarially robust architectures with optimized model size and comparable classification accuracy.
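The sketch below shows one assumed scalarization of the three objectives E2RNAS considers (performance, robustness, resource constraint); the paper's actual multi-objective formulation may differ.

```python
# Illustrative scalarization of a multi-objective NAS loss combining
# performance, robustness, and a resource constraint; the weighting
# scheme is an assumption, not the paper's method.
def multi_objective_loss(clean_loss: float, adv_loss: float,
                         model_size: float, budget: float,
                         w_rob: float = 1.0, w_size: float = 0.1) -> float:
    size_penalty = max(0.0, model_size - budget)  # penalize only overruns
    return clean_loss + w_rob * adv_loss + w_size * size_penalty
```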
arXiv Detail & Related papers (2020-11-19T13:46:23Z)
- On Adversarial Robustness: A Neural Architecture Search perspective [20.478741635006113]
This work is the first large-scale study to understand adversarial robustness purely from an architectural perspective.
We show that random sampling in the search space of DARTS with simple ensembling can improve the robustness to PGD attack by nearly 12%.
We show that NAS, which is popular for achieving SoTA accuracy, can provide adversarial accuracy as a free add-on without any form of adversarial training.
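For reference, robustness to PGD is measured with the standard projected-gradient-descent attack; below is a minimal sketch, with commonly used CIFAR-scale hyperparameters assumed rather than taken from the paper.

```python
# Standard PGD attack for evaluating the adversarial robustness of a
# searched (or randomly sampled) architecture; hyperparameters are
# common CIFAR-scale defaults, assumed rather than taken from the paper.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Start from a random point inside the epsilon ball.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

# Robust accuracy is then the usual accuracy loop evaluated on
# pgd_attack(model, x, y) instead of x.
```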
arXiv Detail & Related papers (2020-07-16T16:07:10Z)
- Powering One-shot Topological NAS with Stabilized Share-parameter Proxy [65.09967910722932]
One-shot NAS methods have attracted much interest from the research community due to their remarkable training efficiency and capacity to discover high-performance models.
In this work, we try to enhance one-shot NAS by exploring high-performing network architectures in our large-scale Topology Augmented Search Space.
The proposed method achieves state-of-the-art performance under Multiply-Adds (MAdds) constraint on ImageNet.
arXiv Detail & Related papers (2020-05-21T08:18:55Z)
- Geometry-Aware Gradient Algorithms for Neural Architecture Search [41.943045315986744]
We argue for the study of single-level empirical risk minimization to understand NAS with weight-sharing.
We present a geometry-aware framework that exploits the underlying structure of this optimization to return sparse architectural parameters.
We achieve state-of-the-art accuracy on the latest NAS benchmarks in computer vision.
arXiv Detail & Related papers (2020-04-16T17:46:39Z)