Speeding up NAS with Adaptive Subset Selection
- URL: http://arxiv.org/abs/2211.01454v1
- Date: Wed, 2 Nov 2022 19:48:42 GMT
- Title: Speeding up NAS with Adaptive Subset Selection
- Authors: Vishak Prasad C, Colin White, Paarth Jain, Sibasis Nayak, Ganesh Ramakrishnan
- Abstract summary: We present an adaptive subset selection approach to neural architecture search (NAS).
We devise an algorithm that makes use of state-of-the-art techniques from both areas.
Our results are consistent across multiple datasets.
- Score: 21.31075249079979
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: A majority of recent developments in neural architecture search (NAS) have
been aimed at decreasing the computational cost of various techniques without
affecting their final performance. Towards this goal, several low-fidelity and
performance prediction methods have been considered, including those that train
only on subsets of the training data. In this work, we present an adaptive
subset selection approach to NAS and show that it is complementary to
state-of-the-art NAS approaches. We uncover a natural connection between
one-shot NAS algorithms and adaptive subset selection and devise an algorithm
that makes use of state-of-the-art techniques from both areas. We use these
techniques to substantially reduce the runtime of DARTS-PT (a leading one-shot
NAS algorithm), as well as BOHB and DEHB (leading multifidelity optimization
algorithms), without sacrificing accuracy. Our results are consistent across
multiple datasets, and towards full reproducibility, we release our code at
https://anonymous.4open.science/r/SubsetSelection_NAS-B132.
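To make the approach concrete, the following is a minimal sketch of the general recipe rather than the authors' implementation: periodically re-select a training subset whose aggregate gradient approximates the full-data gradient (a simplified CRAIG/GLISTER-style criterion), and update the network weights only on that subset. The toy linear model, the selection criterion, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for supernet training: a linear model with squared loss.
n, d, k = 200, 10, 20              # dataset size, feature dim, subset budget
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
w = np.zeros(d)

def per_example_grads(w):
    # Gradient of 0.5 * (x.w - y)^2 for every example, shape (n, d).
    residual = X @ w - y
    return residual[:, None] * X

def select_subset(grads, k):
    # Greedy gradient matching: pick examples whose rescaled summed gradient best
    # approximates the full-data gradient (a simplified CRAIG/GLISTER-style step).
    target = grads.sum(axis=0)
    chosen, running = [], np.zeros(grads.shape[1])
    for _ in range(k):
        candidates = [i for i in range(len(grads)) if i not in chosen]
        scale = len(grads) / (len(chosen) + 1)
        best = min(candidates,
                   key=lambda i: np.linalg.norm(target - scale * (running + grads[i])))
        chosen.append(best)
        running += grads[best]
    return chosen

for epoch in range(30):
    if epoch % 10 == 0:                          # adaptively re-select the subset
        subset = select_subset(per_example_grads(w), k)
    grad = per_example_grads(w)[subset].mean(axis=0)
    w -= 0.05 * grad                             # update weights on the subset only

print("final full-data loss:", 0.5 * np.mean((X @ w - y) ** 2))
```

In the paper's setting, the same interleaving would wrap a one-shot algorithm such as DARTS-PT (or a multifidelity optimizer like BOHB/DEHB) rather than a toy regression.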
Related papers
- DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models [56.584561770857306]
We propose a novel conditional Neural Architecture Generation (NAG) framework based on diffusion models, dubbed DiffusionNAG.
Specifically, we consider the neural architectures as directed graphs and propose a graph diffusion model for generating them.
We validate the effectiveness of DiffusionNAG through extensive experiments in two predictor-based NAS scenarios: Transferable NAS and Bayesian Optimization (BO)-based NAS.
When integrated into a BO-based algorithm, DiffusionNAG outperforms existing BO-based NAS approaches, particularly in the large MobileNetV3 search space on the ImageNet 1K dataset.
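The key modeling choice summarized above is to treat each architecture as a directed graph. Below is a minimal, hypothetical encoding of that representation; the operation vocabulary and example cell are assumptions, and the actual graph diffusion model is not shown.

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical cell-style operation vocabulary; the real search space may differ.
OPS = ["conv_3x3", "conv_1x1", "max_pool_3x3", "skip_connect", "zero"]

@dataclass
class ArchGraph:
    """A neural architecture as a directed acyclic graph: node i holds an
    operation, and adjacency[i, j] = 1 means node i feeds node j."""
    ops: list
    adjacency: np.ndarray

    def is_valid_dag(self) -> bool:
        # An upper-triangular adjacency matrix (zero diagonal included) guarantees
        # acyclicity for a fixed topological node ordering.
        return bool(np.all(np.tril(self.adjacency) == 0))

# Example: a 4-node cell where node 0 is the input and node 3 is the output.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]])
cell = ArchGraph(ops=["input", "conv_3x3", "skip_connect", "output"], adjacency=adj)
print(cell.is_valid_dag())  # True
```

A graph diffusion model would then learn to denoise corrupted (ops, adjacency) pairs back into valid, high-scoring architectures, optionally guided by a performance predictor.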
arXiv Detail & Related papers (2023-05-26T13:58:18Z)
- Generalization Properties of NAS under Activation and Skip Connection Search [66.8386847112332]
We study the generalization properties of Neural Architecture Search (NAS) under a unifying framework.
We derive the lower (and upper) bounds of the minimum eigenvalue of the Neural Tangent Kernel (NTK) under the (in)finite-width regime.
We show how the derived results can guide NAS to select the top-performing architectures, even in the case without training.
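The quantity these bounds concern can also be computed empirically. Below is a minimal sketch, using an arbitrary toy MLP and random inputs (both assumptions), of forming the empirical NTK Gram matrix from per-input parameter gradients and reading off its minimum eigenvalue; the paper's finite- and infinite-width bounds are derived analytically rather than computed this way.

```python
import torch

torch.manual_seed(0)
# Toy network and inputs; the paper analyzes activation/skip-connection search spaces.
net = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
x = torch.randn(8, 4)

# One Jacobian row per input: d f(x_i) / d theta, flattened over all parameters.
rows = []
for i in range(x.shape[0]):
    out = net(x[i:i + 1]).squeeze()
    grads = torch.autograd.grad(out, list(net.parameters()))
    rows.append(torch.cat([g.reshape(-1) for g in grads]))
J = torch.stack(rows)                        # shape (n, num_params)

ntk = J @ J.T                                # empirical NTK Gram matrix, shape (n, n)
lambda_min = torch.linalg.eigvalsh(ntk)[0]   # eigvalsh returns eigenvalues in ascending order
print(f"minimum NTK eigenvalue: {lambda_min.item():.4f}")
```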
arXiv Detail & Related papers (2022-09-15T12:11:41Z)
- U-Boost NAS: Utilization-Boosted Differentiable Neural Architecture Search [50.33956216274694]
Optimizing resource utilization in target platforms is key to achieving high performance during DNN inference.
We propose a novel hardware-aware NAS framework that optimizes not only for task accuracy and inference latency, but also for resource utilization.
We achieve a 2.8-4x speedup for DNN inference compared to prior hardware-aware NAS methods.
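As a rough illustration of what a utilization-aware objective looks like, here is a hypothetical scalarization combining accuracy, a latency-budget penalty, and a utilization bonus; this is not U-Boost's actual objective, and all weights and budgets are assumptions.

```python
def hardware_aware_score(accuracy: float, latency_ms: float, utilization: float,
                         latency_budget_ms: float = 10.0,
                         alpha: float = 0.5, beta: float = 0.3) -> float:
    """Hypothetical scalarized objective: reward accuracy and high resource
    utilization, penalize exceeding the latency budget."""
    latency_penalty = max(0.0, latency_ms / latency_budget_ms - 1.0)
    return accuracy + beta * utilization - alpha * latency_penalty

# Two hypothetical candidates: the second trades a little accuracy for
# better utilization and lower latency, and scores higher.
print(hardware_aware_score(accuracy=0.76, latency_ms=14.0, utilization=0.55))
print(hardware_aware_score(accuracy=0.75, latency_ms=9.0, utilization=0.85))
```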
arXiv Detail & Related papers (2022-03-23T13:44:15Z)
- TND-NAS: Towards Non-differentiable Objectives in Progressive Differentiable NAS Framework [6.895590095853327]
Differentiable architecture search has gradually become the mainstream research topic in the field of Neural Architecture Search (NAS).
Recent differentiable NAS work also aims at further improving search performance and reducing GPU-memory consumption.
We propose TND-NAS, which combines the high efficiency of the differentiable NAS framework with compatibility for non-differentiable metrics in multi-objective NAS.
arXiv Detail & Related papers (2021-11-06T14:19:36Z)
- A Data-driven Approach to Neural Architecture Search Initialization [12.901952926144258]
We propose a data-driven technique to initialize a population-based NAS algorithm.
We benchmark our proposed approach against random and Latin hypercube sampling.
arXiv Detail & Related papers (2021-11-05T14:30:19Z)
- Lessons from the Clustering Analysis of a Search Space: A Centroid-based Approach to Initializing NAS [12.901952926144258]
The recent availability of NAS benchmarks has enabled prototyping with low computational resources.
First, a calibrated clustering analysis of the search space is performed.
Second, the centroids are extracted and used to initialize a NAS algorithm.
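A minimal sketch of the centroid idea follows, under stated assumptions: random binary architecture encodings, scikit-learn's KMeans, and rounding centroids back to valid encodings to seed the initial population. The paper's calibrated clustering pipeline is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical encoding: each architecture is a binary vector over candidate edges/ops.
search_space_sample = rng.integers(0, 2, size=(5000, 30))

# Cluster a sample of the search space and take the centroids...
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(search_space_sample)

# ...then snap each centroid back to the nearest valid (binary) encoding and use
# these architectures as the initial population of a population-based NAS algorithm.
initial_population = (kmeans.cluster_centers_ > 0.5).astype(int)
print(initial_population.shape)  # (8, 30)
```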
arXiv Detail & Related papers (2021-08-20T11:46:33Z)
- Accelerating Neural Architecture Search via Proxy Data [17.86463546971522]
We propose a novel proxy data selection method tailored for neural architecture search (NAS).
Executing DARTS with the proposed selection requires only 40 minutes on CIFAR-10 and 7.5 hours on ImageNet with a single GPU.
When the architecture searched on ImageNet using the proposed selection is inversely transferred to CIFAR-10, a state-of-the-art test error of 2.4% is yielded.
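One plausible way to construct such a proxy set, shown purely for illustration (the paper's actual selection criterion may differ), is to rank examples by the predictive entropy of a cheap pretrained model and keep a mix of confident and ambiguous examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical softmax outputs of a cheap pretrained model on the full training set.
n_examples, n_classes, budget = 50_000, 10, 5_000
probs = rng.dirichlet(alpha=np.ones(n_classes), size=n_examples)

# Predictive entropy per example: low = confident ("easy"), high = ambiguous ("hard").
entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)

# Keep the most and least ambiguous halves of the budget as the proxy set,
# then run the NAS search phase only on these indices.
order = np.argsort(entropy)
proxy_indices = np.concatenate([order[: budget // 2], order[-(budget // 2):]])
print(proxy_indices.shape)  # (5000,)
```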
arXiv Detail & Related papers (2021-06-09T03:08:53Z)
- Hyperparameter Optimization in Neural Networks via Structured Sparse Recovery [54.60327265077322]
We study two important problems in the automated design of neural networks through the lens of sparse recovery methods.
In the first part of this paper, we establish a novel connection between HPO and structured sparse recovery.
In the second part of this paper, we establish a connection between NAS and structured sparse recovery.
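To make the sparse-recovery connection tangible, here is a hedged toy example rather than the paper's algorithm: if validation performance depends on only a few of many binary hyperparameter flags, evaluating random configurations and fitting a Lasso can recover the influential flags from fewer measurements than there are flags.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n_flags, n_evals = 50, 40           # many hyperparameter flags, few evaluations
true_effect = np.zeros(n_flags)
true_effect[[3, 17, 42]] = [2.0, -1.5, 1.0]   # only three flags actually matter

# Random +/-1 configurations and their noisy "validation scores".
configs = rng.choice([-1.0, 1.0], size=(n_evals, n_flags))
scores = configs @ true_effect + 0.05 * rng.normal(size=n_evals)

# Sparse recovery step: Lasso identifies the few influential flags even though
# there are fewer evaluations than flags.
model = Lasso(alpha=0.1).fit(configs, scores)
recovered = np.flatnonzero(np.abs(model.coef_) > 0.1)
print("influential flags:", recovered)   # typically recovers [3, 17, 42]
```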
arXiv Detail & Related papers (2020-07-07T00:57:09Z)
- DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS, which can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while achieving state-of-the-art accuracy of 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z)
- NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search [55.12928953187342]
We propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information.
NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithm.
We provide additional diagnostic information, such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms.
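The practical value of such a tabular benchmark is that a NAS algorithm can "evaluate" an architecture with a cheap table lookup instead of training it. The schematic below uses a hypothetical two-entry table and made-up architecture strings, not the actual NAS-Bench-201 API.

```python
# Hypothetical precomputed table: architecture string -> metrics per dataset.
# The real NAS-Bench-201 tabulates results for all 15,625 cells in its fixed space.
benchmark = {
    "|conv_3x3~0|+|skip~0|conv_1x1~1|": {"cifar10": {"test_acc": 93.2, "train_time_s": 780.0}},
    "|skip~0|+|conv_3x3~0|skip~1|": {"cifar10": {"test_acc": 91.8, "train_time_s": 410.0}},
}

def query(arch: str, dataset: str = "cifar10") -> dict:
    """Stand-in for a benchmark lookup: replaces a full training run."""
    return benchmark[arch][dataset]

# Any NAS algorithm can now evaluate candidates instantly; here, an exhaustive
# argmax over the toy table stands in for the search loop.
best = max(benchmark, key=lambda a: query(a)["test_acc"])
print(best, query(best))
```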
arXiv Detail & Related papers (2020-01-02T05:28:26Z)