Core-set Sampling for Efficient Neural Architecture Search
- URL: http://arxiv.org/abs/2107.06869v1
- Date: Thu, 8 Jul 2021 06:19:18 GMT
- Title: Core-set Sampling for Efficient Neural Architecture Search
- Authors: Jae-hun Shim, Kyeongbo Kong, and Suk-Ju Kang
- Abstract summary: This paper formulates the problem from a data curation perspective.
Our key strategy is to search the architecture using a summarized data distribution, i.e., a core-set.
In our experiments, we reduced the overall computational time from 30.8 hours to 3.5 hours, an 8.8x reduction.
- Score: 12.272975892517039
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural architecture search (NAS), an important branch of automatic machine
learning, has become an effective approach to automating the design of deep
learning models. However, the major issue in NAS is how to reduce the long
search time imposed by the heavy computational burden. While most recent
approaches focus on pruning redundant sets or developing new search
methodologies, this paper formulates the problem from a data curation
perspective. Our key strategy is to search the architecture using a
summarized data distribution, i.e., a core-set. Typically, NAS algorithms
separate the searching and training stages; since the proposed core-set
methodology is used only in the search stage, the resulting performance
degradation can be minimized. In our experiments, we reduced the overall
computational time from 30.8 hours to 3.5 hours, an 8.8x reduction, on a
single RTX 3090 GPU without sacrificing accuracy.
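The abstract does not say how the core-set itself is constructed, only that the search stage runs on a summarized data distribution. A common way to build such a summary is greedy k-center selection over per-sample feature embeddings (Sener & Savarese's core-set criterion); the sketch below assumes that criterion, and the random toy embeddings, the 10% budget, and the function name `k_center_greedy` are illustrative choices rather than the authors' implementation.

```python
import numpy as np

def k_center_greedy(features: np.ndarray, budget: int, seed: int = 0) -> np.ndarray:
    """Greedy k-center core-set selection over (N, D) feature embeddings.

    Iteratively adds the sample that is farthest from the current set of
    centers, approximately minimizing the maximum point-to-center distance.
    """
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    selected = [int(rng.integers(n))]                      # start from a random sample
    min_dist = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(budget - 1):
        next_idx = int(np.argmax(min_dist))                # farthest remaining point
        selected.append(next_idx)
        new_dist = np.linalg.norm(features - features[next_idx], axis=1)
        min_dist = np.minimum(min_dist, new_dist)          # nearest-center distances
    return np.asarray(selected)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for real embeddings (e.g., from a pretrained encoder).
    feats = rng.standard_normal((10_000, 64)).astype(np.float32)
    core_idx = k_center_greedy(feats, budget=1_000)        # 10% core-set, illustrative budget
    print(core_idx.shape)                                  # (1000,)
```

In a decoupled pipeline of the kind the abstract describes, only the search stage (e.g., a supernet or DARTS-style bilevel optimization) would iterate over `core_idx`, while the final architecture is still trained on the full dataset, which is why the accuracy penalty can stay small.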
Related papers
- Graph is all you need? Lightweight data-agnostic neural architecture search without training [45.79667238486864]
Neural architecture search (NAS) enables the automatic design of neural network models.
Our method, dubbed nasgraph, remarkably reduces the computational costs by converting neural architectures to graphs.
It can find the best architecture among 200 randomly sampled architectures from NAS-Bench201 in 217 CPU seconds.
arXiv Detail & Related papers (2024-05-02T14:12:58Z)
- Efficient Architecture Search for Diverse Tasks [29.83517145790238]
We study neural architecture search (NAS) for efficiently solving diverse problems.
We introduce DASH, a differentiable NAS algorithm that computes the mixture-of-operations using the Fourier diagonalization of convolution.
We evaluate DASH on NAS-Bench-360, a suite of ten tasks designed for NAS benchmarking in diverse domains.
arXiv Detail & Related papers (2022-04-15T17:21:27Z)
- $\beta$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search [85.84110365657455]
We propose a simple-but-efficient regularization method, termed Beta-Decay, to regularize the DARTS-based NAS search process.
Experimental results on NAS-Bench-201 show that our proposed method helps stabilize the search process and makes the searched network more transferable across different datasets.
arXiv Detail & Related papers (2022-03-03T11:47:14Z)
- DAAS: Differentiable Architecture and Augmentation Policy Search [107.53318939844422]
This work considers the possible coupling between neural architectures and data augmentation and proposes an effective algorithm that jointly searches for both.
Our approach achieves 97.91% accuracy on CIFAR-10 and 76.6% Top-1 accuracy on ImageNet, showing the outstanding performance of our search algorithm.
arXiv Detail & Related papers (2021-09-30T17:15:17Z)
- FNAS: Uncertainty-Aware Fast Neural Architecture Search [54.49650267859032]
Reinforcement learning (RL)-based neural architecture search (NAS) generally guarantees better convergence yet suffers from the requirement of huge computational resources.
We propose a general pipeline to accelerate the convergence of the rollout process as well as the RL process in NAS.
Experiments on the Mobile Neural Architecture Search (MNAS) search space show that the proposed Fast Neural Architecture Search (FNAS) accelerates the standard RL-based NAS process by 10x.
arXiv Detail & Related papers (2021-05-25T06:32:52Z)
- Efficient Model Performance Estimation via Feature Histories [27.008927077173553]
An important step in the task of neural network design is the evaluation of a model's performance.
In this work, we use the evolution history of features of a network during the early stages of training to build a proxy classifier.
We show that our method can be combined with multiple search algorithms to find better solutions to a wide range of tasks.
arXiv Detail & Related papers (2021-03-07T20:41:57Z)
- Contrastive Self-supervised Neural Architecture Search [6.162410142452926]
This paper proposes a novel cell-based neural architecture search (NAS) algorithm.
Our algorithm capitalizes on the effectiveness of self-supervised learning for image representations.
An extensive number of experiments empirically show that our search algorithm can achieve state-of-the-art results.
arXiv Detail & Related papers (2021-02-21T08:38:28Z)
- ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding [86.40042104698792]
We formulate neural architecture search as a sparse coding problem (a generic ISTA sketch follows this list).
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search.
Our one-stage method produces state-of-the-art performance on both CIFAR-10 and ImageNet at the cost of only evaluation time.
arXiv Detail & Related papers (2020-10-13T04:34:24Z)
- MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach to reduce the search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)
- DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS, which can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while its accuracy is currently state-of-the-art, at 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z)
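The ISTA-NAS entry above casts architecture search as sparse coding but, being a one-line summary, gives no detail on how architecture parameters map onto a sparse code. The sketch below only illustrates the solver it is named after, ISTA (iterative shrinkage-thresholding), on a generic sparse recovery problem min_z 0.5*||Az - b||^2 + lam*||z||_1; the dictionary size, step size, and sparsity level are arbitrary illustrative choices, not anything from that paper.

```python
import numpy as np

def soft_threshold(x: np.ndarray, tau: float) -> np.ndarray:
    """Proximal operator of the L1 norm: shrinks entries toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A: np.ndarray, b: np.ndarray, lam: float, n_iter: int = 500) -> np.ndarray:
    """ISTA for min_z 0.5*||Az - b||^2 + lam*||z||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)                # gradient of the smooth least-squares term
        z = soft_threshold(z - step * grad, step * lam)
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))          # over-complete dictionary
    z_true = np.zeros(100)
    z_true[[3, 40, 77]] = [1.5, -2.0, 0.8]      # sparse ground truth
    b = A @ z_true + 0.01 * rng.standard_normal(50)
    z_hat = ista(A, b, lam=0.1)
    print(np.flatnonzero(np.abs(z_hat) > 0.1))  # expected to recover indices near 3, 40, 77
```

Presumably the sparse code plays the role of a compact architecture representation in ISTA-NAS, but the summary above does not specify that mapping, so this block should be read only as a refresher on the solver rather than as the paper's method.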