Less is More: Proxy Datasets in NAS approaches
- URL: http://arxiv.org/abs/2203.06905v1
- Date: Mon, 14 Mar 2022 07:58:12 GMT
- Title: Less is More: Proxy Datasets in NAS approaches
- Authors: Brian Moser, Federico Raue, Jörn Hees, Andreas Dengel
- Abstract summary: Neural Architecture Search (NAS) defines the design of Neural Networks as a search problem.
NAS is computationally intensive because the number of candidate designs grows with the number of elements in the design and their possible connections.
We extensively analyze the role of dataset size, using several sampling approaches to reduce it.
- Score: 4.266320191208303
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Architecture Search (NAS) defines the design of Neural Networks as a
search problem. Unfortunately, NAS is computationally intensive because the
number of candidate designs grows with the number of elements in the design and the
possible connections between them. In this work, we extensively analyze the
role of the dataset size based on several sampling approaches for reducing the
dataset size (unsupervised and supervised cases) as an agnostic approach to
reduce search time. We compared these techniques with four common NAS
approaches in NAS-Bench-201 in roughly 1,400 experiments on CIFAR-100. One of
our surprising findings is that in most cases we can reduce the amount of
training data to 25%, consequently reducing search time to 25%, while at the
same time maintaining the same accuracy as if training on the full dataset.
Additionally, some designs derived from subsets outperform designs derived
from the full dataset by up to 22 percentage points in accuracy.
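The core recipe the abstract describes, searching on a sampled subset instead of the full training set, can be sketched in a few lines. Below is a minimal, hedged illustration: the 25% ratio mirrors the abstract, while the function names and the stratified (supervised) variant are our own illustrative assumptions, not code from the paper.

```python
# Minimal sketch: build a 25% proxy of CIFAR-100 for the search phase of NAS.
# Random sampling corresponds to an unsupervised strategy; the stratified
# variant is an illustrative label-aware (supervised) alternative.
import random
from collections import defaultdict

import torchvision
from torch.utils.data import Subset

def random_proxy(dataset, ratio=0.25, seed=0):
    rng = random.Random(seed)
    idx = rng.sample(range(len(dataset)), int(ratio * len(dataset)))
    return Subset(dataset, idx)

def stratified_proxy(dataset, ratio=0.25, seed=0):
    # Keep the class distribution intact by sampling per label.
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, (_, label) in enumerate(dataset):
        by_class[label].append(i)
    idx = []
    for indices in by_class.values():
        idx += rng.sample(indices, int(ratio * len(indices)))
    return Subset(dataset, idx)

full = torchvision.datasets.CIFAR100(root="./data", train=True, download=True)
proxy = random_proxy(full, ratio=0.25)
print(f"search on {len(proxy)} / {len(full)} images")
```

Any NAS approach can then be pointed at the proxy subset instead of the full dataset, which is what cuts the amount of training data and, roughly proportionally, the search time.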
Related papers
- Graph is all you need? Lightweight data-agnostic neural architecture search without training [45.79667238486864]
Neural architecture search (NAS) enables the automatic design of neural network models.
Our method, dubbed nasgraph, remarkably reduces the computational costs by converting neural architectures to graphs.
It can find the best architecture among 200 randomly sampled architectures from NAS-Bench-201 in 217 CPU seconds.
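A hedged sketch of the general idea behind nasgraph: map an architecture to a graph and rank it by a cheap, training-free graph statistic. The cell encoding and the edges-per-node score below are illustrative placeholders, not the scoring function actually used by the paper.

```python
# Sketch: score a NAS-Bench-201-style cell via a training-free graph statistic.
# The concrete statistic (edges per node) is a placeholder, not nasgraph's own.
import networkx as nx

def cell_to_graph(ops):
    # ops maps (from_node, to_node) -> operation name for a small cell DAG.
    g = nx.DiGraph()
    for (src, dst), op in ops.items():
        if op != "none":          # "none" means the connection is absent
            g.add_edge(src, dst, op=op)
    return g

def proxy_score(g):
    # Placeholder score: denser, better-connected cells rank higher.
    return g.number_of_edges() / max(g.number_of_nodes(), 1)

cell = {(0, 1): "conv_3x3", (0, 2): "skip_connect", (1, 2): "none",
        (0, 3): "conv_1x1", (1, 3): "conv_3x3", (2, 3): "avg_pool_3x3"}
print(proxy_score(cell_to_graph(cell)))
```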
arXiv Detail & Related papers (2024-05-02T14:12:58Z)
- UnrealNAS: Can We Search Neural Architectures with Unreal Data? [84.78460976605425]
Neural architecture search (NAS) has shown great success in the automatic design of deep neural networks (DNNs).
Previous work has analyzed the necessity of having ground-truth labels in NAS and inspired broad interest.
We take a further step to question whether real data is necessary for NAS to be effective.
arXiv Detail & Related papers (2022-05-04T16:30:26Z)
- Reducing Neural Architecture Search Spaces with Training-Free Statistics and Computational Graph Clustering [12.588898262943218]
Clustering-Based REDuction (C-BRED) is a new technique to reduce the size of NAS search spaces.
C-BRED selects a subset with 70% average accuracy instead of the whole space's 64% average accuracy.
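A hedged sketch of clustering-based search-space reduction: describe each architecture by training-free statistics, cluster those descriptions, and keep only the most promising cluster. The two synthetic features and the selection rule below are illustrative assumptions, not C-BRED's actual statistics.

```python
# Sketch: cluster architectures by training-free statistics, then keep the
# cluster with the highest mean proxy score. Features here are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Pretend each row holds two training-free statistics for one architecture.
stats = rng.normal(size=(1000, 2))

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(stats)

# Rank clusters by the mean of one statistic used as a cheap proxy score.
proxy = stats[:, 0]
best_cluster = max(range(5), key=lambda c: proxy[labels == c].mean())
reduced_space = np.flatnonzero(labels == best_cluster)
print(f"kept {reduced_space.size} of {stats.shape[0]} architectures")
```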
arXiv Detail & Related papers (2022-04-29T13:52:35Z)
- When NAS Meets Trees: An Efficient Algorithm for Neural Architecture Search [117.89827740405694]
A key challenge in neural architecture search (NAS) is how to explore the huge search space wisely.
We propose a new NAS method called TNAS (NAS with trees), which improves search efficiency by exploring only a small number of architectures.
TNAS finds the global optimal architecture on CIFAR-10 with test accuracy of 94.37% in four GPU hours in NAS-Bench-201.
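A hedged sketch of tree-structured exploration: fix one design choice per level and evaluate only a handful of candidates at each step, so the number of evaluations grows with the sum rather than the product of the option counts. The factorization and the scoring stub are illustrative; TNAS's actual operation and topology trees are more involved.

```python
# Sketch: greedy tree-structured search. Each level commits one design choice,
# so roughly sum(len(options)) candidates are scored instead of their product.
from itertools import product

SEARCH_SPACE = {
    "op_edge_0": ["conv_3x3", "conv_1x1", "skip_connect"],
    "op_edge_1": ["conv_3x3", "avg_pool_3x3", "none"],
    "op_edge_2": ["conv_3x3", "conv_1x1", "skip_connect"],
}

def evaluate(partial_arch):
    # Placeholder for a cheap proxy evaluation (e.g., a short training run).
    return sum(len(op) for op in partial_arch.values() if op != "none")

def tree_search(space):
    arch = {}
    for level, options in space.items():          # walk down the decision tree
        best = max(options, key=lambda o: evaluate({**arch, level: o}))
        arch[level] = best                        # commit and descend
    return arch

print(tree_search(SEARCH_SPACE))
print("full space size:", len(list(product(*SEARCH_SPACE.values()))))
```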
arXiv Detail & Related papers (2022-04-11T07:34:21Z)
- Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets [42.993720854755736]
We propose an efficient Neural Architecture Search (NAS) framework that is trained once on a database consisting of datasets and pretrained networks.
We show that our model meta-learned on subsets of ImageNet-1K and architectures from the NAS-Bench-201 search space successfully generalizes to multiple unseen datasets.
arXiv Detail & Related papers (2021-07-02T06:33:59Z)
- Accelerating Neural Architecture Search via Proxy Data [17.86463546971522]
We propose a novel proxy data selection method tailored for neural architecture search (NAS).
Executing DARTS with the proposed selection requires only 40 minutes on CIFAR-10 and 7.5 hours on ImageNet with a single GPU.
When the architecture searched on ImageNet using the proposed selection is inversely transferred to CIFAR-10, a state-of-the-art test error of 2.4% is yielded.
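A hedged sketch of proxy data selection: score every training example with a small probe model and keep a subset according to those scores. The entropy criterion, the 10% budget, and the probe network below are generic stand-ins, not the paper's exact selection method.

```python
# Sketch: select a proxy subset by prediction entropy from a probe model.
# Probe, budget, and "keep the most uncertain examples" rule are stand-ins.
import torch
import torch.nn.functional as F
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader, Subset

data = torchvision.datasets.CIFAR10(root="./data", train=True, download=True,
                                    transform=T.ToTensor())
probe = torchvision.models.resnet18(num_classes=10)  # assume briefly pre-trained
probe.eval()

entropies = []
with torch.no_grad():
    for x, _ in DataLoader(data, batch_size=256):
        p = F.softmax(probe(x), dim=1)
        entropies.append(-(p * p.clamp_min(1e-12).log()).sum(dim=1))
entropy = torch.cat(entropies)

# Keep the 10% most uncertain examples as the proxy set for the search phase.
k = int(0.1 * len(data))
proxy = Subset(data, torch.topk(entropy, k).indices.tolist())
print(f"proxy size: {len(proxy)}")
```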
arXiv Detail & Related papers (2021-06-09T03:08:53Z)
- Accuracy Prediction with Non-neural Model for Neural Architecture Search [185.0651567642238]
We study an alternative approach which uses a non-neural model for accuracy prediction.
We leverage gradient boosting decision tree (GBDT) as the predictor for Neural Architecture Search (NAS).
Experiments on NAS-Bench-101 and ImageNet demonstrate the effectiveness of using GBDT as a predictor for NAS.
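A hedged sketch of the predictor idea: encode each architecture as a flat feature vector, fit a gradient boosting model to predict accuracy from it, and rank unseen candidates by the prediction. The one-hot encoding and the synthetic targets below are illustrative, not the paper's exact features or data.

```python
# Sketch: GBDT as an accuracy predictor over one-hot architecture encodings.
# Encodings and "measured" accuracies below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

OPS = ["conv_3x3", "conv_1x1", "skip_connect", "avg_pool_3x3", "none"]
N_EDGES = 6  # e.g., a NAS-Bench-201 cell has six operation slots

rng = np.random.default_rng(0)

def encode(arch):
    # One-hot encode the operation chosen for each edge.
    vec = np.zeros(N_EDGES * len(OPS))
    for edge, op in enumerate(arch):
        vec[edge * len(OPS) + OPS.index(op)] = 1.0
    return vec

# A small "training set" of (architecture, measured accuracy) pairs.
archs = [rng.choice(OPS, size=N_EDGES) for _ in range(200)]
X = np.stack([encode(a) for a in archs])
y = rng.uniform(0.5, 0.75, size=len(archs))  # placeholder accuracies

predictor = GradientBoostingRegressor(n_estimators=200).fit(X, y)

# Rank fresh candidates by predicted accuracy; only the top ones get trained.
candidates = [rng.choice(OPS, size=N_EDGES) for _ in range(1000)]
scores = predictor.predict(np.stack([encode(c) for c in candidates]))
print("best candidate:", candidates[int(scores.argmax())])
```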
arXiv Detail & Related papers (2020-07-09T13:28:49Z)
- DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS that can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while the accuracy is currently state-of-the-art, at 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z)
- DSNAS: Direct Neural Architecture Search without Parameter Retraining [112.02966105995641]
We propose a new problem definition for NAS, task-specific end-to-end, in which no separate parameter retraining stage is needed.
We propose DSNAS, an efficient differentiable NAS framework that simultaneously optimizes architecture and parameters with a low-biased Monte Carlo estimate.
DSNAS successfully discovers networks with comparable accuracy (74.4%) on ImageNet in 420 GPU hours, reducing the total time by more than 34%.
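A hedged sketch of the underlying mechanic: sample a discrete operation per edge while keeping the choice differentiable, so architecture parameters and network weights can be updated by a single optimizer. The straight-through Gumbel-softmax used below is a common stand-in and is not DSNAS's specific low-biased Monte Carlo estimator.

```python
# Sketch: jointly optimize architecture logits and weights for one "edge"
# whose operation is sampled discretely but stays differentiable. Uses the
# straight-through Gumbel-softmax as a stand-in for DSNAS's estimator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedEdge(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 1),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # arch logits

    def forward(self, x):
        # Hard one-hot sample whose gradient still reaches alpha.
        gate = F.gumbel_softmax(self.alpha, tau=1.0, hard=True)
        return sum(g * op(x) for g, op in zip(gate, self.ops))

edge = MixedEdge(channels=8)
opt = torch.optim.SGD(edge.parameters(), lr=0.01)  # weights and alpha together

for _ in range(10):
    x = torch.randn(4, 8, 16, 16)
    loss = edge(x).pow(2).mean()  # placeholder task loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print("architecture logits:", edge.alpha.data)
```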
arXiv Detail & Related papers (2020-02-21T04:41:47Z)
- NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search [55.12928953187342]
We propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information.
NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithms.
We provide additional diagnostic information, such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms.
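Because the benchmark is tabular, a search algorithm can look up trained results instead of training anything. Below is a hedged sketch of such a query, assuming the `nas_201_api` package and its published benchmark file; exact method names, arguments, and result keys may differ between API versions.

```python
# Sketch: query the NAS-Bench-201 table instead of training an architecture.
# Package, file name, and method signatures are assumptions; check the API docs.
from nas_201_api import NASBench201API

api = NASBench201API("NAS-Bench-201-v1_1-096897.pth", verbose=False)

# Look up the table index of one architecture in the benchmark's string format.
index = api.query_index_by_arch(
    "|nor_conv_3x3~0|+|none~0|nor_conv_3x3~1|+|skip_connect~0|none~1|nor_conv_3x3~2|"
)

# Fine-grained diagnostics (losses/accuracies per dataset) come from the table.
info = api.get_more_info(index, "cifar100", hp="200")
print(info)
```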
arXiv Detail & Related papers (2020-01-02T05:28:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.