Exploring the Loss Landscape in Neural Architecture Search
- URL: http://arxiv.org/abs/2005.02960v3
- Date: Wed, 16 Jun 2021 17:41:03 GMT
- Title: Exploring the Loss Landscape in Neural Architecture Search
- Authors: Colin White, Sam Nolen, Yash Savani
- Abstract summary: We show that the simplest hill-climbing algorithm is a powerful baseline for NAS.
We also show that the number of local minima is substantially reduced as the noise decreases.
- Score: 15.830099254570959
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural architecture search (NAS) has seen a steep rise in interest over the
last few years. Many algorithms for NAS consist of searching through a space of
architectures by iteratively choosing an architecture, evaluating its
performance by training it, and using all prior evaluations to come up with the
next choice. The evaluation step is noisy - the final accuracy varies based on
the random initialization of the weights. Prior work has focused on devising
new search algorithms to handle this noise, rather than quantifying or
understanding the level of noise in architecture evaluations. In this work, we
show that (1) the simplest hill-climbing algorithm is a powerful baseline for
NAS, and (2) when the noise in popular NAS benchmark datasets is reduced to a
minimum, hill-climbing outperforms many popular state-of-the-art algorithms.
We further back up this observation by showing that the number of local minima
is substantially reduced as the noise decreases, and by giving a theoretical
characterization of the performance of local search in NAS. Based on our
findings, for NAS research we suggest (1) using local search as a baseline, and
(2) denoising the training pipeline when possible.
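The hill-climbing (local search) baseline discussed in the abstract can be stated in a few lines. The sketch below is only an illustration, not the authors' implementation: random_architecture, neighbors, and evaluate are hypothetical callables standing in for a tabular benchmark API such as NAS-Bench-201, where evaluate returns a (possibly noisy) validation accuracy.

```python
# Minimal sketch of greedy hill-climbing (local search) over a NAS search space.
# The three callables are hypothetical placeholders, not part of any real API.
def hill_climb(random_architecture, neighbors, evaluate, max_evals=300):
    """Move to the best-scoring neighbor until no neighbor improves."""
    current = random_architecture()
    current_acc = evaluate(current)          # noisy validation accuracy
    evals = 1

    while evals < max_evals:
        scored = []
        for cand in neighbors(current):      # architectures one edit away
            if evals >= max_evals:
                break
            scored.append((evaluate(cand), cand))
            evals += 1
        if not scored:
            break
        best_acc, best_arch = max(scored, key=lambda pair: pair[0])
        if best_acc <= current_acc:          # no improving neighbor: local optimum
            break
        current, current_acc = best_arch, best_acc

    return current, current_acc
```

With a noisy evaluate, this procedure can stop at spurious local optima; the abstract's observation is that reducing the evaluation noise removes many of these local minima, which is why local search becomes competitive on denoised benchmarks.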
Related papers
- When NAS Meets Trees: An Efficient Algorithm for Neural Architecture
Search [117.89827740405694]
A key challenge in neural architecture search (NAS) is how to explore the huge search space wisely.
We propose a new NAS method called TNAS (NAS with trees), which improves search efficiency by exploring only a small number of architectures.
TNAS finds the globally optimal architecture on CIFAR-10 in NAS-Bench-201, with 94.37% test accuracy, in four GPU hours.
arXiv Detail & Related papers (2022-04-11T07:34:21Z) - Heed the Noise in Performance Evaluations in Neural Architecture Search [0.0]
Neural Architecture Search (NAS) has recently become a topic of great interest.
There is a potentially impactful issue within NAS that remains largely unrecognized: noise.
We propose to reduce this noise by averaging architecture evaluation scores over multiple network training runs (see the sketch after this list).
arXiv Detail & Related papers (2022-02-04T11:20:46Z) - Towards a Robust Differentiable Architecture Search under Label Noise [44.86506257979988]
We show that vanilla NAS algorithms suffer from a performance loss if class labels are noisy.
Our empirical evaluations show that the noise-injecting operation does not degrade the performance of the NAS algorithm if the data is indeed clean.
arXiv Detail & Related papers (2021-10-23T11:31:06Z) - Going Beyond Neural Architecture Search with Sampling-based Neural
Ensemble Search [31.059040393415003]
We present two novel sampling algorithms under our Neural Ensemble Search via Sampling (NESS) framework.
Our NESS algorithms achieve improved performance on both classification and adversarial defense tasks.
arXiv Detail & Related papers (2021-09-06T15:18:37Z) - Weak NAS Predictors Are All You Need [91.11570424233709]
Recent predictor-based NAS approaches attempt to solve the problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor.
We shift the paradigm from finding a complicated predictor that covers the whole architecture space to a set of weaker predictors that progressively move towards the high-performance sub-space.
Our method requires fewer samples to find top-performing architectures on NAS-Bench-101 and NAS-Bench-201, and it achieves state-of-the-art ImageNet performance in the NASNet search space.
arXiv Detail & Related papers (2021-02-21T01:58:43Z) - Hierarchical Neural Architecture Search for Deep Stereo Matching [131.94481111956853]
We propose the first end-to-end hierarchical NAS framework for deep stereo matching.
Our framework incorporates task-specific human knowledge into the neural architecture search framework.
It ranks first in accuracy on the KITTI stereo 2012, 2015, and Middlebury benchmarks, as well as on the SceneFlow dataset.
arXiv Detail & Related papers (2020-10-26T11:57:37Z) - ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse
Coding [86.40042104698792]
We formulate neural architecture search as a sparse coding problem.
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search.
Our one-stage method produces state-of-the-art performance on both CIFAR-10 and ImageNet at the cost of only evaluation time.
arXiv Detail & Related papers (2020-10-13T04:34:24Z) - NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and
Size [31.903475598150152]
We propose NATS-Bench, a unified benchmark on searching for both architecture topology and size.
NATS-Bench includes a search space of 15,625 neural cell candidates for architecture topology and 32,768 for architecture size, evaluated on three datasets.
arXiv Detail & Related papers (2020-08-28T21:34:56Z) - DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS, which can directly search architectures for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while achieving state-of-the-art accuracy of 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z) - NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture
Search [55.12928953187342]
We propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information.
NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithms.
We provide additional diagnostic information such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms.
arXiv Detail & Related papers (2020-01-02T05:28:26Z)
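The "Heed the Noise" entry above reduces evaluation noise by averaging scores over several training runs. A minimal sketch of that averaging follows, where train_and_evaluate(arch, seed) is a hypothetical stand-in for one full training-and-validation run:

```python
# Sketch of noise reduction by averaging evaluations over multiple training runs.
# `train_and_evaluate` is a hypothetical placeholder for one full training run
# that returns validation accuracy for the given random seed.
def averaged_evaluation(arch, train_and_evaluate, num_runs=3):
    """Average validation accuracy over several random weight initializations."""
    scores = [train_and_evaluate(arch, seed=s) for s in range(num_runs)]
    return sum(scores) / len(scores)
```

Averaging over seeds trades extra training cost for a lower-variance estimate of an architecture's quality, which matches the "denoise the training pipeline when possible" suggestion in the abstract above.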