Heed the Noise in Performance Evaluations in Neural Architecture Search
- URL: http://arxiv.org/abs/2202.02078v1
- Date: Fri, 4 Feb 2022 11:20:46 GMT
- Title: Heed the Noise in Performance Evaluations in Neural Architecture Search
- Authors: Arkadiy Dushatskiy, Tanja Alderliesten, Peter A. N. Bosman
- Abstract summary: Neural Architecture Search (NAS) has recently become a topic of great interest.
There is a potentially impactful issue within NAS that remains largely unrecognized: noise.
We propose to reduce the noise by having architecture evaluations comprise averaging of scores over multiple network training runs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Architecture Search (NAS) has recently become a topic of great
interest. However, there is a potentially impactful issue within NAS that
remains largely unrecognized: noise. Due to stochastic factors in neural
network initialization, training, and the chosen train/validation dataset
split, the performance evaluation of a neural network architecture, which is
often based on a single learning run, is also stochastic. This may have a
particularly large impact if a dataset is small. We therefore propose to reduce
the noise by having architecture evaluations comprise averaging of scores over
multiple network training runs using different random seeds and
cross-validation. We perform experiments for a combinatorial optimization
formulation of NAS in which we vary noise reduction levels. We use the same
computational budget for each noise level in terms of network training runs,
i.e., we allow fewer architecture evaluations when averaging over more training
runs. Multiple search algorithms are considered, including evolutionary
algorithms which generally perform well for NAS. We use two publicly available
datasets from the medical image segmentation domain where datasets are often
limited and variability among samples is often high. Our results show that
reducing noise in architecture evaluations enables finding better architectures
by all considered search algorithms.
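To make the evaluation scheme concrete, below is a minimal sketch (not the authors' code) of noise-reduced architecture evaluation under a fixed budget of training runs: each evaluation averages validation scores over several seeded runs, so averaging over more runs leaves room for fewer architecture evaluations. The train_and_score function and the random-search loop are hypothetical stand-ins for a real training pipeline and for the search algorithms studied in the paper.

```python
import random
import statistics

def train_and_score(architecture, seed, fold):
    """Hypothetical stand-in for one full network training run that returns a
    validation score (e.g., a segmentation Dice score); noisy by construction."""
    rng = random.Random(hash((architecture, seed, fold)))
    true_quality = sum(architecture) / len(architecture)   # toy "true" quality
    return true_quality + rng.gauss(0.0, 0.05)             # stochastic evaluation noise

def evaluate(architecture, runs_per_eval, n_folds=5):
    """Noise-reduced evaluation: average scores over several training runs
    with different random seeds and cross-validation folds."""
    scores = [train_and_score(architecture, seed=r, fold=r % n_folds)
              for r in range(runs_per_eval)]
    return statistics.mean(scores)

def random_search(total_training_runs, runs_per_eval, num_choices=5, length=8, rng=None):
    """Fixed overall budget of training runs: averaging over more runs per
    evaluation means fewer architecture evaluations in total."""
    rng = rng or random.Random(42)
    num_evaluations = total_training_runs // runs_per_eval
    best_arch, best_score = None, float("-inf")
    for _ in range(num_evaluations):
        arch = tuple(rng.randrange(num_choices) for _ in range(length))
        score = evaluate(arch, runs_per_eval)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    for runs in (1, 2, 4, 8):   # increasing noise-reduction levels
        arch, score = random_search(total_training_runs=256, runs_per_eval=runs)
        print(f"runs per evaluation: {runs}  best averaged score: {score:.3f}")
```

Substituting an evolutionary algorithm for the random-search loop, and real cross-validation folds for the toy seeds, yields the setup the paper compares across noise-reduction levels.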
Related papers
- A Pairwise Comparison Relation-assisted Multi-objective Evolutionary Neural Architecture Search Method with Multi-population Mechanism [58.855741970337675]
Neural architecture search (NAS) enables researchers to automatically explore vast search spaces and find efficient neural networks.
NAS suffers from a key bottleneck, i.e., numerous architectures need to be evaluated during the search process.
We propose SMEM-NAS, a pairwise comparison relation-assisted multi-objective evolutionary algorithm based on a multi-population mechanism.
arXiv Detail & Related papers (2024-07-22T12:46:22Z) - GeNAS: Neural Architecture Search with Better Generalization [14.92869716323226]
Recent neural architecture search (NAS) approaches rely on validation loss or accuracy to find the superior network for the target data.
In this paper, we investigate a new neural architecture search measure for excavating architectures with better generalization.
arXiv Detail & Related papers (2023-05-15T12:44:54Z) - NAS-FCOS: Efficient Search for Object Detection Architectures [113.47766862146389]
We propose an efficient method to obtain better object detectors by searching for the feature pyramid network (FPN) and the prediction head of a simple anchor-free object detector.
With a carefully designed search space, search algorithms, and strategies for evaluating network quality, we are able to find top-performing detection architectures within 4 days using 8 V100 GPUs.
arXiv Detail & Related papers (2021-10-24T12:20:04Z) - Weak NAS Predictors Are All You Need [91.11570424233709]
Recent predictor-based NAS approaches attempt to solve the problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor.
We shift the paradigm from finding a complicated predictor that covers the whole architecture space to a set of weaker predictors that progressively move towards the high-performance sub-space.
Our method costs fewer samples to find the top-performance architectures on NAS-Bench-101 and NAS-Bench-201, and it achieves the state-of-the-art ImageNet performance on the NASNet search space.
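As a rough, self-contained illustration of that progressive idea (not the implementation from the paper), the sketch below pairs a toy benchmark with a deliberately weak nearest-neighbour predictor that is refit after each batch of evaluations and used to shrink the candidate pool towards its predicted high-performance region.

```python
import random
import statistics

def benchmark_accuracy(arch):
    """Hypothetical stand-in for querying a tabular benchmark such as NAS-Bench-101/201."""
    rng = random.Random(hash(arch))
    return sum(arch) / len(arch) + rng.gauss(0.0, 0.02)

def fit_weak_predictor(history, k=5):
    """A deliberately weak predictor: score a candidate by the mean measured
    accuracy of its k nearest evaluated neighbours (Hamming distance)."""
    def predict(arch):
        nearest = sorted(history,
                         key=lambda item: sum(a != b for a, b in zip(arch, item[0])))[:k]
        return statistics.mean(acc for _, acc in nearest)
    return predict

def progressive_search(iterations=5, pool_size=200, evals_per_iter=10,
                       length=8, choices=5, seed=0):
    rng = random.Random(seed)
    history = []  # (architecture, measured accuracy) pairs
    pool = [tuple(rng.randrange(choices) for _ in range(length)) for _ in range(pool_size)]
    for _ in range(iterations):
        # Sample and actually evaluate a few architectures from the current pool.
        for arch in rng.sample(pool, min(evals_per_iter, len(pool))):
            history.append((arch, benchmark_accuracy(arch)))
        # Refit the weak predictor and keep only the half of the pool it ranks
        # highest, progressively narrowing towards the high-performance sub-space.
        predict = fit_weak_predictor(history)
        pool = sorted(pool, key=predict, reverse=True)[:max(len(pool) // 2, evals_per_iter)]
    return max(history, key=lambda item: item[1])

if __name__ == "__main__":
    best_arch, best_acc = progressive_search()
    print(best_arch, round(best_acc, 3))
```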
arXiv Detail & Related papers (2021-02-21T01:58:43Z) - Multi-objective Neural Architecture Search with Almost No Training [9.93048700248444]
We propose an effective alternative, dubbed Random-Weight Evaluation (RWE), to rapidly estimate the performance of network architectures.
RWE reduces the computational cost of evaluating an architecture from hours to seconds.
When integrated within an evolutionary multi-objective algorithm, RWE obtains a set of efficient architectures with state-of-the-art performance on CIFAR-10 with less than two hours' searching on a single GPU card.
arXiv Detail & Related papers (2020-11-27T07:39:17Z) - Hierarchical Neural Architecture Search for Deep Stereo Matching [131.94481111956853]
We propose the first end-to-end hierarchical NAS framework for deep stereo matching.
Our framework incorporates task-specific human knowledge into the neural architecture search framework.
It ranks first in accuracy on the KITTI stereo 2012, 2015, and Middlebury benchmarks, as well as first on the SceneFlow dataset.
arXiv Detail & Related papers (2020-10-26T11:57:37Z) - NAS-Bench-NLP: Neural Architecture Search Benchmark for Natural Language Processing [12.02718579660613]
We step outside the computer vision domain by leveraging the language modeling task, which is the core of natural language processing (NLP).
We have provided a search space of recurrent neural networks on text datasets and trained 14k architectures within it.
We have conducted both intrinsic and extrinsic evaluation of the trained models using datasets for semantic relatedness and language understanding evaluation.
arXiv Detail & Related papers (2020-06-12T12:19:06Z) - DC-NAS: Divide-and-Conquer Neural Architecture Search [108.57785531758076]
We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a 75.1% top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
arXiv Detail & Related papers (2020-05-29T09:02:16Z) - Exploring the Loss Landscape in Neural Architecture Search [15.830099254570959]
We show that the simplest hill-climbing algorithm is a powerful baseline for NAS.
We also show that the number of local minima is substantially reduced as the noise decreases.
arXiv Detail & Related papers (2020-05-06T17:09:16Z) - DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS that can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while achieving state-of-the-art accuracy of 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.