BenchENAS: A Benchmarking Platform for Evolutionary Neural Architecture Search
- URL: http://arxiv.org/abs/2108.03856v2
- Date: Sat, 14 Aug 2021 14:49:57 GMT
- Title: BenchENAS: A Benchmarking Platform for Evolutionary Neural Architecture Search
- Authors: Xiangning Xie, Yuqiao Liu, Yanan Sun, Gary G. Yen, Bing Xue and Mengjie Zhang
- Abstract summary: Evolutionary computation based NAS (ENAS) methods have recently gained much attention.
The issues of fair comparisons and efficient evaluations have hindered the development of ENAS.
This paper develops a platform named BenchENAS to address these issues.
- Score: 10.925662100634378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural architecture search (NAS), which automatically designs the
architectures of deep neural networks, has achieved breakthrough success over
many applications in the past few years. Among different classes of NAS
methods, evolutionary computation based NAS (ENAS) methods have recently gained
much attention. Unfortunately, the issues of fair comparisons and efficient
evaluations have hindered the development of ENAS. Existing benchmark
architecture datasets designed for fair comparisons provide only the datasets,
not the ENAS algorithms or a platform to run them. Existing efficient
evaluation methods are either unsuitable for population-based ENAS algorithms
or too complex to use. This paper develops a platform named
BenchENAS to address these issues. BenchENAS aims to achieve fair comparisons
by running different algorithms in the same environment and with the same
settings. To achieve efficient evaluation in a common lab environment,
BenchENAS designs a parallel component and a cache component with high
maintainability. Furthermore, BenchENAS is easy to install, highly
configurable, and modular, which makes it both easy to use and easy to extend.
Using this platform, the paper conducts comparison experiments on eight ENAS
algorithms with high GPU utilization. The experiments validate that the fair
comparison issue does exist and that BenchENAS can alleviate it. A website has
been built to promote BenchENAS at https://benchenas.com, where interested
researchers can obtain the source code and documentation of BenchENAS for
free.
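To make the cache and parallel ideas concrete, here is a minimal Python sketch of how a population evaluator might combine them; the names, data layout, and Pool-based parallelism are illustrative assumptions, not BenchENAS's actual implementation:

    # Hypothetical sketch of the cache + parallel components described above;
    # everything here (names, data layout, multiprocessing setup) is assumed.
    import hashlib
    import json
    from multiprocessing import Pool

    FITNESS_CACHE = {}  # architecture hash -> fitness (e.g., validation accuracy)

    def arch_key(arch):
        """Stable hash of a JSON-serializable architecture encoding."""
        return hashlib.sha256(json.dumps(arch, sort_keys=True).encode()).hexdigest()

    def train_and_eval(job):
        """Train one candidate on its assigned GPU and return (key, fitness)."""
        arch, gpu_id = job
        # ... build the network from `arch`, train it on device f"cuda:{gpu_id}" ...
        return arch_key(arch), 0.0  # placeholder fitness

    def evaluate_population(population, num_gpus=4):
        """Evaluate one generation, re-training only unseen architectures."""
        todo = {arch_key(a): a for a in population
                if arch_key(a) not in FITNESS_CACHE}
        jobs = [(arch, i % num_gpus) for i, arch in enumerate(todo.values())]
        with Pool(processes=num_gpus) as pool:
            for key, fitness in pool.map(train_and_eval, jobs):
                FITNESS_CACHE[key] = fitness
        return [FITNESS_CACHE[arch_key(a)] for a in population]

Offspring in an evolutionary population often repeat architectures from earlier generations, which is why such a cache can save substantial GPU time; a production platform must handle far more (scheduling, failures, logging), and this sketch only shows why caching pays off in population-based search.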
Related papers
- Delta-NAS: Difference of Architecture Encoding for Predictor-based Evolutionary Neural Architecture Search [5.1331676121360985]
We craft an algorithm capable of performing fine-grained NAS at low cost.
We propose projecting the problem to a lower-dimensional space by predicting the difference in accuracy between a pair of similar networks (a toy sketch follows this entry).
arXiv Detail & Related papers (2024-11-21T02:43:32Z)
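As a rough illustration of the Delta-NAS idea above (hypothetical code, not the paper's actual predictor), a small regressor can be trained on the difference of two architecture encodings to predict their accuracy gap; because similar networks differ in few positions, the input is effectively lower-dimensional:

    # Toy pairwise difference predictor (illustrative; not Delta-NAS itself).
    import torch
    import torch.nn as nn

    class DiffPredictor(nn.Module):
        def __init__(self, enc_dim, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(enc_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, enc_a, enc_b):
            # The regressor sees only the (sparse) difference of the encodings.
            return self.net(enc_a - enc_b).squeeze(-1)

    # Training pairs would look like (enc_a, enc_b, acc_a - acc_b),
    # e.g., drawn from a tabular NAS benchmark.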
- Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets [55.2118691522524]
Distillation-aware Neural Architecture Search (DaNAS) aims to search for an optimal student architecture.
We propose DaSS (Distillation-aware Student Search), a distillation-aware meta accuracy prediction model that can predict a given architecture's final performance on a dataset.
arXiv Detail & Related papers (2023-05-26T14:00:35Z)
- Are Neural Architecture Search Benchmarks Well Designed? A Deeper Look Into Operation Importance [5.065947993017157]
We conduct an empirical analysis of the widely used NAS-Bench-101, NAS-Bench-201 and TransNAS-Bench-101 benchmarks.
We found that only a subset of the operation pool is required to generate architectures close to the upper bound of the performance range.
We consistently found convolution layers to have the highest impact on the architecture's performance.
arXiv Detail & Related papers (2023-03-29T18:03:28Z)
- When NAS Meets Trees: An Efficient Algorithm for Neural Architecture Search [117.89827740405694]
A key challenge in neural architecture search (NAS) is how to explore the huge search space wisely.
We propose a new NAS method called TNAS (NAS with trees), which improves search efficiency by exploring only a small number of architectures.
TNAS finds the globally optimal architecture on CIFAR-10 in NAS-Bench-201, with a test accuracy of 94.37%, in four GPU hours.
arXiv Detail & Related papers (2022-04-11T07:34:21Z)
- BaLeNAS: Differentiable Architecture Search via the Bayesian Learning Rule [95.56873042777316]
Differentiable Architecture Search (DARTS) has received massive attention in recent years, mainly because it significantly reduces the computational cost.
This paper formulates neural architecture search as a distribution learning problem by relaxing the architecture weights into Gaussian distributions (sketched after this entry).
We demonstrate how the differentiable NAS benefits from Bayesian principles, enhancing exploration and improving stability.
arXiv Detail & Related papers (2021-11-25T18:13:42Z)
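A minimal sketch of the Gaussian relaxation in the BaLeNAS entry above, assuming a DARTS-style mixture over candidate operations; this shows only the reparameterized sampling, not the paper's Bayesian learning rule:

    # Architecture weights as learnable Gaussians, sampled with the
    # reparameterization trick (illustrative; not BaLeNAS's exact method).
    import torch
    import torch.nn as nn

    class GaussianArchWeights(nn.Module):
        def __init__(self, num_edges, num_ops):
            super().__init__()
            self.mu = nn.Parameter(torch.zeros(num_edges, num_ops))
            self.log_var = nn.Parameter(torch.zeros(num_edges, num_ops))

        def sample(self):
            # alpha = mu + sigma * eps stays differentiable w.r.t. mu and sigma.
            eps = torch.randn_like(self.mu)
            alpha = self.mu + torch.exp(0.5 * self.log_var) * eps
            # Softmax over candidate operations per edge, as in DARTS.
            return torch.softmax(alpha, dim=-1)

Sampling injects noise into the architecture weights, which is one way a Bayesian treatment can enhance exploration, as the entry claims.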
- NAS-Bench-360: Benchmarking Diverse Tasks for Neural Architecture Search [18.9676056830197]
Most existing neural architecture search (NAS) benchmarks and algorithms prioritize performance on well-studied tasks.
We present NAS-Bench-360, a benchmark suite for evaluating state-of-the-art NAS methods for convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-10-12T01:13:18Z)
- Efficient Neural Architecture Search for End-to-end Speech Recognition via Straight-Through Gradients [17.501966450686282]
We develop an efficient Neural Architecture Search (NAS) method via Straight-Through (ST) gradients, called ST-NAS.
Experiments on the widely benchmarked 80-hour WSJ and 300-hour Switchboard datasets show that ST-NAS-induced architectures significantly outperform the human-designed architecture on both datasets.
Strengths of ST-NAS, such as architecture transferability and low memory and time cost, are also reported (a generic straight-through sketch follows this entry).
arXiv Detail & Related papers (2020-11-11T09:18:58Z)
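The straight-through trick named in the ST-NAS entry above can be sketched generically (this is the textbook estimator, not necessarily the paper's exact formulation): the forward pass takes a hard, discrete choice of operation, while gradients flow as if the soft distribution had been used:

    # Generic straight-through estimator over operation choices
    # (illustrative; ST-NAS's precise estimator may differ).
    import torch

    def straight_through_choice(logits):
        """Forward: one-hot argmax. Backward: gradient of the softmax."""
        probs = torch.softmax(logits, dim=-1)
        index = probs.argmax(dim=-1, keepdim=True)
        hard = torch.zeros_like(probs).scatter_(-1, index, 1.0)
        # (hard - probs).detach() + probs equals `hard` in the forward pass,
        # but its gradient is that of `probs`.
        return (hard - probs).detach() + probs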
- Binarized Neural Architecture Search for Efficient Object Recognition [120.23378346337311]
Binarized neural architecture search (BNAS) produces extremely compressed models to reduce the huge computational cost on embedded devices for edge computing.
An accuracy of 96.53% (vs. 97.22%) is achieved on the CIFAR-10 dataset, but with a significantly compressed model and a 40% faster search than the state-of-the-art PC-DARTS.
arXiv Detail & Related papers (2020-09-08T15:51:23Z)
- DSNAS: Direct Neural Architecture Search without Parameter Retraining [112.02966105995641]
We propose a new task-specific, end-to-end problem definition for NAS.
We propose DSNAS, an efficient differentiable NAS framework that simultaneously optimizes architecture and parameters with a low-biased Monte Carlo estimate.
DSNAS successfully discovers networks with comparable accuracy (74.4%) on ImageNet in 420 GPU hours, reducing the total time by more than 34%.
arXiv Detail & Related papers (2020-02-21T04:41:47Z)
- NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search [55.12928953187342]
We propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information.
NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithms.
We provide additional diagnostic information, such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms (a toy lookup sketch follows this entry).
arXiv Detail & Related papers (2020-01-02T05:28:26Z)
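What makes NAS-Bench-201-style benchmarks reproducible is that evaluating a candidate reduces to a table lookup; a toy sketch of that idea follows (the data layout and number are invented for illustration; see the official NAS-Bench-201 release for the real API):

    # Toy tabular-benchmark lookup (invented layout and value; the real
    # NAS-Bench-201 API is richer and ships precomputed training statistics).
    TABLE = {
        # canonical architecture string -> metric name -> value
        "|nor_conv_3x3~0|+|skip_connect~0|nor_conv_1x1~1|": {
            "cifar10-test-acc": 93.70,  # made-up number
        },
    }

    def query(arch, metric):
        """Evaluating a candidate costs a dict lookup, not GPU hours."""
        return TABLE[arch][metric]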