A Comprehensive Survey on Hardware-Aware Neural Architecture Search
- URL: http://arxiv.org/abs/2101.09336v1
- Date: Fri, 22 Jan 2021 21:13:46 GMT
- Title: A Comprehensive Survey on Hardware-Aware Neural Architecture Search
- Authors: Hadjer Benmeziane, Kaoutar El Maghraoui, Hamza Ouarnoughi, Smail Niar,
Martin Wistuba, Naigang Wang
- Abstract summary: Neural Architecture Search (NAS) methods have grown rapidly in popularity and have been studied extensively over the past few years.
Even so, applying NAS to real-world problems still poses significant challenges and is not widely practical.
One increasingly popular solution is to use multi-objective optimization algorithms in the NAS search strategy, taking into account execution latency, energy consumption, memory footprint, etc.
This kind of NAS, called hardware-aware NAS (HW-NAS), makes searching for the most efficient architecture more complicated and opens several questions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Architecture Search (NAS) methods have been growing in
popularity. These techniques have been fundamental to automating and speeding
up the time-consuming and error-prone process of synthesizing novel Deep
Learning (DL) architectures. NAS has been extensively studied in the past few
years. Arguably, its most significant impact has been in image classification
and object detection tasks, where state-of-the-art results have been obtained.
Despite the significant success achieved to date, applying NAS to real-world
problems still poses significant challenges and is not widely practical. In
general, the synthesized Convolutional Neural Network (CNN) architectures are
too complex to be deployed on resource-limited platforms, such as IoT, mobile,
and embedded systems. One solution growing in popularity is to use
multi-objective optimization algorithms in the NAS search strategy, taking
into account execution latency, energy consumption, memory footprint, etc.
This kind of NAS, called hardware-aware NAS (HW-NAS), makes searching for the
most efficient architecture more complicated and opens several questions.
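To make the multi-objective formulation concrete, here is a minimal sketch of a scalarized hardware-aware objective in the spirit of MnasNet's latency-weighted reward; the target budget, exponent, and candidate numbers are illustrative assumptions, not values from the survey.

```python
# Minimal sketch of a scalarized hardware-aware NAS objective
# (MnasNet-style). All names and constants are illustrative.

def hw_aware_reward(accuracy: float, latency_ms: float,
                    target_ms: float = 80.0, beta: float = -0.07) -> float:
    """Trade accuracy against measured latency.

    reward = accuracy * (latency / target) ** beta
    beta < 0 penalizes candidates slower than the target budget.
    """
    return accuracy * (latency_ms / target_ms) ** beta

# Pick the best of several (accuracy, latency_ms) candidates.
candidates = [(0.762, 95.0), (0.751, 62.0), (0.744, 41.0)]
best = max(candidates, key=lambda c: hw_aware_reward(*c))
print(best)  # the candidate with the best accuracy/latency trade-off
```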
In this survey, we provide a detailed review of existing HW-NAS research and
categorize it according to four key dimensions: the search space, the search
strategy, the acceleration technique, and the hardware cost estimation
strategies. We further discuss the challenges and limitations of existing
approaches and potential future directions. This is the first survey paper
focusing on hardware-aware NAS. We hope it serves as a valuable reference for
the various techniques and algorithms discussed and paves the way for future
research towards hardware-aware NAS.
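Among the hardware cost estimation strategies such surveys cover, a per-operator latency lookup table (LUT) is a common choice. The sketch below shows that pattern with invented table entries and layer names; real tables are measured once on the target device.

```python
# Hypothetical per-operator latency lookup table (values in ms,
# measured once on the target device; numbers here are invented).
LATENCY_LUT = {
    ("conv3x3", 112, 32): 1.8,
    ("conv3x3", 56, 64): 1.2,
    ("mbconv6_5x5", 28, 128): 2.4,
    ("avgpool", 7, 256): 0.1,
}

def estimate_latency(network):
    """Estimate end-to-end latency as the sum of per-layer entries.

    `network` is a list of (op, input_resolution, channels) tuples.
    The additive model ignores operator fusion and pipelining, which
    is the usual trade-off of LUT-based estimators.
    """
    return sum(LATENCY_LUT[layer] for layer in network)

net = [("conv3x3", 112, 32), ("mbconv6_5x5", 28, 128), ("avgpool", 7, 256)]
print(f"estimated latency: {estimate_latency(net):.1f} ms")
```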
Related papers
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
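As a rough illustration of block-wise supervision, the sketch below rates a candidate block by how well it reproduces a teacher block's features; the modules, shapes, and loss are toy assumptions, not the DNA models themselves.

```python
# Hedged sketch of block-wise distillation rating (the general idea
# behind DNA-style supervision, not the paper's exact procedure).
import torch
import torch.nn as nn

teacher_block = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
candidate_block = nn.Sequential(nn.Conv2d(16, 16, 5, padding=2), nn.ReLU())

x = torch.randn(4, 16, 32, 32)
with torch.no_grad():
    target = teacher_block(x)           # supervision for this block only

# Lower distillation loss = better rating for this candidate block;
# ranking candidates per block avoids training each full network.
loss = nn.functional.mse_loss(candidate_block(x), target)
print(f"block rating (lower is better): {loss.item():.4f}")
```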
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- A Survey on Surrogate-assisted Efficient Neural Architecture Search [18.914781707473296]
Neural architecture search (NAS) has recently become increasingly popular in the deep learning community.
However, NAS is still laborious and time-consuming, because a large number of performance estimations are required during the search.
Improving the efficiency of these estimations is therefore essential to making NAS practical.
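A minimal sketch of the surrogate-assisted pattern, assuming synthetic architecture encodings and accuracies: fit a cheap regressor on a few fully trained architectures, then rank a large candidate pool without training it.

```python
# Generic surrogate-assisted NAS sketch: learn accuracy from a few
# trained architectures, then rank the rest without training them.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Binary encodings of 20 already-trained architectures
# (synthetic stand-ins for real search-space encodings).
X_train = rng.integers(0, 2, size=(20, 12)).astype(float)
y_train = rng.uniform(0.70, 0.78, size=20)   # their measured accuracies

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_train)

# Score 1000 unseen candidates in milliseconds instead of GPU-days,
# then send only the top few to full training for verification.
X_pool = rng.integers(0, 2, size=(1000, 12)).astype(float)
top_ids = np.argsort(surrogate.predict(X_pool))[::-1][:5]
print("candidates to actually train:", top_ids)
```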
arXiv Detail & Related papers (2022-06-03T12:02:20Z)
- UnrealNAS: Can We Search Neural Architectures with Unreal Data? [84.78460976605425]
Neural architecture search (NAS) has shown great success in the automatic design of deep neural networks (DNNs).
Previous work has analyzed the necessity of having ground-truth labels in NAS and inspired broad interest.
We take a further step to question whether real data is necessary for NAS to be effective.
arXiv Detail & Related papers (2022-05-04T16:30:26Z)
- Meta-Learning of NAS for Few-shot Learning in Medical Image Applications [10.666687733540668]
Neural Architecture Search (NAS) has motivated various applications in medical imaging.
However, NAS requires large annotated datasets, considerable compute resources, and pre-defined tasks.
We introduce various NAS approaches in medical imaging, covering applications such as classification, segmentation, detection, and reconstruction.
arXiv Detail & Related papers (2022-03-16T21:21:51Z)
- NAS-Bench-360: Benchmarking Diverse Tasks for Neural Architecture Search [18.9676056830197]
Most existing neural architecture search (NAS) benchmarks and algorithms prioritize performance on well-studied tasks.
We present NAS-Bench-360, a benchmark suite for evaluating state-of-the-art NAS methods for convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-10-12T01:13:18Z)
- TransNAS-Bench-101: Improving Transferability and Generalizability of Cross-Task Neural Architecture Search [98.22779489340869]
We propose TransNAS-Bench-101, a benchmark dataset containing network performance across seven vision tasks.
We explore two fundamentally different types of search space: cell-level search space and macro-level search space.
With 7,352 backbones evaluated on seven tasks, 51,464 trained models with detailed training information are provided.
arXiv Detail & Related papers (2021-05-25T12:15:21Z)
- AdvantageNAS: Efficient Neural Architecture Search with Credit Assignment [23.988393741948485]
We propose a novel search strategy for one-shot and sparse propagation NAS, namely AdvantageNAS.
AdvantageNAS is a gradient-based approach that improves the search efficiency by introducing credit assignment in gradient estimation for architecture updates.
Experiments on the NAS-Bench-201 benchmark and the PTB dataset show that AdvantageNAS discovers architectures with higher performance under a limited time budget.
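AdvantageNAS's exact estimator is not reproduced here, but the following sketch shows generic advantage-style credit assignment for architecture updates (REINFORCE with a running baseline over one edge's operation choice); all rewards and constants are illustrative.

```python
# Hedged sketch of advantage-style credit assignment for architecture
# updates (generic REINFORCE-with-baseline, not AdvantageNAS itself).
import numpy as np

rng = np.random.default_rng(0)
N_OPS = 4
logits = np.zeros(N_OPS)           # architecture parameters for one edge
baseline = 0.0

def reward(op):                    # stand-in for validation accuracy
    return [0.60, 0.72, 0.68, 0.55][op] + rng.normal(0, 0.02)

for _ in range(200):
    probs = np.exp(logits) / np.exp(logits).sum()
    op = rng.choice(N_OPS, p=probs)
    r = reward(op)
    advantage = r - baseline       # credit relative to a running baseline
    baseline += 0.05 * (r - baseline)
    grad = -probs
    grad[op] += 1.0                # d log p(op) / d logits
    logits += 0.5 * advantage * grad

probs = np.exp(logits) / np.exp(logits).sum()
print("learned op preference:", np.round(probs, 3))  # op 1 should dominate
```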
arXiv Detail & Related papers (2020-12-11T05:45:03Z)
- Revisiting Neural Architecture Search [0.0]
We propose a novel approach that searches for a complete neural network with little human effort, a step closer towards AutoML nirvana.
Our method starts from a complete graph mapped to a neural network and searches for the connections and operations by balancing the exploration and exploitation of the search space.
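A hedged sketch of that idea: start from all edges of a complete graph and toggle connections, mostly keeping improvements (exploitation) while occasionally accepting worse candidates (exploration). The scoring function and hidden target wiring are toy stand-ins, not the paper's procedure.

```python
# Toy search over the edges of a complete graph with an
# exploration/exploitation balance. Each bit enables one edge.
import itertools
import random

random.seed(0)
EDGES = list(itertools.combinations(range(4), 2))  # complete 4-node graph
TARGET = [1, 0, 1, 1, 0, 0]   # hidden "best" wiring (toy ground truth)

def score(mask):
    """Stand-in for trained accuracy: agreement with the hidden target."""
    agree = sum(int(a == b) for a, b in zip(mask, TARGET))
    return agree / len(TARGET) + random.uniform(0, 0.02)

cur = [1] * len(EDGES)        # start from the complete graph
cur_s = score(cur)
best, best_s = cur[:], cur_s
for _ in range(200):
    cand = cur[:]
    cand[random.randrange(len(EDGES))] ^= 1   # toggle one connection
    s = score(cand)
    if s > cur_s or random.random() < 0.1:    # exploit, sometimes explore
        cur, cur_s = cand, s
    if cur_s > best_s:
        best, best_s = cur[:], cur_s

print("kept edges:", [e for e, m in zip(EDGES, best) if m])
```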
arXiv Detail & Related papers (2020-10-12T13:57:30Z)
- Binarized Neural Architecture Search for Efficient Object Recognition [120.23378346337311]
Binarized neural architecture search (BNAS) produces extremely compressed models to reduce huge computational cost on embedded devices for edge computing.
An accuracy of 96.53% vs. 97.22% is achieved on the CIFAR-10 dataset, but with a significantly compressed model and a 40% faster search than the state-of-the-art PC-DARTS.
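As background on what binarization means here, this is a minimal sketch of sign-based weight binarization with a straight-through estimator, a standard building block for BNAS-style methods rather than the paper's exact formulation.

```python
# Minimal sketch of weight binarization: sign() in the forward pass,
# straight-through estimator (STE) in the backward pass.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)            # weights become {-1, +1}

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # STE: pass gradients through where |w| <= 1, clip elsewhere.
        return grad_out * (w.abs() <= 1).float()

w = torch.randn(8, requires_grad=True)
loss = BinarizeSTE.apply(w).sum()
loss.backward()
print(w.grad)   # 1 where |w| <= 1, else 0
```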
arXiv Detail & Related papers (2020-09-08T15:51:23Z)
- A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions [48.76705090826339]
Neural Architecture Search (NAS) is a revolutionary direction, and the related research work is rich and complicated.
We provide a new perspective, beginning with an overview of the characteristics of the earliest NAS algorithms and a summary of their problems.
We then conduct a detailed and comprehensive analysis, comparison, and summary of subsequent works.
arXiv Detail & Related papers (2020-06-01T13:08:03Z)
- NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search [55.12928953187342]
We propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information.
NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost all up-to-date NAS algorithms.
We also provide additional diagnostic information, such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms.
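To illustrate how such a tabular benchmark is consumed, the mock below treats architecture evaluation as a dictionary lookup over precomputed metrics; the architecture strings and numbers are invented, and the real benchmark ships its own API and data files.

```python
# Mock of how a tabular NAS benchmark like NAS-Bench-201 is used:
# architectures map to precomputed metrics, so "evaluating" a
# candidate is a dictionary lookup. All entries below are invented.
TABLE = {
    "|nor_conv_3x3~0|+|skip_connect~0|nor_conv_1x1~1|": {
        "cifar10_acc": 93.8, "train_loss": 0.21, "params_MB": 1.1},
    "|avg_pool_3x3~0|+|nor_conv_3x3~0|skip_connect~1|": {
        "cifar10_acc": 91.2, "train_loss": 0.35, "params_MB": 0.6},
}

def evaluate(arch: str) -> dict:
    """Zero-cost 'training': return the precomputed diagnostics."""
    return TABLE[arch]

best = max(TABLE, key=lambda a: TABLE[a]["cifar10_acc"])
print(best, evaluate(best))
```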
arXiv Detail & Related papers (2020-01-02T05:28:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.