FOX-NAS: Fast, On-device and Explainable Neural Architecture Search
- URL: http://arxiv.org/abs/2108.08189v1
- Date: Sat, 14 Aug 2021 16:23:13 GMT
- Title: FOX-NAS: Fast, On-device and Explainable Neural Architecture Search
- Authors: Chia-Hsiang Liu, Yu-Shin Han, Yuan-Yao Sung, Yi Lee, Hung-Yueh Chiang,
Kai-Chiang Wu
- Abstract summary: One-Shot approaches typically require a supernet with weight sharing and predictors that predict the performance of candidate architectures.
Our method is quantization-friendly and can be efficiently deployed to the edge.
FOX-NAS is the 3rd place winner of the 2020 Low-Power Computer Vision Challenge (LPCVC), DSP classification track.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural architecture search can discover neural networks with good
performance, and One-Shot approaches are prevalent. One-Shot approaches
typically require a supernet with weight sharing and predictors that predict
the performance of candidate architectures. However, previous methods take a long
time to generate performance predictors and are thus inefficient. To this end, we
propose FOX-NAS that consists of fast and explainable predictors based on
simulated annealing and multivariate regression. Our method is
quantization-friendly and can be efficiently deployed to the edge. The
experiments on different hardware show that FOX-NAS models outperform some
other popular neural network architectures. For example, FOX-NAS matches
MobileNetV2 and EfficientNet-Lite0 accuracy with 240% and 40% less latency on
the edge CPU. FOX-NAS is the 3rd place winner of the 2020 Low-Power Computer
Vision Challenge (LPCVC), DSP classification track. See all evaluation results
at https://lpcv.ai/competitions/2020. Search code and pre-trained models are
released at https://github.com/great8nctu/FOX-NAS.
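Based only on the abstract above, the search couples two ingredients: cheap multivariate-regression performance predictors and a simulated-annealing walk over the search space. Below is a minimal sketch of that combination, assuming a hypothetical integer encoding of architectures and placeholder accuracy/latency oracles; it is an illustration, not the authors' released code (see the GitHub link above for that).

```python
import math
import random
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical encoding: each architecture is a vector of per-stage choices
# (e.g., kernel size, expansion ratio, depth). FOX-NAS's real features differ.
NUM_STAGES, NUM_CHOICES = 8, 4

def random_arch():
    return np.array([random.randrange(NUM_CHOICES) for _ in range(NUM_STAGES)])

def mutate(arch):
    child = arch.copy()
    child[random.randrange(NUM_STAGES)] = random.randrange(NUM_CHOICES)
    return child

# Placeholder oracles: in practice these would be supernet evaluation and
# on-device latency measurement on a small sample of architectures.
def measure_accuracy(arch):
    return 60.0 + arch.sum() + np.random.randn()

def measure_latency(arch):
    return 5.0 + 0.8 * arch.sum() + 0.1 * np.random.randn()

# 1) Fit fast, explainable predictors (multivariate regression) on a few samples.
samples = [random_arch() for _ in range(200)]
X = np.stack(samples)
acc_pred = LinearRegression().fit(X, [measure_accuracy(a) for a in samples])
lat_pred = LinearRegression().fit(X, [measure_latency(a) for a in samples])

# 2) Simulated annealing over the encoding space, guided only by the predictors.
LAT_LIMIT = 15.0  # ms, arbitrary target for the sketch

def score(arch):
    acc = acc_pred.predict(arch[None])[0]
    lat = lat_pred.predict(arch[None])[0]
    return acc if lat <= LAT_LIMIT else acc - 10.0 * (lat - LAT_LIMIT)  # latency penalty

current = random_arch()
cur_score = score(current)
best, best_score = current, cur_score
temperature = 1.0
for step in range(2000):
    candidate = mutate(current)
    cand_score = score(candidate)
    # Accept improvements always; accept worse moves with a temperature-dependent probability.
    if cand_score > cur_score or random.random() < math.exp((cand_score - cur_score) / temperature):
        current, cur_score = candidate, cand_score
        if cur_score > best_score:
            best, best_score = current, cur_score
    temperature *= 0.999  # cooling schedule

print("best encoding:", best, "predicted score:", round(float(best_score), 2))
```

One appeal of a plain regression predictor is that its coefficients can be inspected directly to see which encoding dimensions drive predicted accuracy or latency, which is presumably what the "explainable" in the title refers to; the exact features and predictor form are described in the paper.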
Related papers
- SalNAS: Efficient Saliency-prediction Neural Architecture Search with self-knowledge distillation [7.625269122161064]
Recent advancements in deep convolutional neural networks have significantly improved the performance of saliency prediction.
We propose a new Neural Architecture Search framework for saliency prediction with two contributions.
By utilizing Self-KD, SalNAS outperforms other state-of-the-art saliency prediction models in most evaluation rubrics.
arXiv Detail & Related papers (2024-07-29T14:48:34Z)
- PRE-NAS: Predictor-assisted Evolutionary Neural Architecture Search [34.06028035262884]
We propose a novel evolution-based NAS strategy, Predictor-assisted Evolutionary NAS (PRE-NAS).
PRE-NAS leverages new evolutionary search strategies and integrates high-fidelity weight inheritance over generations.
Experiments on NAS-Bench-201 and DARTS search spaces show that PRE-NAS can outperform state-of-the-art NAS methods.
arXiv Detail & Related papers (2022-04-27T06:40:39Z)
- Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective [88.39981851247727]
We propose a novel framework called training-free neural architecture search (TE-NAS).
TE-NAS ranks architectures by analyzing the spectrum of the neural tangent kernel (NTK) and the number of linear regions in the input space.
We show that: (1) these two measurements imply the trainability and expressivity of a neural network; (2) they strongly correlate with the network's test accuracy.
arXiv Detail & Related papers (2021-02-23T07:50:44Z)
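A rough, self-contained illustration of the linear-region measurement mentioned in the TE-NAS summary above: expressivity is approximated by counting the distinct ReLU activation patterns reached by random inputs at initialization. The tiny random MLPs and sample counts are arbitrary choices for this sketch, and the NTK-spectrum half of the method is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def activation_pattern(weights, x):
    """Return the on/off pattern of every ReLU unit for a single input."""
    pattern = []
    h = x
    for W in weights:
        h = W @ h
        pattern.append(h > 0)       # which units fire
        h = np.maximum(h, 0.0)      # ReLU
    return tuple(np.concatenate(pattern))

def count_linear_regions(widths, num_samples=2000, input_dim=8):
    """Estimate expressivity of a randomly initialized ReLU MLP as the number
    of distinct activation patterns hit by random inputs."""
    dims = [input_dim] + list(widths)
    weights = [rng.standard_normal((dims[i + 1], dims[i])) / np.sqrt(dims[i])
               for i in range(len(widths))]
    patterns = {activation_pattern(weights, rng.standard_normal(input_dim))
                for _ in range(num_samples)}
    return len(patterns)

# Wider/deeper candidates tend to realize more linear regions at initialization,
# which a training-free search can use as a ranking signal.
for widths in [(8,), (32, 32), (64, 64, 64)]:
    print(widths, "->", count_linear_regions(widths))
```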
- Weak NAS Predictors Are All You Need [91.11570424233709]
Recent predictor-based NAS approaches attempt to solve the problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor.
We shift the paradigm from finding a complicated predictor that covers the whole architecture space to a set of weaker predictors that progressively move towards the high-performance sub-space.
Our method requires fewer samples to find the top-performance architectures on NAS-Bench-101 and NAS-Bench-201, and it achieves state-of-the-art ImageNet performance on the NASNet search space.
arXiv Detail & Related papers (2021-02-21T01:58:43Z)
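A minimal sketch of the progressive weak-predictor idea summarized above: repeatedly fit a simple (weak) predictor on every architecture evaluated so far, then shrink the candidate pool to the sub-space that predictor ranks highest. The toy binary encoding and synthetic accuracy function are stand-ins, not the benchmark data used in the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
DIM = 10  # toy architecture encoding: 10 binary choices

def true_accuracy(x):
    # Synthetic ground truth, unknown to the search; evaluating it stands in
    # for actually training an architecture.
    return float(x @ np.linspace(0.2, 1.0, DIM) + 0.05 * rng.standard_normal())

pool = rng.integers(0, 2, size=(5000, DIM)).astype(float)  # candidate space
seen_x, seen_y = [], []

for iteration in range(4):
    # Evaluate a small batch sampled from the current (shrinking) pool.
    batch = pool[rng.choice(len(pool), 30, replace=False)]
    for x in batch:
        seen_x.append(x)
        seen_y.append(true_accuracy(x))

    # Fit a weak, low-capacity predictor on everything seen so far.
    weak = Ridge(alpha=1.0).fit(np.array(seen_x), np.array(seen_y))

    # Keep only the half of the pool the weak predictor ranks highest, so the
    # next (equally weak) predictor is fit ever closer to the good region.
    preds = weak.predict(pool)
    pool = pool[np.argsort(preds)[-len(pool) // 2:]]

best = pool[np.argmax(weak.predict(pool))]
print("predicted-best encoding:", best.astype(int))
```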
- Zen-NAS: A Zero-Shot NAS for High-Performance Deep Image Recognition [43.97052733871721]
A key component in Neural Architecture Search (NAS) is an accuracy predictor which estimates the accuracy of a queried architecture.
We propose to replace the accuracy predictor with a novel model-complexity index named Zen-score.
Instead of predicting model accuracy, Zen-score directly measures the model complexity of a network without training its parameters.
arXiv Detail & Related papers (2021-02-01T18:53:40Z)
- S3NAS: Fast NPU-aware Neural Architecture Search Methodology [2.607400740040335]
We present a fast NPU-aware NAS methodology, called S3NAS, to find a CNN architecture with higher accuracy than the existing ones.
We are able to find a network in 3 hours using TPUv3, which shows 82.72% top-1 accuracy on ImageNet with 11.66 ms latency.
arXiv Detail & Related papers (2020-09-04T04:45:50Z)
- TF-NAS: Rethinking Three Search Freedoms of Latency-Constrained Differentiable Neural Architecture Search [85.96350089047398]
We propose Three-Freedom NAS (TF-NAS) to achieve both good classification accuracy and precise latency constraint.
Experiments on ImageNet demonstrate the effectiveness of TF-NAS. Particularly, our searched TF-NAS-A obtains 76.9% top-1 accuracy, achieving state-of-the-art results with less latency.
arXiv Detail & Related papers (2020-08-12T13:44:20Z)
- Accuracy Prediction with Non-neural Model for Neural Architecture Search [185.0651567642238]
We study an alternative approach which uses a non-neural model for accuracy prediction.
We leverage a gradient boosting decision tree (GBDT) as the predictor for neural architecture search (NAS).
Experiments on NAS-Bench-101 and ImageNet demonstrate the effectiveness of using GBDT as a predictor for NAS.
arXiv Detail & Related papers (2020-07-09T13:28:49Z)
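A minimal sketch of the GBDT-as-predictor setup described above, assuming a made-up one-hot operator encoding and synthetic accuracies in place of NAS-Bench-101 data; only the fit-surrogate-then-rank step is shown.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Hypothetical encoding: 20 positions, each choosing one of 3 operators (one-hot).
def encode(arch):                       # arch: array of 20 ints in [0, 3)
    onehot = np.zeros((len(arch), 3))
    onehot[np.arange(len(arch)), arch] = 1.0
    return onehot.ravel()

# Synthetic "measured" accuracies stand in for benchmark lookups.
archs = [rng.integers(0, 3, 20) for _ in range(500)]
X = np.stack([encode(a) for a in archs])
y = X @ rng.uniform(0.0, 1.0, X.shape[1]) + 0.1 * rng.standard_normal(len(X))

# GBDT surrogate: cheap to train, no GPU, and it handles discrete encodings well.
predictor = GradientBoostingRegressor(n_estimators=200, max_depth=3).fit(X[:400], y[:400])

# Rank a large pool of unseen candidates with the surrogate; only the
# top-ranked handful would then be trained or measured for real.
pool = [rng.integers(0, 3, 20) for _ in range(10000)]
scores = predictor.predict(np.stack([encode(a) for a in pool]))
top = np.argsort(scores)[-5:][::-1]
print("indices of the 5 most promising candidates:", top)
```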
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architectures and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 comprises a family of state-of-the-art compact neural networks that outperform both automatically and manually designed competitors.
arXiv Detail & Related papers (2020-06-03T05:20:21Z)
- FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions [70.59851564292828]
Differentiable Neural Architecture Search (DNAS) has demonstrated great success in designing state-of-the-art, efficient neural networks.
We propose a memory and computationally efficient DNAS variant: DMaskingNAS.
This algorithm expands the search space by up to $10^{14}\times$ over conventional DNAS.
arXiv Detail & Related papers (2020-04-12T08:52:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.