One Proxy Device Is Enough for Hardware-Aware Neural Architecture Search
- URL: http://arxiv.org/abs/2111.01203v2
- Date: Wed, 3 Nov 2021 02:11:09 GMT
- Title: One Proxy Device Is Enough for Hardware-Aware Neural Architecture Search
- Authors: Bingqian Lu, Jianyi Yang, Weiwen Jiang, Yiyu Shi, and Shaolei Ren
- Abstract summary: Convolutional neural networks (CNNs) are used in numerous real-world applications such as vision-based autonomous driving and video content analysis.
To run CNN inference on various target devices, hardware-aware neural architecture search (NAS) is crucial.
We propose an efficient proxy adaptation technique to significantly boost the latency monotonicity.
- Score: 21.50120377137633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) are used in numerous real-world
applications such as vision-based autonomous driving and video content
analysis. To run CNN inference on various target devices, hardware-aware neural
architecture search (NAS) is crucial. A key requirement of efficient
hardware-aware NAS is the fast evaluation of inference latencies in order to
rank different architectures. While building a latency predictor for each
target device has been common practice in the state of the art, this is a very
time-consuming process that lacks scalability in the presence of extremely
diverse devices. In this work, we address the scalability challenge by
exploiting latency monotonicity -- the architecture latency rankings on
different devices are often correlated. When strong latency monotonicity
exists, we can re-use architectures searched for one proxy device on new target
devices, without losing optimality. In the absence of strong latency
monotonicity, we propose an efficient proxy adaptation technique to
significantly boost the latency monotonicity. Finally, we validate our approach
and conduct experiments with devices of different platforms on multiple
mainstream search spaces, including MobileNet-V2, MobileNet-V3, NAS-Bench-201,
ProxylessNAS and FBNet. Our results highlight that, by using just one proxy
device, we can find almost the same Pareto-optimal architectures as the
existing per-device NAS, while avoiding the prohibitive cost of building a
latency predictor for each device. GitHub:
https://github.com/Ren-Research/OneProxy
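The latency monotonicity at the heart of the abstract is commonly quantified by the Spearman rank correlation coefficient (SRCC) between per-architecture latencies measured on two devices. Below is a minimal sketch of that check; the synthetic latencies and the 0.9 threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical latencies (ms) of 100 candidate architectures on two devices;
# the target behaves like a noisy, scaled version of the proxy.
latency_proxy = rng.uniform(10.0, 50.0, size=100)
latency_target = 1.7 * latency_proxy + rng.normal(0.0, 2.0, size=100)

srcc, _ = spearmanr(latency_proxy, latency_target)
print(f"SRCC(proxy, target) = {srcc:.3f}")

# Illustrative decision rule (the 0.9 threshold is an assumption, not the paper's):
if srcc >= 0.9:
    print("Strong latency monotonicity: reuse architectures searched on the proxy.")
else:
    print("Weak monotonicity: adapt the proxy's latency predictor first.")
```

A high SRCC means an architecture that is latency-optimal on the proxy remains near-optimal on the target, which is exactly the reuse condition the abstract describes.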
Related papers
- PNAS-MOT: Multi-Modal Object Tracking with Pareto Neural Architecture Search [64.28335667655129]
Multiple object tracking is a critical task in autonomous driving.
As tracking accuracy improves, neural networks become increasingly complex, which makes their practical application in real driving scenarios challenging due to high latency.
In this paper, we explore the use of neural architecture search (NAS) methods to find efficient architectures for tracking, aiming for low real-time latency while maintaining relatively high accuracy.
arXiv Detail & Related papers (2024-03-23T04:18:49Z)
- Multi-objective Differentiable Neural Architecture Search [58.67218773054753]
We propose a novel NAS algorithm that encodes user preferences for the trade-off between performance and hardware metrics.
Our method outperforms existing MOO NAS methods across a broad range of qualitatively different search spaces and datasets.
arXiv Detail & Related papers (2024-02-28T10:09:04Z)
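To make the idea of encoding a user preference over the accuracy/hardware trade-off concrete, here is a generic linear-scalarization sketch; it is a stand-in for illustration only, and the candidate numbers and weighting scheme are assumptions rather than the paper's actual preference encoding.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    error: float       # validation error in [0, 1]
    latency_ms: float  # measured or predicted latency

def scalarized_score(c: Candidate, pref: float, max_latency_ms: float) -> float:
    """pref in [0, 1]: 1 = care only about error, 0 = only about latency."""
    return pref * c.error + (1.0 - pref) * (c.latency_ms / max_latency_ms)

candidates = [
    Candidate("arch-a", error=0.10, latency_ms=40.0),
    Candidate("arch-b", error=0.07, latency_ms=90.0),
    Candidate("arch-c", error=0.15, latency_ms=20.0),
]
max_lat = max(c.latency_ms for c in candidates)

# Sweeping the preference traces out different points on the trade-off curve.
for pref in (0.2, 0.5, 0.8):
    best = min(candidates, key=lambda c: scalarized_score(c, pref, max_lat))
    print(f"pref={pref}: best candidate is {best.name}")
```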
- MAPLE-X: Latency Prediction with Explicit Microprocessor Prior Knowledge [87.41163540910854]
Deep neural network (DNN) latency characterization is a time-consuming process.
We propose MAPLE-X which extends MAPLE by incorporating explicit prior knowledge of hardware devices and DNN architecture latency.
arXiv Detail & Related papers (2022-05-25T11:08:20Z)
- MAPLE-Edge: A Runtime Latency Predictor for Edge Devices [80.01591186546793]
We propose MAPLE-Edge, an edge device-oriented extension of MAPLE, the state-of-the-art latency predictor for general purpose hardware.
Compared to MAPLE, MAPLE-Edge can describe the runtime and target device platform using a much smaller set of CPU performance counters.
We also demonstrate that unlike MAPLE which performs best when trained on a pool of devices sharing a common runtime, MAPLE-Edge can effectively generalize across runtimes.
arXiv Detail & Related papers (2022-04-27T14:00:48Z)
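The general recipe behind counter-based predictors such as MAPLE-Edge can be sketched as plain regression from a small performance-counter feature vector to measured latency. The counter names, the random-forest model, and the synthetic data below are placeholder assumptions, not the actual MAPLE-Edge feature set or training pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Placeholder features per architecture run, e.g. normalized counts of
# [instructions, cache_misses, branch_misses, cycles] from the target runtime.
X = rng.uniform(0.0, 1.0, size=(500, 4))
# Synthetic "measured" latency as an unknown function of the counters.
y = 5.0 + 30.0 * X[:, 0] + 15.0 * X[:, 1] ** 2 + rng.normal(0.0, 0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out architectures: {model.score(X_test, y_test):.3f}")
```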
- U-Boost NAS: Utilization-Boosted Differentiable Neural Architecture Search [50.33956216274694]
Optimizing resource utilization in target platforms is key to achieving high performance during DNN inference.
We propose a novel hardware-aware NAS framework that does not only optimize for task accuracy and inference latency, but also for resource utilization.
We achieve a 2.8-4x speedup for DNN inference compared to prior hardware-aware NAS methods.
arXiv Detail & Related papers (2022-03-23T13:44:15Z)
- FLASH: Fast Neural Architecture Search with Hardware Optimization [7.263481020106725]
Neural architecture search (NAS) is a promising technique to design efficient and high-performance deep neural networks (DNNs).
This paper proposes FLASH, a very fast NAS methodology that co-optimizes the DNN accuracy and performance on a real hardware platform.
arXiv Detail & Related papers (2021-08-01T23:46:48Z)
- HELP: Hardware-Adaptive Efficient Latency Predictor for NAS via Meta-Learning [43.751220068642624]
The Hardware-Adaptive Efficient Latency Predictor (HELP) frames device-specific latency estimation as a meta-learning problem.
We introduce novel hardware embeddings that treat any device as a black-box function mapping architectures to latencies, and meta-learn the hardware-adaptive latency predictor in a device-dependent manner.
We validate the proposed HELP for its latency estimation performance on unseen platforms, on which it achieves high estimation performance with as few as 10 measurement samples, outperforming all relevant baselines.
arXiv Detail & Related papers (2021-06-16T08:36:21Z)
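HELP adapts to an unseen device from as few as 10 measurements. A far simpler analogue of such few-shot adaptation, fitting an affine correction on top of a pretrained proxy predictor, is sketched below; HELP's actual method relies on meta-learned hardware embeddings, and every function and number here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

def proxy_predictor(features: np.ndarray) -> np.ndarray:
    """Stand-in for a pretrained latency predictor (outputs in ms)."""
    return 8.0 + features @ np.array([20.0, 12.0, 5.0])

# Ten architectures measured on the unseen target device (synthetic truth:
# the target behaves like an affine-transformed proxy, plus noise).
feats = rng.uniform(0.0, 1.0, size=(10, 3))
measured = 1.6 * proxy_predictor(feats) + 3.0 + rng.normal(0.0, 0.3, size=10)

# Least-squares fit of: measured ~= a * proxy_prediction + b.
proxy_out = proxy_predictor(feats)
A = np.stack([proxy_out, np.ones_like(proxy_out)], axis=1)
coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
a, b = coef
print(f"Adapted predictor: target ~= {a:.2f} * proxy + {b:.2f}")
```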
- Generalized Latency Performance Estimation for Once-For-All Neural Architecture Search [0.0]
We introduce two generalizability strategies, including fine-tuning with a base model trained on a specific hardware platform and NAS search space.
We provide a family of latency prediction models that achieve over 50% lower RMSE loss as compared to ProxylessNAS.
arXiv Detail & Related papers (2021-01-04T00:48:09Z)
- LC-NAS: Latency Constrained Neural Architecture Search for Point Cloud Networks [73.78551758828294]
LC-NAS is able to find state-of-the-art architectures for point cloud classification with minimal computational cost.
We show how our searched architectures achieve any desired latency with a reasonably low drop in accuracy.
arXiv Detail & Related papers (2020-08-24T10:30:21Z)
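The latency-constrained selection that LC-NAS targets can be illustrated with a toy post-hoc filter: among candidates meeting a hard latency budget, keep the most accurate one. LC-NAS enforces the constraint inside the search itself; the candidate pool below is invented for illustration.

```python
# Toy candidate pool with made-up accuracies and (predicted) latencies.
candidates = [
    {"name": "pcnet-a", "accuracy": 0.91, "latency_ms": 35.0},
    {"name": "pcnet-b", "accuracy": 0.93, "latency_ms": 80.0},
    {"name": "pcnet-c", "accuracy": 0.89, "latency_ms": 22.0},
]

def select_under_budget(pool, latency_budget_ms):
    """Return the most accurate candidate meeting the latency budget, if any."""
    feasible = [c for c in pool if c["latency_ms"] <= latency_budget_ms]
    return max(feasible, key=lambda c: c["accuracy"]) if feasible else None

for budget in (25.0, 50.0, 100.0):
    best = select_under_budget(candidates, budget)
    print(f"budget {budget:5.1f} ms ->", best["name"] if best else "no feasible arch")
```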