Lightweight Neural Architecture Search for Temporal Convolutional
Networks at the Edge
- URL: http://arxiv.org/abs/2301.10281v1
- Date: Tue, 24 Jan 2023 19:47:40 GMT
- Authors: Matteo Risso, Alessio Burrello, Francesco Conti, Lorenzo Lamberti,
Yukai Chen, Luca Benini, Enrico Macii, Massimo Poncino, Daniele Jahier
Pagliari
- Abstract summary: This work focuses in particular on Temporal Convolutional Networks (TCNs), a convolutional model for time-series processing.
We propose the first NAS tool that explicitly targets the optimization of the most peculiar architectural parameters of TCNs.
We test the proposed NAS on four real-world, edge-relevant tasks, involving audio and bio-signals.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Architecture Search (NAS) is quickly becoming the go-to approach to
optimize the structure of Deep Learning (DL) models for complex tasks such as
Image Classification or Object Detection. However, many other relevant
applications of DL, especially at the edge, are based on time-series processing
and require models with unique features, for which NAS is less explored. This
work focuses in particular on Temporal Convolutional Networks (TCNs), a
convolutional model for time-series processing that has recently emerged as a
promising alternative to more complex recurrent architectures. We propose the
first NAS tool that explicitly targets the optimization of the most peculiar
architectural parameters of TCNs, namely dilation, receptive-field and number
of features in each layer. The proposed approach searches for networks that
offer good trade-offs between accuracy and number of parameters/operations,
enabling an efficient deployment on embedded platforms. We test the proposed
NAS on four real-world, edge-relevant tasks, involving audio and bio-signals.
Results show that, starting from a single seed network, our method obtains a
rich collection of Pareto-optimal architectures, among which are models with
the same accuracy as the seed and 15.9x-152x fewer parameters. Compared to
three state-of-the-art NAS tools, ProxylessNAS,
MorphNet and FBNetV2, our method explores a larger search space for TCNs (up to
10^12x) and obtains superior solutions, while requiring low GPU memory and
search time. We deploy our NAS outputs on two distinct edge devices, the
multicore GreenWaves Technology GAP8 IoT processor and the single-core
STMicroelectronics STM32H7 microcontroller. With respect to the
state-of-the-art hand-tuned models, we reduce latency and energy by up to 5.5x
and 3.8x on the two targets, respectively, without any accuracy loss.
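
To make the searched parameters concrete, the following is a minimal
illustrative sketch of a typical causal dilated TCN building block in PyTorch
(an assumption for illustration, not the paper's actual tool), showing how
dilation, kernel size and channel count determine the layer and its receptive
field:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TCNBlock(nn.Module):
    # One causal dilated 1D convolution: the constructor arguments are
    # exactly the per-layer knobs such a NAS would explore.
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # past-only padding
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):
        # x has shape (batch, channels, time); pad only the past so the
        # output at time t depends on inputs up to t and no further.
        return torch.relu(self.conv(F.pad(x, (self.pad, 0))))

# Stacking blocks with growing dilation enlarges the receptive field:
# rf = 1 + sum((kernel_size_i - 1) * dilation_i) over the layers.
net = nn.Sequential(TCNBlock(1, 8, 3, 1),
                    TCNBlock(8, 8, 3, 2),
                    TCNBlock(8, 8, 3, 4))
rf = 1 + sum((3 - 1) * d for d in (1, 2, 4))  # 15 input samples
print(rf, net(torch.randn(1, 1, 128)).shape)  # 15 torch.Size([1, 8, 128])

Varying the dilations, kernel sizes and channel counts of such blocks spans
the accuracy-versus-parameters trade-off space that the search navigates.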
Related papers
- A Pairwise Comparison Relation-assisted Multi-objective Evolutionary Neural Architecture Search Method with Multi-population Mechanism [58.855741970337675]
Neural architecture search (NAS) enables researchers to automatically explore vast search spaces and find efficient neural networks.
However, NAS suffers from a key bottleneck: numerous architectures must be evaluated during the search process.
We propose SMEM-NAS, a pairwise comparison relation-assisted multi-objective evolutionary algorithm based on a multi-population mechanism.
arXiv Detail & Related papers (2024-07-22T12:46:22Z) - DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit
CNNs [53.82853297675979]
1-bit convolutional neural networks (CNNs) with binary weights and activations show their potential for resource-limited embedded devices.
One natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS.
We introduce Discrepant Child-Parent Neural Architecture Search (DCP-NAS) to efficiently search 1-bit CNNs.
arXiv Detail & Related papers (2023-06-27T11:28:29Z) - DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models [56.584561770857306]
We propose a novel conditional Neural Architecture Generation (NAG) framework based on diffusion models, dubbed DiffusionNAG.
Specifically, we consider the neural architectures as directed graphs and propose a graph diffusion model for generating them.
We validate the effectiveness of DiffusionNAG through extensive experiments in two predictor-based NAS scenarios: Transferable NAS and Bayesian Optimization (BO)-based NAS.
When integrated into a BO-based algorithm, DiffusionNAG outperforms existing BO-based NAS approaches, particularly in the large MobileNetV3 search space on the ImageNet 1K dataset.
arXiv Detail & Related papers (2023-05-26T13:58:18Z) - HQNAS: Auto CNN deployment framework for joint quantization and
architecture search [30.45926484863791]
We propose a novel neural network design framework called Hardware-aware Quantized Neural Architecture Search (HQNAS).
It takes only 4 GPU hours to discover an outstanding NN policy on CIFAR-10.
It also takes only 10% of the GPU time to generate a comparable model on ImageNet.
arXiv Detail & Related papers (2022-10-16T08:32:18Z) - $\alpha$NAS: Neural Architecture Search using Property Guided Synthesis [1.2746672439030722]
We develop techniques that enable efficient neural architecture search (NAS) in a significantly larger design space.
Our key insights are as follows: (1) the abstract search space is significantly smaller than the original search space, and (2) architectures with similar program properties also have similar performance.
We implement our approach, $\alpha$NAS, within an evolutionary framework, where the mutations are guided by the program properties.
arXiv Detail & Related papers (2022-05-08T21:48:03Z) - U-Boost NAS: Utilization-Boosted Differentiable Neural Architecture
Search [50.33956216274694]
Optimizing resource utilization on target platforms is key to achieving high performance during DNN inference.
We propose a novel hardware-aware NAS framework that optimizes not only for task accuracy and inference latency, but also for resource utilization.
We achieve a 2.8-4x speedup for DNN inference compared to prior hardware-aware NAS methods.
arXiv Detail & Related papers (2022-03-23T13:44:15Z) - TND-NAS: Towards Non-differentiable Objectives in Progressive
Differentiable NAS Framework [6.895590095853327]
Differentiable architecture search has gradually become the mainstream research topic in the field of Neural Architecture Search (NAS).
Recent differentiable NAS work also aims at further improving search performance and reducing GPU-memory consumption.
We propose TND-NAS, which combines the high efficiency of differentiable NAS with the ability of multi-objective NAS to handle non-differentiable metrics.
arXiv Detail & Related papers (2021-11-06T14:19:36Z) - FLASH: Fast Neural Architecture Search with Hardware Optimization [7.263481020106725]
Neural architecture search (NAS) is a promising technique for designing efficient, high-performance deep neural networks (DNNs).
This paper proposes FLASH, a very fast NAS methodology that co-optimizes the DNN accuracy and performance on a real hardware platform.
arXiv Detail & Related papers (2021-08-01T23:46:48Z) - MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach in order to obtain a reduced search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z) - DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution
Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures under given constraints.
arXiv Detail & Related papers (2019-05-28T06:35:52Z)