MicroNAS: Memory and Latency Constrained Hardware-Aware Neural
Architecture Search for Time Series Classification on Microcontrollers
- URL: http://arxiv.org/abs/2310.18384v2
- Date: Sun, 4 Feb 2024 09:54:06 GMT
- Title: MicroNAS: Memory and Latency Constrained Hardware-Aware Neural
Architecture Search for Time Series Classification on Microcontrollers
- Authors: Tobias King, Yexu Zhou, Tobias Röddiger, Michael Beigl
- Abstract summary: We adapt differentiable neural architecture search (DNAS) to time-series classification on resource-constrained microcontrollers (MCUs).
We introduce MicroNAS, a domain-specific HW-NAS system that integrates DNAS, latency lookup tables, dynamic convolutions, and a novel search space designed specifically for time-series classification on MCUs.
Our studies on different MCUs and standard benchmark datasets demonstrate that MicroNAS finds MCU-tailored architectures whose performance (F1-score) comes close to state-of-the-art desktop models.
- Score: 3.0723404270319685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing domain-specific neural networks is a time-consuming,
error-prone, and expensive task. Neural Architecture Search (NAS) exists to
simplify domain-specific model development, but there is a gap in the
literature for time series classification on microcontrollers. Therefore, we
adapt the concept of differentiable neural architecture search (DNAS) to
solve the time-series classification problem on resource-constrained
microcontrollers (MCUs). We introduce MicroNAS, a domain-specific HW-NAS
system that integrates DNAS, latency lookup tables, dynamic convolutions, and
a novel search space specifically designed for time-series classification on
MCUs. The resulting system is hardware-aware and can generate neural network
architectures that satisfy user-defined limits on execution latency and peak
memory consumption. Our extensive studies on different MCUs and standard
benchmark datasets demonstrate that MicroNAS finds MCU-tailored architectures
whose performance (F1-score) comes close to that of state-of-the-art desktop
models. We also show that our approach adheres to memory and latency
constraints far better than domain-independent NAS baselines such as DARTS (a
minimal sketch of such a constrained objective follows below).
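To make the constrained-search idea concrete, here is a minimal PyTorch sketch, assuming a softmax-relaxed supernet edge and a per-operator latency lookup table profiled on the target MCU. The operator names, LUT values, budget, and hinge-penalty form are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Hypothetical per-operator latencies (ms), profiled once on the target MCU;
# real lookup tables are per-layer and hardware-specific.
LATENCY_LUT = {"conv3": 1.8, "conv5": 3.1, "avgpool": 0.4, "skip": 0.0}
OPS = list(LATENCY_LUT)
LUT = torch.tensor([LATENCY_LUT[op] for op in OPS])

# Architecture weights for one searchable edge of the supernet.
alpha = torch.zeros(len(OPS), requires_grad=True)

def expected_latency(alpha: torch.Tensor) -> torch.Tensor:
    """Differentiable latency estimate: softmax-weighted LUT entries."""
    return (F.softmax(alpha, dim=0) * LUT).sum()

def search_loss(task_loss: torch.Tensor, alpha: torch.Tensor,
                budget_ms: float = 5.0, lam: float = 0.1) -> torch.Tensor:
    """Task loss plus a hinge penalty once the latency estimate exceeds the
    user-defined budget; gradients flow into alpha through the softmax."""
    return task_loss + lam * torch.relu(expected_latency(alpha) - budget_ms)
```

After search, only the strongest operator per edge (argmax over alpha) is kept, so the deployed network pays that single operator's measured latency.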
Related papers
- MONAS: Efficient Zero-Shot Neural Architecture Search for MCUs [5.321424657585365]
MONAS is a novel zero-shot NAS framework specifically designed for microcontrollers (MCUs) in edge computing.
MONAS achieves up to a 1104x improvement in search efficiency over previous work targeting MCUs.
MONAS can discover CNN models with over 3.23x faster inference on MCUs while maintaining similar accuracy compared to more general NAS approaches.
arXiv Detail & Related papers (2024-08-26T10:24:45Z)
- MicroNAS: Zero-Shot Neural Architecture Search for MCUs [5.813274149871141]
Neural Architecture Search (NAS) effectively discovers new Convolutional Neural Network (CNN) architectures.
We propose MicroNAS, a hardware-aware zero-shot NAS framework for microcontroller units (MCUs) in edge computing.
Compared to previous works, MicroNAS achieves up to a 1104x improvement in search efficiency and discovers models with over 3.23x faster MCU inference (a training-free scoring sketch follows this entry).
arXiv Detail & Related papers (2024-01-17T06:17:42Z)
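Both zero-shot entries above rank candidate networks without any training. As a hedged illustration, the synflow-style saliency below scores a model from a single forward/backward pass; it is a generic stand-in for whatever proxy MONAS/MicroNAS actually compute, and the sensor-style input shape is assumed.

```python
import torch
import torch.nn as nn

def zero_shot_score(model: nn.Module, sample: torch.Tensor) -> float:
    """Rank a candidate without training: one forward/backward pass, then
    sum |theta * dL/dtheta| over all weights (synflow-style saliency)."""
    model(sample).sum().backward()
    return sum((p * p.grad).abs().sum().item()
               for p in model.parameters() if p.grad is not None)

# Usage: score many sampled candidates and keep the best; no gradient steps.
candidate = nn.Sequential(
    nn.Conv1d(6, 16, kernel_size=3),  # 6 sensor channels -> 16 feature maps
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 62, 8),            # 64-sample window shrinks to 62 after conv
)
x = torch.randn(1, 6, 64)             # hypothetical IMU window (illustrative)
print(zero_shot_score(candidate, x))
```

Scoring is cheap enough to sift large candidate pools, which is where zero-shot NAS's search-efficiency advantage comes from.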
- Search-time Efficient Device Constraints-Aware Neural Architecture Search [6.527454079441765]
Deep learning models for tasks such as computer vision and natural language processing can be computationally expensive and memory-intensive.
We automate the construction of task-specific deep learning architectures optimized for device constraints through Neural Architecture Search (NAS).
We present DCA-NAS, a principled method for fast neural architecture search that incorporates edge-device constraints (a constraint-check sketch follows this entry).
arXiv Detail & Related papers (2023-07-10T09:52:28Z)
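Device-constraint-aware search prunes candidates that cannot fit the target before accuracy is ever evaluated. The sketch below is one assumed way to do that for MCUs: estimate peak activation RAM under layer-by-layer execution and reject over-budget candidates. The shapes, int8 assumption, and 256 KiB budget are hypothetical, not DCA-NAS's actual test.

```python
from math import prod

def peak_ram_bytes(act_shapes: list[tuple[int, ...]],
                   bytes_per_elem: int = 1) -> int:
    """Rough peak-RAM estimate for layer-by-layer MCU inference: at each step
    the input and output activation buffers are live simultaneously (int8)."""
    return max((prod(a) + prod(b)) * bytes_per_elem
               for a, b in zip(act_shapes, act_shapes[1:]))

# Usage: reject candidates whose activations exceed a 256 KiB SRAM budget.
candidate_acts = [(6, 64), (16, 62), (16, 31), (32, 29)]  # illustrative shapes
print(peak_ram_bytes(candidate_acts) <= 256 * 1024)
```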
- Generalization Properties of NAS under Activation and Skip Connection Search [66.8386847112332]
We study the generalization properties of Neural Architecture Search (NAS) under a unifying framework.
We derive the lower (and upper) bounds of the minimum eigenvalue of the Neural Tangent Kernel (NTK) under the (in)finite-width regime.
We show how the derived results can guide NAS to select top-performing architectures, even without training (an empirical-NTK sketch follows this entry).
arXiv Detail & Related papers (2022-09-15T12:11:41Z)
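The NTK result above can be probed numerically. The sketch below computes the smallest eigenvalue of the empirical NTK on a tiny batch, the quantity whose bounds the paper derives; the model, batch size, and scalar-output reduction are assumptions for illustration.

```python
import torch
import torch.nn as nn

def ntk_min_eig(model: nn.Module, xs: torch.Tensor) -> float:
    """Smallest eigenvalue of the empirical NTK Theta = J J^T, where row i of
    J is the gradient of the (scalar-reduced) output on sample i w.r.t. all
    parameters; larger values suggest better trainability."""
    rows = []
    for x in xs:
        y = model(x.unsqueeze(0)).sum()
        grads = torch.autograd.grad(y, list(model.parameters()))
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    J = torch.stack(rows)
    return torch.linalg.eigvalsh(J @ J.T)[0].item()  # ascending eigenvalues

# Usage: rank candidate networks on the same probe batch, with no training.
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
print(ntk_min_eig(net, torch.randn(8, 10)))
```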
- MAPLE-X: Latency Prediction with Explicit Microprocessor Prior Knowledge [87.41163540910854]
Deep neural network (DNN) latency characterization is a time-consuming process.
We propose MAPLE-X, which extends MAPLE by incorporating explicit prior knowledge of hardware devices and DNN architecture latency (a toy latency regressor follows this entry).
arXiv Detail & Related papers (2022-05-25T11:08:20Z)
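Learned latency predictors in the MAPLE line regress measured latencies on architecture descriptors, with MAPLE-X additionally encoding hardware prior knowledge. The toy regressor below illustrates only the general recipe; the feature set, network shape, and training loop are assumptions, not MAPLE-X's design.

```python
import torch
import torch.nn as nn

# Assumed descriptor per architecture: [MACs, params, depth, peak RAM] plus
# hardware priors [clock MHz, SRAM kB]; real feature sets are richer.
predictor = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def fit(features: torch.Tensor, latencies_ms: torch.Tensor, steps: int = 500):
    """Regress measured latencies on descriptors; a few real measurements on
    a new device can then calibrate the predictor instead of re-profiling."""
    for _ in range(steps):
        optimizer.zero_grad()
        pred = predictor(features).squeeze(-1)
        loss = nn.functional.mse_loss(pred, latencies_ms)
        loss.backward()
        optimizer.step()

# Usage with made-up data: 64 profiled (descriptor, latency) pairs on one MCU.
fit(torch.rand(64, 6), torch.rand(64) * 20.0)
```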
- Towards Less Constrained Macro-Neural Architecture Search [2.685668802278155]
Neural Architecture Search (NAS) networks achieve state-of-the-art performance in a variety of tasks.
Most NAS methods rely heavily on human-defined assumptions that constrain the search.
We present experiments showing that LCMNAS generates state-of-the-art architectures from scratch with minimal GPU computation.
arXiv Detail & Related papers (2022-03-10T17:53:03Z)
- HyperSegNAS: Bridging One-Shot Neural Architecture Search with 3D Medical Image Segmentation using HyperNet [51.60655410423093]
We introduce HyperSegNAS to enable one-shot Neural Architecture Search (NAS) for medical image segmentation.
We show that HyperSegNAS yields better performing and more intuitive architectures compared to the previous state-of-the-art (SOTA) segmentation networks.
Our method is evaluated on public datasets from the Medical Segmentation Decathlon (MSD) challenge and achieves SOTA performance.
arXiv Detail & Related papers (2021-12-20T16:21:09Z)
- Neural Architecture Search of SPD Manifold Networks [79.45110063435617]
We propose a new neural architecture search (NAS) problem of Symmetric Positive Definite (SPD) manifold networks.
We first introduce a geometrically rich and diverse SPD neural architecture search space for an efficient SPD cell design.
We exploit a differentiable NAS algorithm on our relaxed continuous search space for SPD neural architecture search (a generic continuous-relaxation sketch follows this entry).
arXiv Detail & Related papers (2020-10-27T18:08:57Z)
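For the SPD entry above, "a differentiable NAS algorithm on a relaxed continuous search space" follows the DARTS pattern. Below is a generic Euclidean mixed operation showing what the relaxation means; the paper's version operates on SPD-manifold-valued features, which this sketch does not attempt.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Continuous relaxation of a discrete op choice: the edge output is the
    softmax-weighted sum of all candidate ops, so the architecture weights
    alpha receive gradients alongside the ordinary model weights."""
    def __init__(self, ops: list[nn.Module]):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Usage: one searchable edge choosing among two convs and an identity;
# all candidates preserve the (batch, 8, length) shape.
edge = MixedOp([nn.Conv1d(8, 8, 3, padding=1),
                nn.Conv1d(8, 8, 5, padding=2),
                nn.Identity()])
out = edge(torch.randn(2, 8, 32))
```

After search, the highest-alpha op is kept and the rest are discarded, recovering a discrete architecture.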
- MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers [18.662026553041937]
Machine learning on resource-constrained microcontrollers (MCUs) promises to drastically expand the application space of the Internet of Things (IoT).
TinyML presents severe technical challenges, as deep neural network inference demands a large compute and memory budget.
Neural architecture search (NAS) promises to help design accurate ML models that meet the tight MCU memory, latency, and energy constraints.
arXiv Detail & Related papers (2020-10-21T19:39:39Z)
- MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach to reduce the search cost.
We achieve state-of-the-art results in terms of the accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)
- LC-NAS: Latency Constrained Neural Architecture Search for Point Cloud Networks [73.78551758828294]
LC-NAS is able to find state-of-the-art architectures for point cloud classification with minimal computational cost.
We show how our searched architectures achieve any desired latency with a reasonably low drop in accuracy.
arXiv Detail & Related papers (2020-08-24T10:30:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.