MEMA Runtime Framework: Minimizing External Memory Accesses for TinyML
on Microcontrollers
- URL: http://arxiv.org/abs/2304.05544v1
- Date: Wed, 12 Apr 2023 00:27:11 GMT
- Title: MEMA Runtime Framework: Minimizing External Memory Accesses for TinyML
on Microcontrollers
- Authors: Andrew Sabot, Vikas Natesh, H.T. Kung, Wei-Te Ting
- Abstract summary: We present the MEMA framework for efficient inference runtimes that minimize external memory accesses for matrix multiplication on TinyML systems.
We compare the performance of runtimes derived from MEMA to existing state-of-the-art libraries on ARM-based TinyML systems.
- Score: 3.1823074562424756
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present the MEMA framework for the easy and quick derivation of efficient
inference runtimes that minimize external memory accesses for matrix
multiplication on TinyML systems. The framework accounts for hardware resource
constraints and problem sizes in analytically determining optimized schedules
and kernels that minimize memory accesses. MEMA provides a solution to a
well-known problem in current practice: optimal schedules tend to
be found only through a time-consuming, heuristic search of a large
scheduling space. We compare the performance of runtimes derived from MEMA to
existing state-of-the-art libraries on ARM-based TinyML systems. For example,
for neural network benchmarks on the ARM Cortex-M4, we achieve up to a 1.8x
speedup and 44% energy reduction over CMSIS-NN.
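The abstract's core idea, minimizing external memory accesses for matrix multiplication under hardware resource constraints, can be illustrated with a loop-tiled kernel whose tile size is derived analytically from an assumed on-chip SRAM budget. The C sketch below is a generic, hypothetical illustration, not MEMA's actual kernel or schedule derivation; a real MCU runtime would additionally stage each tile into on-chip buffers (e.g., via DMA or memcpy) before the inner loops, and the 32 KB budget is an assumption.

/* Hypothetical sketch of tiling for minimal external memory traffic:
 * choose the largest square tile T such that an int8 A-tile, an int8 B-tile,
 * and an int32 accumulator tile fit in the assumed on-chip SRAM budget,
 * then reuse each tile across the whole inner loop nest. */
#include <stdint.h>
#include <stddef.h>

#define SRAM_BUDGET_BYTES 32768  /* assumed on-chip budget for operands */

/* Working set of a T x T tile step: 2*T*T bytes (int8 A, B) + 4*T*T bytes (int32 C). */
static size_t pick_tile(void) {
    size_t t = 1;
    while (6 * (t + 1) * (t + 1) <= SRAM_BUDGET_BYTES) t++;
    return t;
}

/* Tiled C[M][N] += A[M][K] * B[K][N]; int8 inputs, int32 accumulation.
 * C must be initialized (e.g., zeroed) by the caller. */
void matmul_tiled(const int8_t *A, const int8_t *B, int32_t *C,
                  size_t M, size_t K, size_t N) {
    const size_t T = pick_tile();
    for (size_t i0 = 0; i0 < M; i0 += T)
        for (size_t j0 = 0; j0 < N; j0 += T)
            for (size_t k0 = 0; k0 < K; k0 += T) {
                size_t iMax = (i0 + T < M) ? i0 + T : M;
                size_t jMax = (j0 + T < N) ? j0 + T : N;
                size_t kMax = (k0 + T < K) ? k0 + T : K;
                for (size_t i = i0; i < iMax; i++)
                    for (size_t j = j0; j < jMax; j++) {
                        int32_t acc = C[i * N + j];
                        for (size_t k = k0; k < kMax; k++)
                            acc += (int32_t)A[i * K + k] * (int32_t)B[k * N + j];
                        C[i * N + j] = acc;
                    }
            }
}

In this form, every element of A and B brought on chip is reused across a full tile of the output before it is evicted, which is the general mechanism by which tiled schedules cut external memory accesses.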
Related papers
- LiVOS: Light Video Object Segmentation with Gated Linear Matching [116.58237547253935]
LiVOS is a lightweight memory network that employs linear matching via linear attention.
For longer and higher-resolution videos, it matches STM-based methods while using 53% less GPU memory, and it supports 4096p inference on a 32 GB consumer-grade GPU.
arXiv Detail & Related papers (2024-11-05T05:36:17Z) - MicroFlow: An Efficient Rust-Based Inference Engine for TinyML [1.8902208722501446]
MicroFlow is an open-source framework for the deployment of Neural Networks (NNs) on embedded systems using the Rust programming language.
It uses less Flash and RAM than other state-of-the-art solutions for deploying NN reference models.
It also achieves faster inference than existing engines on medium-sized NNs, and comparable performance on larger ones.
arXiv Detail & Related papers (2024-09-28T18:34:27Z) - Accelerating TinyML Inference on Microcontrollers through Approximate Kernels [3.566060656925169]
In this work, we combine approximate computing and software kernel design to accelerate the inference of approximate CNN models on microcontrollers.
Our evaluation on an STM32-Nucleo board and two popular CNNs trained on the CIFAR-10 dataset shows that, compared to state-of-the-art exact inference, our solutions achieve an average latency reduction of 21%.
arXiv Detail & Related papers (2024-09-25T11:10:33Z) - MARLIN: Mixed-Precision Auto-Regressive Parallel Inference on Large Language Models [58.3342517278868]
This paper describes the design of Mixed-precision AutoRegressive LINear kernels.
It shows that batch sizes of up to 16-32 can be supported with close to the maximum (4x) quantization speedup.
MARLIN accomplishes this via a combination of techniques, such as asynchronous memory access, complex task scheduling, and pipelining.
arXiv Detail & Related papers (2024-08-21T16:10:41Z) - Fast Matrix Multiplications for Lookup Table-Quantized LLMs [58.11584672945781]
FLUTE is a flexible lookup table engine for LUT-quantized LLMs.
At a batch size of 32 and a quantization group size of 128, the FLUTE kernel can be 2-4x faster than existing GEMM kernels (a scalar sketch of the lookup-table idea appears after this list).
arXiv Detail & Related papers (2024-07-15T17:55:42Z) - MLonMCU: TinyML Benchmarking with Fast Retargeting [1.4319942396517]
It is non-trivial to choose the optimal combination of frameworks and targets for a given application.
This paper proposes a tool called MLonMCU and demonstrates it by effortlessly benchmarking the state-of-the-art TinyML frameworks TFLite for Microcontrollers and TVM.
arXiv Detail & Related papers (2023-06-15T08:44:35Z) - Pex: Memory-efficient Microcontroller Deep Learning through Partial
Execution [11.336229510791481]
We discuss a novel execution paradigm for microcontroller deep learning.
It modifies the execution of neural networks to avoid materialising full buffers in memory.
This is achieved by exploiting the properties of operators, which can consume/produce a fraction of their input/output at a time (a minimal illustration of this idea appears after this list).
arXiv Detail & Related papers (2022-11-30T18:47:30Z) - MinUn: Accurate ML Inference on Microcontrollers [2.2638536653874195]
Running machine learning inference on tiny devices, known as TinyML, is an emerging research area.
We describe MinUn, the first TinyML framework that holistically addresses the accuracy and memory constraints of such devices to generate efficient code for ARM microcontrollers.
arXiv Detail & Related papers (2022-10-29T10:16:12Z) - NumS: Scalable Array Programming for the Cloud [82.827921577004]
We present NumS, an array programming library that optimizes NumPy-like expressions on task-based distributed systems.
This is achieved through a novel scheduler called Load Simulated Hierarchical Scheduling (LSHS).
We show that LSHS enhances performance on Ray by decreasing network load by a factor of 2x, requiring 4x less memory, and reducing execution time by 10x on the logistic regression problem.
arXiv Detail & Related papers (2022-06-28T20:13:40Z) - MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning [72.80896338009579]
We find that the memory bottleneck is due to the imbalanced memory distribution in convolutional neural network (CNN) designs.
We propose a generic patch-by-patch inference scheduling, which significantly cuts down the peak memory.
We automate the process with neural architecture search to jointly optimize the neural architecture and inference scheduling, leading to MCUNetV2.
arXiv Detail & Related papers (2021-10-28T17:58:45Z) - A TinyML Platform for On-Device Continual Learning with Quantized Latent
Replays [66.62377866022221]
Latent Replay-based Continual Learning (CL) techniques enable online, serverless adaptation in principle.
We introduce a HW/SW platform for end-to-end CL based on a 10-core FP32-enabled parallel ultra-low-power processor.
Our results show that by combining these techniques, continual learning can be achieved in practice using less than 64MB of memory.
arXiv Detail & Related papers (2021-10-20T11:01:23Z)
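The FLUTE entry above targets GPU kernels, but the underlying lookup-table (LUT) dequantization idea can be stated in scalar code. The C sketch below is a hypothetical, scalar illustration of LUT-based dequantization for 4-bit weights with per-group tables; it is not FLUTE's fused kernel, and the group size, packing order, and table layout are assumptions.

/* Hypothetical scalar sketch of lookup-table dequantization:
 * each 4-bit code indexes a 16-entry per-group table of float values, so the
 * weight matrix is stored as packed codes plus one small table per group. */
#include <stdint.h>
#include <stddef.h>

#define GROUP_SIZE 128   /* weights sharing one 16-entry table (assumed) */

/* codes: n/2 bytes of packed 4-bit indices (two codes per byte)
 * luts:  (n / GROUP_SIZE) tables of 16 floats each, stored back to back
 * out:   n dequantized weights */
void lut_dequantize(const uint8_t *codes, const float *luts,
                    float *out, size_t n) {
    for (size_t i = 0; i < n; i++) {
        uint8_t byte = codes[i / 2];
        uint8_t idx  = (i & 1) ? (uint8_t)(byte >> 4) : (uint8_t)(byte & 0x0F);
        const float *table = luts + (i / GROUP_SIZE) * 16;
        out[i] = table[idx];
    }
}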
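The Pex and MCUNetV2 entries above rest on a shared observation: when operators can consume and produce a slice of their tensors at a time, the full intermediate feature map never has to be materialized, which lowers peak RAM. The C sketch below is a minimal, hypothetical illustration of that idea for two element-wise operators processed strip by strip; the sizes and operators are illustrative only, and convolutions would additionally need overlapping (halo) rows between strips.

/* Hypothetical illustration of partial / patch-by-patch execution:
 * instead of running op1 over the whole input and materializing a full
 * H*W intermediate buffer, process STRIP_ROWS rows at a time so that only
 * a small strip buffer is ever live. */
#include <stdint.h>
#include <stddef.h>

#define W 96            /* assumed feature-map width  */
#define H 96            /* assumed feature-map height */
#define STRIP_ROWS 4    /* rows processed per step    */

static void op1_rows(const int8_t *in, int8_t *out, size_t n) {
    for (size_t i = 0; i < n; i++) out[i] = (in[i] > 0) ? in[i] : 0;  /* e.g., ReLU */
}

static void op2_rows(const int8_t *in, int8_t *out, size_t n) {
    for (size_t i = 0; i < n; i++) out[i] = (int8_t)(in[i] >> 1);     /* e.g., scaling */
}

/* Peak extra memory: one STRIP_ROWS*W strip instead of a full H*W buffer. */
void run_partial(const int8_t *input, int8_t *output) {
    int8_t strip[STRIP_ROWS * W];
    for (size_t r = 0; r < H; r += STRIP_ROWS) {
        size_t rows = (r + STRIP_ROWS <= H) ? STRIP_ROWS : H - r;
        op1_rows(input + r * W, strip, rows * W);
        op2_rows(strip, output + r * W, rows * W);
    }
}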