TinyOL: TinyML with Online-Learning on Microcontrollers
- URL: http://arxiv.org/abs/2103.08295v2
- Date: Tue, 16 Mar 2021 08:47:33 GMT
- Title: TinyOL: TinyML with Online-Learning on Microcontrollers
- Authors: Haoyu Ren, Darko Anicic and Thomas Runkler
- Abstract summary: Tiny machine learning (TinyML) is committed to democratizing deep learning for all-pervasive microcontrollers (MCUs).
Current TinyML solutions are based on batch/offline settings and support only the neural network's inference on MCUs.
We propose a novel system called TinyOL (TinyML with Online-Learning), which enables incremental on-device training on streaming data.
- Score: 7.172671995820974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tiny machine learning (TinyML) is a fast-growing research area committed to
democratizing deep learning for all-pervasive microcontrollers (MCUs).
Challenged by the constraints on power, memory, and computation, TinyML has
achieved significant advancement in the last few years. However, the current
TinyML solutions are based on batch/offline settings and support only the
neural network's inference on MCUs. The neural network is first trained using a
large amount of pre-collected data on a powerful machine and then flashed to
MCUs. The result is a static model that is hard to adapt to new data and
cannot be adjusted for different scenarios, which limits the flexibility of the
Internet of Things (IoT). To address these problems, we propose a novel system
called TinyOL (TinyML with Online-Learning), which enables incremental
on-device training on streaming data. TinyOL is based on the concept of online
learning and is suitable for constrained IoT devices. We evaluate TinyOL
under supervised and unsupervised setups using an autoencoder neural network.
Finally, we report the performance of the proposed solution and show its
effectiveness and feasibility.
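The core idea of the abstract, updating model parameters one sample at a time as data streams in rather than retraining offline in batches, can be illustrated with a minimal sketch. The toy model below (a single-parameter constant predictor trained by per-sample SGD, standing in for the paper's autoencoder) is an assumption for illustration only, not the authors' implementation; anomalies are flagged when a new sample's squared error is large, mirroring the unsupervised setup mentioned in the abstract.

```python
# Minimal sketch of online (incremental) learning on a data stream.
# Illustrative only: a toy stand-in for TinyOL's on-device training, not its code.
import random

random.seed(0)

mu = 0.0   # single learnable parameter: the model predicts mu for every sample
lr = 0.05  # learning rate for the per-sample SGD update

def online_step(mu, x, lr):
    """One SGD step on the squared error (mu - x)^2 for a single streamed sample."""
    grad = 2.0 * (mu - x)
    mu = mu - lr * grad
    return mu

# Stream of "normal" sensor readings around 1.0. The model adapts sample by
# sample and never stores the stream, which is what makes online learning
# feasible within the RAM budget of an MCU.
for _ in range(500):
    x = 1.0 + random.uniform(-0.05, 0.05)
    mu = online_step(mu, x, lr)

# Unsupervised anomaly check: a reading far from the learned regime scores high.
normal_score = (1.0 - mu) ** 2
anomaly_score = (5.0 - mu) ** 2
```

The per-sample update needs O(1) memory, which is the property that makes online learning attractive on constrained devices; an actual deployment would incrementally update the weights of a neural network's final layers rather than a single scalar.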
Related papers
- Training on the Fly: On-device Self-supervised Learning aboard Nano-drones within 20 mW [52.280742520586756]
Miniaturized cyber-physical systems (CPSes) powered by tiny machine learning (TinyML), such as nano-drones, are becoming an increasingly attractive technology.
Simple electronics make these CPSes inexpensive, but strongly limit the computational, memory, and sensing resources available on board.
We present a novel on-device fine-tuning approach that relies only on the limited ultra-low power resources available aboard nano-drones.
arXiv Detail & Related papers (2024-08-06T13:11:36Z)
- TinySV: Speaker Verification in TinyML with On-device Learning [2.356162747014486]
This paper introduces a new type of adaptive TinyML solution that can be used in tasks such as the presented Tiny Speaker Verification (TinySV).
The proposed TinySV solution relies on a two-layer hierarchical TinyML solution comprising Keyword Spotting and Adaptive Speaker Verification module.
We evaluate the effectiveness and efficiency of the proposed TinySV solution on a dataset collected expressly for the task and test it on a real-world IoT device.
arXiv Detail & Related papers (2024-06-03T17:27:40Z)
- Tiny Machine Learning: Progress and Futures [24.76599651516217]
Tiny Machine Learning (TinyML) is a new frontier of machine learning.
TinyML is challenging due to hardware constraints.
We will first discuss the definition, challenges, and applications of TinyML.
arXiv Detail & Related papers (2024-03-28T00:34:56Z)
- TinyMetaFed: Efficient Federated Meta-Learning for TinyML [8.940139322528829]
We introduce TinyMetaFed, a model-agnostic meta-learning framework suitable for TinyML.
TinyMetaFed facilitates collaborative training of a neural network.
It offers communication savings and privacy protection through partial local reconstruction and Top-P% selective communication.
arXiv Detail & Related papers (2023-07-13T15:39:26Z)
- TinyReptile: TinyML with Federated Meta-Learning [9.618821589196624]
We propose TinyReptile, a simple but efficient algorithm inspired by meta-learning and online learning.
We demonstrate TinyReptile on Raspberry Pi 4 and Cortex-M4 MCU with only 256-KB RAM.
arXiv Detail & Related papers (2023-04-11T13:11:10Z)
- A review of TinyML [0.0]
The TinyML concept for embedded machine learning attempts to push such diversity from usual high-end approaches to low-end applications.
TinyML is a rapidly expanding interdisciplinary topic at the convergence of machine learning, software, and hardware.
This paper explores how TinyML can benefit a few specific industrial fields, its obstacles, and its future scope.
arXiv Detail & Related papers (2022-11-05T06:02:08Z)
- A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays [66.62377866022221]
Latent Replay-based Continual Learning (CL) techniques enable online, serverless adaptation in principle.
We introduce a HW/SW platform for end-to-end CL based on a 10-core FP32-enabled parallel ultra-low-power processor.
Our results show that by combining these techniques, continual learning can be achieved in practice using less than 64MB of memory.
arXiv Detail & Related papers (2021-10-20T11:01:23Z)
- Distributed Learning in Wireless Networks: Recent Progress and Future Challenges [170.35951727508225]
Next-generation wireless networks will enable many machine learning (ML) tools and applications to analyze various types of data collected by edge devices.
Distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges.
This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks.
arXiv Detail & Related papers (2021-04-05T20:57:56Z)
- MCUNet: Tiny Deep Learning on IoT Devices [62.752899523628066]
We propose a framework that jointly designs the efficient neural architecture (TinyNAS) and the lightweight inference engine (TinyEngine)
TinyNAS adopts a two-stage neural architecture search approach that first optimizes the search space to fit the resource constraints and then specializes the network architecture within the optimized search space.
TinyEngine adapts the memory scheduling according to the overall network topology rather than layer-wise optimization, reducing the memory usage by 4.8x.
arXiv Detail & Related papers (2020-07-20T17:59:01Z)
- Wireless Communications for Collaborative Federated Learning [160.82696473996566]
Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
Google's seminal FL algorithm requires all devices to be directly connected to a central controller.
This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
arXiv Detail & Related papers (2020-06-03T20:00:02Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address open problems in this area, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.