TinyReptile: TinyML with Federated Meta-Learning
- URL: http://arxiv.org/abs/2304.05201v1
- Date: Tue, 11 Apr 2023 13:11:10 GMT
- Title: TinyReptile: TinyML with Federated Meta-Learning
- Authors: Haoyu Ren, Darko Anicic, Thomas A. Runkler
- Abstract summary: We propose TinyReptile, a simple but efficient algorithm inspired by meta-learning and online learning.
We demonstrate TinyReptile on a Raspberry Pi 4 and a Cortex-M4 MCU with only 256 KB of RAM.
- Score: 9.618821589196624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tiny machine learning (TinyML) is a rapidly growing field aiming to
democratize machine learning (ML) for resource-constrained microcontrollers
(MCUs). Given the pervasiveness of these tiny devices, it is natural to ask
whether TinyML applications can benefit from aggregating their knowledge.
Federated learning (FL) enables decentralized agents to jointly learn a global
model without sharing sensitive local data. However, a common global model may
not work for all devices due to the complexity of the actual deployment
environment and the heterogeneity of the data available on each device. In
addition, the deployment of TinyML hardware has significant computational and
communication constraints, which traditional ML fails to address. Considering
these challenges, we propose TinyReptile, a simple but efficient algorithm
inspired by meta-learning and online learning, to collaboratively learn, across
tiny devices, a solid initialization for a neural network (NN) that can be
quickly adapted to a new device with respect to its data. We demonstrate
TinyReptile on a Raspberry Pi 4 and a Cortex-M4 MCU with only 256 KB of RAM.
Evaluations on various TinyML use cases confirm that, compared with baseline
algorithms, TinyReptile reduces resource consumption and training time by a
factor of at least two while achieving comparable performance.
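Since the abstract describes TinyReptile as a Reptile-style meta-learning algorithm, the minimal sketch below illustrates the kind of update it builds on: each device adapts a shared initialization with a few SGD steps on its local data, and the initialization is then nudged toward the adapted weights. The linear model, function names, and serial client loop are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a Reptile-style federated meta-update (assumed form;
# not the TinyReptile source). The model is a flat weight vector for a
# linear regressor so the example stays self-contained.
import numpy as np

def local_sgd(weights, data, labels, lr=0.01, steps=5):
    """Adapt the shared initialization with a few SGD steps on one device."""
    w = weights.copy()
    for _ in range(steps):
        preds = data @ w                                 # forward pass
        grad = data.T @ (preds - labels) / len(labels)   # MSE gradient
        w -= lr * grad
    return w

def reptile_round(init, clients, outer_lr=0.1):
    """One communication round: visit devices one at a time (serially)."""
    for data, labels in clients:
        adapted = local_sgd(init, data, labels)
        # Reptile outer update: move the shared init toward the adapted
        # weights rather than averaging gradients across clients.
        init = init + outer_lr * (adapted - init)
    return init

# Toy usage: two synthetic clients, ten communication rounds.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(2)]
init = np.zeros(4)
for _ in range(10):
    init = reptile_round(init, clients)
```

Processing one device at a time keeps only a single adapted model in memory, which is plausibly what lets such a scheme fit MCU-class RAM budgets.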
Related papers
- TinySV: Speaker Verification in TinyML with On-device Learning [2.356162747014486]
This paper introduces a new type of adaptive TinyML solution for tasks such as the presented Tiny Speaker Verification (TinySV) task.
The proposed TinySV solution relies on a two-layer hierarchical TinyML architecture comprising a Keyword Spotting module and an Adaptive Speaker Verification module (the two-stage gating is sketched after this entry).
We evaluate the effectiveness and efficiency of the proposed TinySV solution on a dataset collected expressly for the task and test it on a real-world IoT device.
arXiv Detail & Related papers (2024-06-03T17:27:40Z)
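As a hedged illustration of that two-layer hierarchy, the sketch below gates a heavier speaker-verification model behind a cheap keyword-spotting check; the callables and threshold are placeholders, not the paper's implementation.

```python
# Illustrative two-stage gating in the spirit of TinySV: a lightweight
# keyword-spotting (KWS) model runs continuously, and the heavier
# speaker-verification (SV) model runs only when a keyword is detected.
# kws_model and sv_model are placeholder callables returning scores.
def tiny_sv_pipeline(audio_frame, kws_model, sv_model, kws_threshold=0.5):
    if kws_model(audio_frame) < kws_threshold:  # stage 1: keyword spotting
        return None                             # no keyword: stay in low-power mode
    return sv_model(audio_frame)                # stage 2: verify the speaker
```

- On-device Online Learning and Semantic Management of TinyML Systems [8.183732025472766]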
This study aims to bridge the gap between prototyping single TinyML models and developing reliable TinyML systems in production.
We propose online learning to enable training on constrained devices, adapting local models towards the latest field conditions.
We present semantic management for the joint management of models and devices at scale.
arXiv Detail & Related papers (2024-05-13T10:03:34Z)
- Distributed Inference and Fine-tuning of Large Language Models Over The Internet [91.00270820533272]
Large language models (LLMs) are useful in many NLP tasks and become more capable with size.
These models require high-end hardware, making them inaccessible to most researchers.
We develop fault-tolerant inference algorithms and load-balancing protocols that automatically assign devices to maximize the total system throughput.
arXiv Detail & Related papers (2023-12-13T18:52:49Z)
- Deformable Mixer Transformer with Gating for Multi-Task Learning of Dense Prediction [126.34551436845133]
CNNs and Transformers have their own advantages, and both have been widely used for dense prediction in multi-task learning (MTL).
We present a novel MTL model that combines the merits of deformable CNN and query-based Transformer with shared gating for multi-task learning of dense prediction.
arXiv Detail & Related papers (2023-08-10T17:37:49Z)
- TinyMetaFed: Efficient Federated Meta-Learning for TinyML [8.940139322528829]
We introduce TinyMetaFed, a model-agnostic meta-learning framework suitable for TinyML.
TinyMetaFed facilitates collaborative training of a neural network.
It offers communication savings and privacy protection through partial local reconstruction and Top-P% selective communication (the selective-communication idea is sketched after this entry).
arXiv Detail & Related papers (2023-07-13T15:39:26Z)
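As a hedged illustration of Top-P% selective communication, the sketch below transmits only the P% of weights that changed most during local training; the function names and sparse-update format are assumptions for illustration, not TinyMetaFed's API.

```python
# Sketch of Top-P% selective communication (assumed form): a client sends
# only its largest weight changes, and the server applies them sparsely.
import numpy as np

def top_p_delta(old_w, new_w, p=0.1):
    """Pick the indices and values of the top-P% largest weight changes."""
    delta = new_w - old_w
    k = max(1, int(p * delta.size))          # number of entries to keep
    idx = np.argsort(np.abs(delta))[-k:]     # indices of the largest |change|
    return idx, delta[idx]

def apply_sparse_delta(server_w, idx, values):
    """Server side: apply the sparse update received from a client."""
    server_w = server_w.copy()
    server_w[idx] += values
    return server_w
```

- FedYolo: Augmenting Federated Learning with Pretrained Transformers [61.56476056444933]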
In this work, we investigate pretrained transformers (PTF) to achieve on-device learning goals.
We show that larger scale shrinks the accuracy gaps between alternative approaches and improves robustness.
Finally, it enables clients to solve multiple unrelated tasks simultaneously using a single PTF.
arXiv Detail & Related papers (2023-07-10T21:08:52Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- How to Manage Tiny Machine Learning at Scale: An Industrial Perspective [5.384059021764428]
Tiny machine learning (TinyML) has gained widespread popularity where machine learning (ML) is democratized on ubiquitous microcontrollers.
TinyML models have been developed with different structures and are often distributed without a clear understanding of their working principles.
We propose a framework using Semantic Web technologies to enable the joint management of TinyML models and IoT devices at scale.
arXiv Detail & Related papers (2022-02-18T10:36:11Z)
- A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays [66.62377866022221]
Latent Replay-based Continual Learning (CL) techniques enable online, serverless adaptation in principle.
We introduce a HW/SW platform for end-to-end CL based on a 10-core FP32-enabled parallel ultra-low-power processor.
Our results show that, by combining these techniques, continual learning can be achieved in practice using less than 64 MB of memory (the latent replay mechanism is sketched after this entry).
arXiv Detail & Related papers (2021-10-20T11:01:23Z)
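To make the mechanism concrete, below is a hedged sketch of a quantized latent replay buffer: intermediate activations ("latents") of past samples are stored in 8-bit form and replayed alongside new data so earlier tasks are not forgotten. The buffer layout and per-sample quantization scale are assumptions for illustration, not the platform's implementation.

```python
# Sketch of a quantized latent replay buffer (assumed form). Latents are
# assumed non-negative (e.g. post-ReLU), which makes 8-bit quantization
# with a single per-sample scale straightforward.
import numpy as np

class LatentReplayBuffer:
    def __init__(self, capacity, latent_dim):
        self.latents = np.zeros((capacity, latent_dim), dtype=np.uint8)
        self.labels = np.zeros(capacity, dtype=np.int32)
        self.scales = np.zeros(capacity, dtype=np.float32)
        self.capacity = capacity
        self.size = 0

    def add(self, latent, label):
        """Quantize a float latent to 8 bits and store it (ring buffer)."""
        i = self.size % self.capacity
        scale = np.abs(latent).max() / 255.0 + 1e-12
        self.latents[i] = np.clip(latent / scale, 0, 255).astype(np.uint8)
        self.scales[i], self.labels[i] = scale, label
        self.size += 1

    def sample(self, n):
        """Dequantize a random minibatch of stored latents for replay."""
        idx = np.random.randint(0, min(self.size, self.capacity), n)
        return self.latents[idx] * self.scales[idx, None], self.labels[idx]
```

- TinyOL: TinyML with Online-Learning on Microcontrollers [7.172671995820974]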
Tiny machine learning (TinyML) is committed to democratizing deep learning for all-pervasive microcontrollers (MCUs).
Current TinyML solutions are based on batch/offline settings and support only neural network inference on MCUs.
We propose a novel system called TinyOL (TinyML with Online-Learning), which enables incremental on-device training on streaming data (a minimal sketch of such per-sample updates follows this entry).
arXiv Detail & Related papers (2021-03-15T11:39:41Z)
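To make the streaming setting concrete, here is a minimal sketch of an online-trainable output layer in the spirit of TinyOL, where a frozen network supplies features and only the last layer is updated one sample at a time; the class and its squared-error update are illustrative assumptions, not TinyOL's code.

```python
# Sketch of incremental on-device learning (assumed form): a single
# trainable layer updated per streaming sample, so no batch is stored.
import numpy as np

class OnlineLayer:
    def __init__(self, in_dim, out_dim, lr=0.01):
        self.W = np.zeros((out_dim, in_dim))
        self.b = np.zeros(out_dim)
        self.lr = lr

    def update(self, x, y):
        """One SGD step on a single sample (squared-error loss)."""
        pred = self.W @ x + self.b
        err = pred - y
        self.W -= self.lr * np.outer(err, x)   # rank-1 gradient update
        self.b -= self.lr * err
        return pred

# Toy stream: features from a (stand-in) frozen extractor, one at a time.
layer = OnlineLayer(in_dim=8, out_dim=2)
rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.normal(size=8), rng.normal(size=2)
    layer.update(x, y)
```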