TinyMetaFed: Efficient Federated Meta-Learning for TinyML
- URL: http://arxiv.org/abs/2307.06822v3
- Date: Thu, 28 Sep 2023 14:44:31 GMT
- Title: TinyMetaFed: Efficient Federated Meta-Learning for TinyML
- Authors: Haoyu Ren, Xue Li, Darko Anicic, Thomas A. Runkler
- Abstract summary: We introduce TinyMetaFed, a model-agnostic meta-learning framework suitable for TinyML.
TinyMetaFed facilitates collaborative training of a neural network initialization that can be quickly fine-tuned on new devices.
It offers communication savings and privacy protection through partial local reconstruction and Top-P% selective communication.
- Score: 8.940139322528829
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of Tiny Machine Learning (TinyML) has made substantial advancements
in democratizing machine learning on low-footprint devices, such as
microcontrollers. The prevalence of these miniature devices raises the question
of whether aggregating their knowledge can benefit TinyML applications.
Federated meta-learning is a promising answer to this question, as it addresses
the scarcity of labeled data and heterogeneous data distribution across devices
in the real world. However, deploying TinyML hardware faces unique resource
constraints, making existing methods impractical due to energy, privacy, and
communication limitations. We introduce TinyMetaFed, a model-agnostic
meta-learning framework suitable for TinyML. TinyMetaFed facilitates
collaborative training of a neural network initialization that can be quickly
fine-tuned on new devices. It offers communication savings and privacy
protection through partial local reconstruction and Top-P% selective
communication, computational efficiency via online learning, and robustness to
client heterogeneity through few-shot learning. The evaluations on three TinyML
use cases demonstrate that TinyMetaFed can significantly reduce energy
consumption and communication overhead, accelerate convergence, and stabilize
the training process.
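The abstract names Top-P% selective communication but does not define it here; a common reading is that each client uploads only the P% of its weight delta with the largest magnitudes, while partial local reconstruction keeps the remaining parameters private on-device. Below is a minimal NumPy sketch of that sparsification idea; the function names and the exact rule are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def top_p_sparsify(delta: np.ndarray, p: float) -> np.ndarray:
    """Keep only the top-P% of entries by magnitude; zero the rest.

    Mirrors the 'Top-P% selective communication' idea: a client uploads
    a sparse update instead of the full weight delta (an assumption).
    """
    k = max(1, int(delta.size * p / 100.0))
    flat = delta.ravel()
    # Indices of the k largest-magnitude entries.
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(delta.shape)

def server_round(global_w: np.ndarray, client_deltas: list,
                 p: float = 10.0) -> np.ndarray:
    """One hypothetical federated round: average sparsified client deltas."""
    sparse = [top_p_sparsify(d, p) for d in client_deltas]
    return global_w + np.mean(sparse, axis=0)

# Toy usage: three clients, 1000 parameters, each uploading 10% of its delta.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
deltas = [rng.normal(scale=0.01, size=1000) for _ in range(3)]
w_new = server_round(w, deltas, p=10.0)
```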
Related papers
- Semantic Meta-Split Learning: A TinyML Scheme for Few-Shot Wireless Image Classification [50.28867343337997]
This work presents a TinyML-based semantic communication framework for few-shot wireless image classification.
We exploit split learning to limit the computation performed by end-users while preserving privacy.
Meta-learning overcomes data-availability concerns and speeds up training by utilizing similarly trained tasks.
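Split learning, as named above, partitions the network so the end-user device computes only the layers up to a cut point and transmits the intermediate activation rather than the raw input. A minimal NumPy sketch under assumed layer sizes (the split point and shapes are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Device-side 'head': the few layers kept on the end-user device.
W_dev = rng.normal(scale=0.1, size=(64, 16))

# Server-side 'tail': the remaining layers.
W_srv = rng.normal(scale=0.1, size=(16, 10))

def device_forward(x: np.ndarray) -> np.ndarray:
    """Runs on the device; only this activation leaves it, not the raw data."""
    return np.maximum(x @ W_dev, 0.0)          # ReLU cut-layer activation

def server_forward(smashed: np.ndarray) -> np.ndarray:
    """Runs on the server; it never sees the raw input."""
    logits = smashed @ W_srv
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

x = rng.normal(size=(1, 64))                    # raw input stays on-device
probs = server_forward(device_forward(x))
```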
arXiv Detail & Related papers (2024-09-03T05:56:55Z)
- Mini-Monkey: Alleviating the Semantic Sawtooth Effect for Lightweight MLLMs via Complementary Image Pyramid [87.09900996643516]
We introduce a Complementary Image Pyramid (CIP) to mitigate semantic discontinuity during high-resolution image processing.
We also introduce a Scale Compression Mechanism (SCM) to reduce the additional computational overhead by compressing the redundant visual tokens.
Our experiments demonstrate that CIP can consistently enhance the performance across diverse architectures.
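As a rough intuition for the pyramid idea, the sketch below builds several downscaled views of an image and then merges neighbouring tokens, crude stand-ins for CIP and SCM respectively; the pooling rules here are illustrative assumptions, not the paper's mechanisms.

```python
import numpy as np

def avg_pool(img: np.ndarray, f: int) -> np.ndarray:
    """Downscale an HxWxC image by integer factor f via average pooling."""
    h, w, c = img.shape
    return img[: h - h % f, : w - w % f].reshape(
        h // f, f, w // f, f, c).mean(axis=(1, 3))

def image_pyramid(img: np.ndarray, factors=(1, 2, 4)) -> list:
    """A toy pyramid: the same image at complementary scales."""
    return [img if f == 1 else avg_pool(img, f) for f in factors]

def compress_tokens(tokens: np.ndarray, keep_every: int = 2) -> np.ndarray:
    """Crude stand-in for scale compression: merge neighbouring tokens."""
    n = tokens.shape[0] - tokens.shape[0] % keep_every
    return tokens[:n].reshape(-1, keep_every, tokens.shape[1]).mean(axis=1)

img = np.random.default_rng(2).random((224, 224, 3))
levels = image_pyramid(img)           # coarse views add global context
tokens = levels[0].reshape(-1, 3)     # flatten finest level into 'tokens'
small = compress_tokens(tokens)       # fewer tokens -> less compute
```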
arXiv Detail & Related papers (2024-08-04T13:55:58Z) - On-device Online Learning and Semantic Management of TinyML Systems [8.183732025472766]
This study aims to bridge the gap between prototyping single TinyML models and developing reliable TinyML systems in production.
We propose online learning to enable training on constrained devices, adapting local models towards the latest field conditions.
We present semantic management for the joint management of models and devices at scale.
arXiv Detail & Related papers (2024-08-04T13:55:58Z)
- On-device Online Learning and Semantic Management of TinyML Systems [8.183732025472766]
In this work, we investigate pretrained transformers (PTF) to achieve on-device learning goals.
We show that larger scale shrinks the accuracy gaps between alternative approaches and improves robustness.
Finally, a single PTF enables clients to solve multiple unrelated tasks simultaneously.
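One plausible way to realize this on constrained clients is to freeze the pretrained backbone and federate only a small task head, so each round exchanges tiny messages. The sketch below illustrates that pattern with a stand-in backbone; all names and sizes are assumptions, not FedYolo's actual design.

```python
import numpy as np

rng = np.random.default_rng(3)
D_FEAT, N_CLASSES = 128, 5

def frozen_ptf_features(x: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen pretrained transformer backbone."""
    W = np.ones((x.shape[-1], D_FEAT)) * 0.01   # fixed, never trained
    return np.tanh(x @ W)

def local_head_update(head, x, y, lr=0.1, steps=5):
    """Client trains only the tiny head; the backbone never moves."""
    for _ in range(steps):
        f = frozen_ptf_features(x)
        logits = f @ head
        probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
        grad = f.T @ (probs - y) / len(x)       # cross-entropy gradient
        head = head - lr * grad
    return head

head = np.zeros((D_FEAT, N_CLASSES))
clients = [(rng.normal(size=(8, 32)),
            np.eye(N_CLASSES)[rng.integers(0, N_CLASSES, 8)])
           for _ in range(4)]
# One FedAvg round over head parameters only: small messages, frozen backbone.
head = np.mean([local_head_update(head.copy(), x, y) for x, y in clients],
               axis=0)
```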
arXiv Detail & Related papers (2023-07-10T21:08:52Z)
- TinyReptile: TinyML with Federated Meta-Learning [9.618821589196624]
We propose TinyReptile, a simple but efficient algorithm inspired by meta-learning and online learning.
We demonstrate TinyReptile on a Raspberry Pi 4 and a Cortex-M4 MCU with only 256 KB of RAM.
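The Reptile rule that TinyReptile builds on is simple: adapt the current initialization on one client with a few SGD steps, then move the initialization a fraction of the way toward the adapted weights. A toy least-squares sketch; the serial, client-by-client visiting order reflects the online flavour, and the hyperparameters are illustrative:

```python
import numpy as np

def sgd_on_client(theta: np.ndarray, data, lr=0.01, steps=10) -> np.ndarray:
    """A few local SGD steps on one client's data (least-squares toy task)."""
    w = theta.copy()
    X, y = data
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def tiny_reptile_round(theta, client_data, eps=0.5):
    """Reptile outer update: nudge the initialization toward adapted weights.
    Visiting clients one at a time keeps peak memory tiny, which is the
    online flavour TinyReptile targets (an assumption-level sketch)."""
    for data in client_data:
        phi = sgd_on_client(theta, data)
        theta = theta + eps * (phi - theta)
    return theta

rng = np.random.default_rng(4)
w_true = rng.normal(size=3)
clients = [(X := rng.normal(size=(20, 3)),
            X @ w_true + rng.normal(scale=0.1, size=20)) for _ in range(5)]
theta = np.zeros(3)
for _ in range(20):
    theta = tiny_reptile_round(theta, clients)
```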
arXiv Detail & Related papers (2023-04-11T13:11:10Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
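For intuition, a MAML-style federated round evaluates each client's gradient after a local adaptation step and averages those post-adaptation gradients at the server. The first-order toy sketch below illustrates this; the quadratic objective and hyperparameters are assumptions for illustration:

```python
import numpy as np

def maml_fl_round(theta, clients, inner_lr=0.05, outer_lr=0.1):
    """One federated MAML-style round (first-order approximation).

    Each client adapts the global initialization with one gradient step on
    a 'support' split, then reports the gradient at the adapted point on a
    'query' split; the server averages those gradients. A toy least-squares
    objective stands in for the real model."""
    outer_grads = []
    for (Xs, ys), (Xq, yq) in clients:
        # Inner step: personalize to this client's support data.
        g_inner = Xs.T @ (Xs @ theta - ys) / len(ys)
        phi = theta - inner_lr * g_inner
        # Outer gradient evaluated after adaptation (FOMAML).
        outer_grads.append(Xq.T @ (Xq @ phi - yq) / len(yq))
    return theta - outer_lr * np.mean(outer_grads, axis=0)

rng = np.random.default_rng(5)

def make_client():
    w = rng.normal(size=2)
    def split():
        X = rng.normal(size=(16, 2))
        return X, X @ w + rng.normal(scale=0.05, size=16)
    return split(), split()

clients = [make_client() for _ in range(8)]
theta = np.zeros(2)
for _ in range(50):
    theta = maml_fl_round(theta, clients)
```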
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Intelligence at the Extreme Edge: A Survey on Reformable TinyML [0.0]
We present a survey of reformable TinyML solutions and propose a novel taxonomy that cleanly separates them.
We explore the workflow of TinyML and analyze the identified deployment schemes and the scarcely available benchmarking tools.
arXiv Detail & Related papers (2022-04-02T09:53:36Z)
- How to Manage Tiny Machine Learning at Scale: An Industrial Perspective [5.384059021764428]
Tiny machine learning (TinyML), which democratizes machine learning (ML) on ubiquitous microcontrollers, has gained widespread popularity.
TinyML models have been developed with different structures and are often distributed without a clear understanding of their working principles.
We propose a framework using Semantic Web technologies to enable the joint management of TinyML models and IoT devices at scale.
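As a flavour of what such semantic management can look like, the sketch below describes a model and a device as RDF triples with rdflib so they can be queried and matched at scale; the vocabulary is hypothetical, not the paper's actual schema.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# A hypothetical vocabulary; the paper's actual schema may differ.
TM = Namespace("http://example.org/tinyml#")

g = Graph()
g.bind("tm", TM)

# Describe a deployed model and a target device as RDF triples, so models
# and boards can be matched and queried jointly at scale.
g.add((TM.kws_model_v1, RDF.type, TM.TinyMLModel))
g.add((TM.kws_model_v1, TM.task, Literal("keyword-spotting")))
g.add((TM.kws_model_v1, TM.ramRequirementKB, Literal(128, datatype=XSD.integer)))
g.add((TM.board_42, RDF.type, TM.Microcontroller))
g.add((TM.board_42, TM.availableRamKB, Literal(256, datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```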
arXiv Detail & Related papers (2022-02-18T10:36:11Z)
- A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays [66.62377866022221]
Latent Replay-based Continual Learning (CL) techniques enable online, serverless adaptation in principle.
We introduce a HW/SW platform for end-to-end CL based on a 10-core FP32-enabled parallel ultra-low-power processor.
Our results show that by combining these techniques, continual learning can be achieved in practice using less than 64MB of memory.
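Latent replay stores intermediate activations of past data instead of raw samples and mixes them into later training of the upper layers; quantizing the stored latents shrinks the buffer. A minimal sketch of that buffer mechanic, with an assumed frozen frontend and 8-bit quantizer:

```python
import numpy as np

rng = np.random.default_rng(6)

def frontend(x: np.ndarray) -> np.ndarray:
    """Frozen lower layers; their outputs are the 'latents' we store."""
    W = np.ones((x.shape[-1], 16)) * 0.05
    return np.maximum(x @ W, 0.0)

def quantize(z: np.ndarray):
    """8-bit quantization shrinks the replay buffer ~4x vs. FP32."""
    scale = max(z.max() / 255.0, 1e-8)
    return (z / scale).astype(np.uint8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Fill a latent replay buffer from an 'old' task.
old_x = rng.normal(size=(32, 8))
buffer = [quantize(frontend(old_x))]

# Later, train the upper layers on new data mixed with dequantized replays,
# which counters forgetting without storing raw old samples on-device.
new_lat = frontend(rng.normal(size=(32, 8)))
replay = dequantize(*buffer[0])
batch = np.concatenate([new_lat, replay])
```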
arXiv Detail & Related papers (2021-10-20T11:01:23Z)
- TinyOL: TinyML with Online-Learning on Microcontrollers [7.172671995820974]
Tiny machine learning (TinyML) is committed to democratizing deep learning for all-pervasive microcontrollers (MCUs).
Current TinyML solutions are based on batch/offline settings and support only neural network inference on MCUs.
We propose a novel system called TinyOL (TinyML with Online-Learning), which enables incremental on-device training on streaming data.
arXiv Detail & Related papers (2021-03-15T11:39:41Z)
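For intuition, TinyOL-style online learning attaches a small trainable layer to a frozen network and updates it one streaming sample at a time, so no dataset is stored on the MCU. A minimal sketch under assumed shapes; the class and names are illustrative, not TinyOL's API.

```python
import numpy as np

class OnlineHead:
    """Tiny trainable output layer updated one sample at a time, in the
    spirit of TinyOL: the frozen network runs inference as usual, and only
    this head learns from the stream (constant memory, no stored dataset)."""

    def __init__(self, n_features: int, lr: float = 0.05):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def update(self, feats: np.ndarray, target: float) -> float:
        pred = feats @ self.w + self.b
        err = pred - target
        self.w -= self.lr * err * feats      # one SGD step per sample
        self.b -= self.lr * err
        return err

rng = np.random.default_rng(7)
head = OnlineHead(n_features=4)
for _ in range(200):                         # simulated sensor stream
    x = rng.normal(size=4)
    y = x @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.3
    head.update(x, y)
```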