How to Manage Tiny Machine Learning at Scale: An Industrial Perspective
- URL: http://arxiv.org/abs/2202.09113v1
- Date: Fri, 18 Feb 2022 10:36:11 GMT
- Title: How to Manage Tiny Machine Learning at Scale: An Industrial Perspective
- Authors: Haoyu Ren, Darko Anicic, Thomas Runkler
- Abstract summary: Tiny machine learning (TinyML) has gained widespread popularity where machine learning (ML) is democratized on ubiquitous microcontrollers.
TinyML models have been developed with different structures and are often distributed without a clear understanding of their working principles.
We propose a framework using Semantic Web technologies to enable the joint management of TinyML models and IoT devices at scale.
- Score: 5.384059021764428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tiny machine learning (TinyML) has gained widespread popularity where machine
learning (ML) is democratized on ubiquitous microcontrollers, processing sensor
data everywhere in real-time. To manage TinyML in the industry, where mass
deployment happens, we consider the hardware and software constraints, ranging
from available onboard sensors and memory size to ML-model architectures and
runtime platforms. However, Internet of Things (IoT) devices are typically
tailored to specific tasks and are subject to heterogeneity and limited
resources. Moreover, TinyML models have been developed with different
structures and are often distributed without a clear understanding of their
working principles, leading to a fragmented ecosystem. Considering these
challenges, we propose a framework using Semantic Web technologies to enable
the joint management of TinyML models and IoT devices at scale, from modeling
information to discovering possible combinations and benchmarking, and
eventually facilitate TinyML component exchange and reuse. We present an
ontology (semantic schema) for neural network models aligned with the World
Wide Web Consortium (W3C) Thing Description, which semantically describes IoT
devices. Furthermore, a Knowledge Graph of 23 publicly available ML models and
six IoT devices was used to demonstrate our concept in three case studies, and
we shared the code and examples to enhance reproducibility:
https://github.com/Haoyu-R/How-to-Manage-TinyML-at-Scale
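The core matchmaking idea in the abstract, checking a model's resource and sensor requirements against a device's described capabilities before deployment, can be sketched in plain Python. This is an illustrative sketch only: the class and field names below are hypothetical and do not reproduce the paper's ontology or the W3C Thing Description vocabulary.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Device:
    # Hypothetical capability record, loosely mirroring what a
    # W3C Thing Description would expose: id, memory, onboard sensors.
    name: str
    ram_kb: int
    flash_kb: int
    sensors: frozenset

@dataclass(frozen=True)
class Model:
    # Hypothetical model record: memory footprint and expected input sensor.
    name: str
    ram_kb: int
    flash_kb: int
    input_sensor: str

def matchmake(models, devices):
    """Return every (model, device) pair where the device has enough
    RAM and flash and carries the sensor the model expects."""
    return [
        (m.name, d.name)
        for m in models
        for d in devices
        if m.ram_kb <= d.ram_kb
        and m.flash_kb <= d.flash_kb
        and m.input_sensor in d.sensors
    ]

devices = [
    Device("nucleo-f446re", ram_kb=128, flash_kb=512,
           sensors=frozenset({"microphone", "accelerometer"})),
    Device("arduino-nano-33", ram_kb=256, flash_kb=1024,
           sensors=frozenset({"microphone", "camera"})),
]
models = [
    Model("keyword-spotting", ram_kb=96, flash_kb=300,
          input_sensor="microphone"),
    Model("person-detection", ram_kb=200, flash_kb=800,
          input_sensor="camera"),
]

print(matchmake(models, devices))
# → [('keyword-spotting', 'nucleo-f446re'),
#    ('keyword-spotting', 'arduino-nano-33'),
#    ('person-detection', 'arduino-nano-33')]
```

In the paper's framework the same question is answered by querying a knowledge graph (e.g. with SPARQL) rather than iterating over Python objects, which is what makes the discovery step work across heterogeneous vendors and toolchains.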
Related papers
- IoT-LM: Large Multisensory Language Models for the Internet of Things [70.74131118309967]
The IoT ecosystem provides a rich source of real-world modalities such as motion, thermal, geolocation, imaging, depth, sensor, and audio data.
Machine learning presents a rich opportunity to automatically process IoT data at scale.
We introduce IoT-LM, an open-source large multisensory language model tailored for the IoT ecosystem.
arXiv Detail & Related papers (2024-07-13T08:20:37Z)
- MobileAIBench: Benchmarking LLMs and LMMs for On-Device Use Cases [81.70591346986582]
We introduce MobileAIBench, a benchmarking framework for evaluating Large Language Models (LLMs) and Large Multimodal Models (LMMs) on mobile devices.
MobileAIBench assesses models across different sizes, quantization levels, and tasks, measuring latency and resource consumption on real devices.
arXiv Detail & Related papers (2024-06-12T22:58:12Z)
- On-device Online Learning and Semantic Management of TinyML Systems [8.183732025472766]
This study aims to bridge the gap between prototyping single TinyML models and developing reliable TinyML systems in production.
We propose online learning to enable training on constrained devices, adapting local models towards the latest field conditions.
We present semantic management for the joint management of models and devices at scale.
arXiv Detail & Related papers (2024-05-13T10:03:34Z)
- LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit [55.73370804397226]
Quantization, a key compression technique, can effectively mitigate the memory and compute demands of large language models by compressing and accelerating them.
We present LLMC, a plug-and-play compression toolkit, to fairly and systematically explore the impact of quantization.
Powered by this versatile toolkit, our benchmark covers three key aspects: calibration data, algorithms (three strategies), and data formats.
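To make the quantization entry concrete, here is a minimal sketch of generic asymmetric min-max int8 quantization, the simplest of the strategies such a toolkit benchmarks. This is not LLMC's implementation; the function names and the single min-max calibration step are illustrative assumptions.

```python
def quantize_int8(values):
    """Map floats to int8 with a scale and zero point derived from the
    observed min/max (a one-shot "calibration" over the given values)."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0          # avoid zero scale
    zero_point = round(-lo / scale) - 128     # shift lo onto -128
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.51, -0.02, 0.0, 0.23, 0.49]
q, s, z = quantize_int8(weights)
recovered = dequantize(q, s, z)
# Round-trip error stays below half a quantization step (scale / 2).
print(max(abs(a - b) for a, b in zip(weights, recovered)))
```

Real toolkits layer calibration-data selection, per-channel scales, and alternative data formats on top of this basic scale/zero-point scheme, which is exactly the design space the LLMC benchmark explores.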
arXiv Detail & Related papers (2024-05-09T11:49:05Z)
- Many or Few Samples? Comparing Transfer, Contrastive and Meta-Learning in Encrypted Traffic Classification [68.19713459228369]
We compare transfer learning, meta-learning and contrastive learning against reference Machine Learning (ML) tree-based and monolithic DL models.
We show that (i) larger datasets yield more general representations, and (ii) contrastive learning is the best methodology.
While tree-based ML models cannot handle large tasks but fit small tasks well, DL methods reach tree-based performance even on small tasks by reusing learned representations.
arXiv Detail & Related papers (2023-05-21T11:20:49Z)
- TinyReptile: TinyML with Federated Meta-Learning [9.618821589196624]
We propose TinyReptile, a simple but efficient algorithm inspired by meta-learning and online learning.
We demonstrate TinyReptile on a Raspberry Pi 4 and a Cortex-M4 MCU with only 256 KB of RAM.
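The meta-learning idea behind this entry can be illustrated with the classic Reptile update (θ ← θ + ε(φ − θ), where φ is the result of a few local SGD steps). The sketch below is the generic Reptile algorithm on a toy 1-D regression family, not the paper's federated TinyReptile variant; the task family and hyperparameters are illustrative assumptions.

```python
import random

def inner_sgd(w, task_a, steps=32, lr=0.02):
    """A few SGD steps on one task's squared loss (w*x - a*x)^2,
    i.e. local adaptation toward that task's slope a."""
    for _ in range(steps):
        x = random.uniform(-1.0, 1.0)
        grad = 2.0 * (w * x - task_a * x) * x
        w -= lr * grad
    return w

def reptile(meta_steps=2000, meta_lr=0.1, seed=0):
    """Meta-learn an initial weight for tasks y = a*x, a ~ U(2, 4)."""
    random.seed(seed)
    w = 0.0
    for _ in range(meta_steps):
        task_a = random.uniform(2.0, 4.0)  # sample a task
        w_task = inner_sgd(w, task_a)      # adapt locally
        w += meta_lr * (w_task - w)        # Reptile meta-update
    return w

w0 = reptile()
print(round(w0, 2))
```

The meta-learned initialization settles near the task-family mean slope of 3.0, so any newly sampled task can be fit in just a few local steps; TinyReptile applies this kind of update across a federation of constrained devices.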
arXiv Detail & Related papers (2023-04-11T13:11:10Z)
- A review of TinyML [0.0]
The TinyML concept for embedded machine learning attempts to bring the diversity of usual high-end ML approaches to low-end applications.
TinyML is a rapidly expanding interdisciplinary topic at the convergence of machine learning, software, and hardware.
This paper explores how TinyML can benefit a few specific industrial fields, its obstacles, and its future scope.
arXiv Detail & Related papers (2022-11-05T06:02:08Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- SeLoC-ML: Semantic Low-Code Engineering for Machine Learning Applications in Industrial IoT [9.477629856092218]
This paper presents a framework called Semantic Low-Code Engineering for ML Applications (SeLoC-ML).
SeLoC-ML enables non-experts to model, discover, reuse, and matchmake ML models and devices at scale.
Developers can benefit from semantic application templates, called recipes, to rapidly prototype end-user applications.
arXiv Detail & Related papers (2022-07-18T13:06:21Z)
- Tiny Robot Learning: Challenges and Directions for Machine Learning in Resource-Constrained Robots [57.27442333662654]
Machine learning (ML) has become a pervasive tool across computing systems.
Tiny robot learning is the deployment of ML on resource-constrained low-cost autonomous robots.
Tiny robot learning is subject to challenges from size, weight, area, and power (SWAP) constraints.
This paper gives a brief survey of the tiny robot learning space, elaborates on key challenges, and proposes promising opportunities for future work in ML system design.
arXiv Detail & Related papers (2022-05-11T19:36:15Z)
- Software Engineering Approaches for TinyML based IoT Embedded Vision: A Systematic Literature Review [0.0]
The Internet of Things (IoT) has joined forces with Machine Learning (ML) to embed deep intelligence at the far edge.
TinyML (Tiny Machine Learning) has enabled the deployment of ML models for embedded vision on extremely lean edge hardware.
TinyML powered embedded vision applications are still in a nascent stage, and they are just starting to scale to widespread real-world IoT deployment.
arXiv Detail & Related papers (2022-04-19T07:07:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.