LifeLearner: Hardware-Aware Meta Continual Learning System for Embedded
Computing Platforms
- URL: http://arxiv.org/abs/2311.11420v1
- Date: Sun, 19 Nov 2023 20:39:35 GMT
- Title: LifeLearner: Hardware-Aware Meta Continual Learning System for Embedded
Computing Platforms
- Authors: Young D. Kwon, Jagmohan Chauhan, Hong Jia, Stylianos I. Venieris, and
Cecilia Mascolo
- Abstract summary: Continual Learning (CL) allows applications such as user personalization and household robots to learn on the fly and adapt to context.
LifeLearner is a hardware-aware meta learning system that drastically optimizes system resources.
LifeLearner achieves near-optimal CL performance, falling short of an Oracle baseline by only 2.8% in accuracy.
- Score: 17.031135153343502
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual Learning (CL) allows applications such as user personalization and
household robots to learn on the fly and adapt to context. This is an important
feature when context, actions, and users change. However, enabling CL on
resource-constrained embedded systems is challenging due to the limited labeled
data, memory, and computing capacity. In this paper, we propose LifeLearner, a
hardware-aware meta continual learning system that drastically optimizes system
resources (lower memory, latency, energy consumption) while ensuring high
accuracy. Specifically, we (1) exploit meta-learning and rehearsal strategies
to explicitly cope with data scarcity issues and ensure high accuracy, (2)
effectively combine lossless and lossy compression to significantly reduce the
resource requirements of CL and rehearsal samples, and (3) develop a
hardware-aware system for embedded and IoT platforms that accounts for their hardware
characteristics. As a result, LifeLearner achieves near-optimal CL performance,
falling short of an Oracle baseline by only 2.8% in accuracy. With
respect to the state-of-the-art (SOTA) Meta CL method, LifeLearner drastically
reduces the memory footprint by 178.7x, end-to-end latency by 80.8-94.2%, and
energy consumption by 80.9-94.2%. In addition, we successfully deployed
LifeLearner on two edge devices and a microcontroller unit, thereby enabling
efficient CL on resource-constrained platforms where it would be impractical to
run SOTA methods and the far-reaching deployment of adaptable CL in a
ubiquitous manner. Code is available at
https://github.com/theyoungkwon/LifeLearner.
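The abstract's second ingredient, combining lossy and lossless compression to shrink rehearsal samples, can be illustrated with a minimal sketch. This is not LifeLearner's actual algorithm; the function names, the choice of uniform 8-bit quantization (lossy) followed by zlib (lossless), and all parameters are assumptions for illustration only:

```python
import zlib
import numpy as np

def compress_sample(x: np.ndarray, bits: int = 8):
    """Lossy stage: uniform quantization of float32 values to `bits` bits.
    Lossless stage: zlib on the quantized bytes.
    Returns the compressed payload plus the metadata needed to invert it."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (2**bits - 1) or 1.0  # avoid div-by-zero on constant input
    q = np.round((x - lo) / scale).astype(np.uint8)
    return zlib.compress(q.tobytes()), (lo, scale, x.shape)

def decompress_sample(payload: bytes, meta) -> np.ndarray:
    lo, scale, shape = meta
    q = np.frombuffer(zlib.decompress(payload), dtype=np.uint8)
    return (q.astype(np.float32) * scale + lo).reshape(shape)

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64)).astype(np.float32)   # stand-in rehearsal sample
blob, meta = compress_sample(x)
x_hat = decompress_sample(blob, meta)
ratio = x.nbytes / len(blob)            # compression ratio vs. raw float32
err = float(np.abs(x - x_hat).max())    # worst-case quantization error
```

The lossy stage alone yields 4x over float32; the lossless stage then exploits whatever redundancy remains in the quantized bytes, which is why the two are complementary.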
Related papers
- Efficient Continual Learning with Low Memory Footprint For Edge Device [6.818488262543482]
This paper proposes a compact algorithm called LightCL to overcome catastrophic forgetting in Continual Learning.
We first propose two new metrics, learning plasticity and memory stability, to assess generalizability during CL.
In the experimental comparison, LightCL outperforms other SOTA methods in delaying forgetting and reduces memory footprint by up to 6.16x.
arXiv Detail & Related papers (2024-07-15T08:52:20Z) - Learn it or Leave it: Module Composition and Pruning for Continual Learning [48.07144492109635]
MoCL-P is a lightweight continual learning method that balances knowledge integration and computational overhead.
Our evaluation shows that MoCL-P achieves state-of-the-art performance and improves parameter efficiency by up to three times.
arXiv Detail & Related papers (2024-06-26T19:18:28Z) - CRSFL: Cluster-based Resource-aware Split Federated Learning for Continuous Authentication [5.636155173401658]
Split Learning (SL) and Federated Learning (FL) have emerged as promising technologies for training a decentralized Machine Learning (ML) model.
We propose combining these technologies to address the continuous authentication challenge while protecting user privacy.
arXiv Detail & Related papers (2024-05-12T06:08:21Z) - Continual Learning on a Diet: Learning from Sparsely Labeled Streams Under Constrained Computation [123.4883806344334]
We study a realistic Continual Learning setting where learning algorithms are granted a restricted computational budget per time step while training.
We apply this setting to large-scale semi-supervised Continual Learning scenarios with sparse label rates.
Our extensive analysis and ablations demonstrate that DietCL is stable under a full spectrum of label sparsity, computational budget, and various other ablations.
arXiv Detail & Related papers (2024-04-19T10:10:39Z) - Federated Fine-Tuning of LLMs on the Very Edge: The Good, the Bad, the Ugly [62.473245910234304]
This paper takes a hardware-centric approach to explore how Large Language Models can be brought to modern edge computing systems.
We provide a micro-level hardware benchmark, compare the model FLOP utilization to a state-of-the-art data center GPU, and study the network utilization in realistic conditions.
arXiv Detail & Related papers (2023-10-04T20:27:20Z) - Online Continual Learning Without the Storage Constraint [67.66235695269839]
We contribute a simple algorithm, which updates a kNN classifier continually along with a fixed, pretrained feature extractor.
It can adapt to rapidly changing streams, has zero stability gap, operates within tiny computational budgets, and has low storage requirements, since it stores only features.
It can outperform existing methods by over 20% in accuracy on two large-scale online continual learning datasets.
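The recipe in this entry, a kNN classifier updated continually on top of a fixed, pretrained feature extractor, can be sketched in a few lines. This is an illustrative toy, not the paper's implementation; the class name, `k`, and the synthetic two-cluster data are all hypothetical:

```python
import numpy as np

class ContinualKNN:
    """Streaming kNN over features from a fixed, pretrained extractor:
    each labeled sample is stored as-is, and prediction is a majority
    vote among the k nearest stored features. No gradient updates occur,
    so there is no catastrophic forgetting by construction."""

    def __init__(self, k: int = 3):
        self.k = k
        self.feats: list[np.ndarray] = []
        self.labels: list[int] = []

    def update(self, feat: np.ndarray, label: int) -> None:
        # Continual "learning" is just appending to memory.
        self.feats.append(np.asarray(feat, dtype=np.float32))
        self.labels.append(label)

    def predict(self, feat: np.ndarray) -> int:
        dists = np.linalg.norm(np.stack(self.feats) - feat, axis=1)
        nearest = np.argsort(dists)[: self.k]
        votes = [self.labels[i] for i in nearest]
        return max(set(votes), key=votes.count)

# Two well-separated synthetic classes standing in for extracted features.
rng = np.random.default_rng(1)
knn = ContinualKNN(k=3)
for label, center in [(0, -2.0), (1, 2.0)]:
    for _ in range(10):
        knn.update(center + 0.1 * rng.standard_normal(4), label)
pred = knn.predict(np.full(4, 2.0, dtype=np.float32))
```

Storage grows with the stream here; the paper's "low storage requirements" come from storing compact features rather than raw inputs.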
arXiv Detail & Related papers (2023-05-16T08:03:07Z) - Awesome-META+: Meta-Learning Research and Learning Platform [3.7381507346856524]
Awesome-META+ is a complete and reliable meta-learning framework application and learning platform.
The project aims to promote the development of meta-learning and the expansion of the community.
arXiv Detail & Related papers (2023-04-24T03:09:25Z) - SparCL: Sparse Continual Learning on the Edge [43.51885725281063]
We propose a novel framework called Sparse Continual Learning (SparCL) to enable cost-effective continual learning on edge devices.
SparCL achieves both training acceleration and accuracy preservation through the synergy of three aspects: weight sparsity, data efficiency, and gradient sparsity.
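One of the three aspects SparCL combines, gradient sparsity, is commonly realized as top-k magnitude selection. The sketch below is a generic illustration of that idea, not SparCL's specific scheme; the function name and the `density` parameter are assumptions:

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, density: float = 0.1) -> np.ndarray:
    """Keep only the largest-magnitude fraction `density` of gradient
    entries and zero the rest, so an optimizer step touches (and a
    device transfers) only a small subset of the weights."""
    k = max(1, int(density * grad.size))
    # Threshold at the k-th largest absolute value.
    thresh = np.partition(np.abs(grad).ravel(), -k)[-k]
    return np.where(np.abs(grad) >= thresh, grad, 0.0)

rng = np.random.default_rng(2)
g = rng.standard_normal(100)             # stand-in gradient vector
sparse_g = topk_sparsify(g, density=0.1)
kept = int(np.count_nonzero(sparse_g))   # ~10% of entries survive
```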
arXiv Detail & Related papers (2022-09-20T05:24:48Z) - Incremental Online Learning Algorithms Comparison for Gesture and Visual
Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z) - A TinyML Platform for On-Device Continual Learning with Quantized Latent
Replays [66.62377866022221]
Latent Replay-based Continual Learning (CL) techniques enable online, serverless adaptation in principle.
We introduce a HW/SW platform for end-to-end CL based on a 10-core FP32-enabled parallel ultra-low-power processor.
Our results show that by combining these techniques, continual learning can be achieved in practice using less than 64MB of memory.
arXiv Detail & Related papers (2021-10-20T11:01:23Z) - LIMITS: Lightweight Machine Learning for IoT Systems with Resource
Limitations [8.647853543335662]
We present the novel open-source framework LIghtweight Machine learning for IoT Systems (LIMITS).
LIMITS applies a platform-in-the-loop approach explicitly considering the actual compilation toolchain of the target IoT platform.
We apply and validate LIMITS in two case studies focusing on cellular data rate prediction and radio-based vehicle classification.
arXiv Detail & Related papers (2020-01-28T06:34:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.