Exploring System Performance of Continual Learning for Mobile and
Embedded Sensing Applications
- URL: http://arxiv.org/abs/2110.13290v1
- Date: Mon, 25 Oct 2021 22:06:26 GMT
- Authors: Young D. Kwon, Jagmohan Chauhan, Abhishek Kumar, Pan Hui, and Cecilia
Mascolo
- Abstract summary: We conduct the first comprehensive empirical study that quantifies the performance of three predominant continual learning schemes.
We implement an end-to-end continual learning framework on edge devices.
We demonstrate for the first time that it is feasible and practical to run continual learning on-device with a limited memory budget.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual learning approaches help deep neural network models adapt and learn
incrementally by trying to solve catastrophic forgetting. However, whether
these existing approaches, applied traditionally to image-based tasks, work
with the same efficacy to the sequential time series data generated by mobile
or embedded sensing systems remains an unanswered question.
To address this gap, we conduct the first comprehensive empirical study that
quantifies the performance of three predominant continual learning schemes
(i.e., regularization, replay, and replay with exemplars) on six datasets from
three mobile and embedded sensing applications in a range of scenarios with
different learning complexities. More specifically, we implement an end-to-end
continual learning framework on edge devices. We then investigate the
generalizability of, and the trade-offs among, the performance, storage cost,
computational cost, and memory footprint of different continual learning methods.
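Of the three scheme families above, regularization-based methods constrain how far parameters may drift from values learned on earlier tasks. As a minimal sketch of one well-known member of this family, a quadratic EWC-style penalty could look like the following (EWC itself, the `fisher` weighting, and the `lam` hyperparameter are illustrative assumptions, not details taken from this abstract):

```python
def ewc_penalty(params, old_params, fisher, lam=100.0):
    """Sketch of an EWC-style regularization term for continual learning.

    Penalizes deviation of the current parameters from those learned on
    previous tasks, with each parameter weighted by an estimate of its
    importance (e.g. diagonal Fisher information). This term is added to
    the loss of the new task to mitigate catastrophic forgetting.
    """
    return 0.5 * lam * sum(
        f * (p - q) ** 2                      # importance-weighted drift
        for p, q, f in zip(params, old_params, fisher)
    )

# Example: only the second parameter has drifted, and it is twice as
# important as the first, so only it contributes to the penalty.
penalty = ewc_penalty([1.0, 2.0], [1.0, 1.0], [1.0, 2.0], lam=1.0)
```

The appeal of this family on embedded hardware is that it stores only one importance value per parameter rather than raw training data.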
Our findings suggest that exemplar-based replay schemes such as iCaRL offer
the best performance trade-offs, even in complex scenarios, at the expense of
some storage space (a few MB) for training exemplars (1% to 5% of the data).
We also demonstrate for the first time that it is feasible and practical to run
continual learning on-device with a limited memory budget. In particular, the
latency measured on two types of mobile and embedded devices suggests that both
incremental learning time (a few seconds to 4 minutes) and training time (1 to
75 minutes) across datasets are acceptable, as training could happen on the
device while it is charging, thereby ensuring complete data privacy.
Finally, we present some guidelines for practitioners who want to apply a
continual learning paradigm for mobile sensing tasks.
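The exemplar-based replay idea discussed above (retain a small fraction of past samples and mix them into each incremental update) can be sketched as follows. The class name, the random selection policy, and the 5% default budget are illustrative assumptions: iCaRL proper selects exemplars by herding and adds a distillation loss, neither of which is reproduced here.

```python
import random

class ExemplarReplayBuffer:
    """Minimal sketch of an exemplar replay buffer for continual learning:
    keep a capped fraction of previously seen samples and interleave them
    with new-task data to mitigate catastrophic forgetting."""

    def __init__(self, budget_fraction=0.05, seed=0):
        self.budget_fraction = budget_fraction  # e.g. 1%-5% of data seen so far
        self.exemplars = []                     # stored (x, y) pairs
        self.seen = 0
        self.rng = random.Random(seed)

    def add_task(self, samples):
        """After learning a task, retain a subset of its samples.
        Random selection stands in for iCaRL's herding here."""
        self.seen += len(samples)
        budget = max(1, int(self.budget_fraction * self.seen))
        self.exemplars.extend(samples)
        if len(self.exemplars) > budget:
            self.exemplars = self.rng.sample(self.exemplars, budget)

    def replay_batch(self, new_batch):
        """Mix stored exemplars into a batch of new-task data."""
        k = min(len(self.exemplars), len(new_batch))
        return new_batch + self.rng.sample(self.exemplars, k)
```

With a 5% budget, 1,000 samples from a first task would leave at most 50 exemplars resident, which is consistent with the few-MB storage overhead the abstract reports for sensing datasets.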
Related papers
- A Practitioner's Guide to Continual Multimodal Pretraining
Multimodal foundation models serve numerous applications at the intersection of vision and language.
To keep models updated, research into continual pretraining mainly explores scenarios with either infrequent, indiscriminate updates on large-scale new data, or frequent, sample-level updates.
We introduce FoMo-in-Flux, a continual multimodal pretraining benchmark with realistic compute constraints and practical deployment requirements.
arXiv Detail & Related papers (2024-08-26T17:59:01Z)
- Adaptive Retention & Correction for Continual Learning
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- Explainable Lifelong Stream Learning Based on "Glocal" Pairwise Fusion
Real-time on-device continual learning applications are used on mobile phones, consumer robots, and smart appliances.
This study presents the Explainable Lifelong Learning (ExLL) model, which incorporates several important traits.
ExLL outperforms all compared algorithms in accuracy in the majority of the tested scenarios.
arXiv Detail & Related papers (2023-06-23T09:54:48Z)
- Adaptive Cross Batch Normalization for Metric Learning
Metric learning is a fundamental problem in computer vision.
We show that it is equally important to ensure that the accumulated embeddings are up to date.
In particular, it is necessary to circumvent the representational drift between the accumulated embeddings and the feature embeddings at the current training iteration.
arXiv Detail & Related papers (2023-03-30T03:22:52Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- Self-Supervised Human Activity Recognition with Localized Time-Frequency Contrastive Representation Learning
We propose a self-supervised learning solution for human activity recognition with smartphone accelerometer data.
We develop a model that learns strong representations from accelerometer signals, while reducing the model's reliance on class labels.
We evaluate the performance of the proposed solution on three datasets, namely MotionSense, HAPT, and HHAR.
arXiv Detail & Related papers (2022-08-26T22:47:18Z)
- Online Continual Learning for Embedded Devices
Real-time on-device continual learning is needed for new applications such as home robots, user personalization on smartphones, and augmented/virtual reality headsets.
However, embedded devices have limited memory and compute capacity.
Online continual learning models have been developed, but their effectiveness for embedded applications has not been rigorously studied.
arXiv Detail & Related papers (2022-03-21T00:23:09Z)
- Anomaly Detection in Video via Self-Supervised and Multi-Task Learning
Anomaly detection in video is a challenging computer vision problem.
In this paper, we approach anomalous event detection in video through self-supervised and multi-task learning at the object level.
arXiv Detail & Related papers (2020-11-15T10:21:28Z)
- Sense and Learn: Self-Supervision for Omnipresent Sensors
We present a framework named Sense and Learn for representation or feature learning from raw sensory data.
It consists of several auxiliary tasks that can learn high-level and broadly useful features entirely from unannotated data without any human involvement in the tedious labeling process.
Our methodology achieves results competitive with supervised approaches and, in most cases, closes the gap by fine-tuning the network on the downstream tasks.
arXiv Detail & Related papers (2020-09-28T11:57:43Z)
- Bilevel Continual Learning
We present a novel continual learning framework named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)
- Neuromodulated Neural Architectures with Local Error Signals for Memory-Constrained Online Continual Learning
We develop a biologically inspired, lightweight neural network architecture that incorporates local learning and neuromodulation.
We demonstrate the efficacy of our approach in both single-task and continual learning settings.
arXiv Detail & Related papers (2020-07-16T07:41:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.