Continual Learning at the Edge: Real-Time Training on Smartphone Devices
- URL: http://arxiv.org/abs/2105.13127v1
- Date: Mon, 24 May 2021 12:00:31 GMT
- Title: Continual Learning at the Edge: Real-Time Training on Smartphone Devices
- Authors: Lorenzo Pellegrini, Vincenzo Lomonaco, Gabriele Graffieti, Davide Maltoni
- Abstract summary: This paper describes the implementation and deployment of a hybrid learning strategy (AR1*) on a native Android application for real-time on-device personalization without forgetting.
Our benchmark, based on an extension of the CORe50 dataset, shows the efficiency and effectiveness of our solution.
- Score: 11.250227901473952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: On-device training for personalized learning is a challenging research
problem. Being able to quickly adapt deep prediction models at the edge is
necessary to better suit personal user needs. However, adaptation at the edge
raises questions about both the efficiency and sustainability of the learning
process and the ability to work under shifting data distributions. Indeed,
naively fine-tuning a prediction model only on the newly available data results
in catastrophic forgetting, a sudden erasure of previously acquired knowledge.
In this paper, we detail the implementation and deployment of a hybrid
continual learning strategy (AR1*) on a native Android application for
real-time on-device personalization without forgetting. Our benchmark, based on
an extension of the CORe50 dataset, shows the efficiency and effectiveness of
our solution.
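To make the forgetting problem concrete, below is a minimal PyTorch sketch of the latent-replay style of update that hybrid strategies like AR1* build on: lower layers stay frozen, new data is encoded once, and cached latent activations from past sessions are mixed into each step. All module names, sizes, and hyperparameters are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal latent-replay sketch in the spirit of AR1*; layer sizes, buffer
# policy, and hyperparameters are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
head = nn.Linear(128, 10)
for p in feature_extractor.parameters():   # lower layers frozen on-device
    p.requires_grad = False

replay_buffer = []                         # (latent, label) pairs from past sessions
optimizer = torch.optim.SGD(head.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def continual_step(x_new, y_new, replay_batch=32):
    """One on-device update: mix new latents with replayed ones to limit forgetting."""
    with torch.no_grad():
        z_new = feature_extractor(x_new)   # encode new data once, cheaply
    zs, ys = [z_new], [y_new]
    if replay_buffer:
        picks = torch.randint(len(replay_buffer), (min(replay_batch, len(replay_buffer)),))
        zs.append(torch.stack([replay_buffer[i][0] for i in picks.tolist()]))
        ys.append(torch.stack([replay_buffer[i][1] for i in picks.tolist()]))
    loss = loss_fn(head(torch.cat(zs)), torch.cat(ys))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    replay_buffer.extend(zip(z_new, y_new))  # cache latents, not raw images
    return loss.item()

loss = continual_step(torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)))
```

Caching latents rather than raw images is what keeps replay affordable on a smartphone: storage per sample shrinks and the frozen lower layers never need a backward pass.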
Related papers
- KBAlign: Efficient Self Adaptation on Specific Knowledge Bases [75.78948575957081]
Large language models (LLMs) usually rely on retrieval-augmented generation to exploit knowledge materials on demand.
We propose KBAlign, an approach designed for efficient adaptation to downstream tasks involving knowledge bases.
Our method utilizes iterative training with self-annotated data such as Q&A pairs and revision suggestions, enabling the model to grasp the knowledge content efficiently.
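A minimal sketch of the iterative self-annotation loop this summary describes follows; `generate_qa` and `finetune` are hypothetical placeholders standing in for real LLM calls, so only the control flow is meant to reflect the approach.

```python
# Hedged sketch of an iterative self-annotation loop as described for KBAlign.
# generate_qa and finetune are hypothetical placeholders for real LLM calls.
from typing import Callable, List, Tuple

def kbalign_loop(kb_chunks: List[str],
                 generate_qa: Callable[[str], Tuple[str, str]],
                 finetune: Callable[[List[Tuple[str, str]]], None],
                 rounds: int = 3) -> None:
    """Each round: self-annotate the knowledge base, then train on the annotations."""
    for _ in range(rounds):
        qa_pairs = [generate_qa(chunk) for chunk in kb_chunks]  # model labels its own data
        finetune(qa_pairs)                                      # adapt on self-annotated Q&A

# Usage with trivial stand-ins:
kbalign_loop(
    ["The capital of France is Paris."],
    generate_qa=lambda c: (f"Question about: {c}", c),
    finetune=lambda pairs: print(f"training on {len(pairs)} pairs"),
)
```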
arXiv Detail & Related papers (2024-11-22T08:21:03Z)
- Machine Unlearning on Pre-trained Models by Residual Feature Alignment Using LoRA [15.542668474378633]
We propose a novel and efficient machine unlearning method on pre-trained models.
We leverage LoRA to decompose the model's intermediate features into pre-trained features and residual features.
The method aims to learn the zero residuals on the retained set and shifted residuals on the unlearning set.
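The sketch below illustrates that objective under stated assumptions: a frozen pre-trained linear layer with a LoRA branch whose output is treated as the residual feature, pushed toward zero on retained inputs and toward an assumed fixed shift on unlearning inputs. Shapes, rank, and the shift target are illustrative.

```python
# Rough sketch of residual-feature unlearning with LoRA; shapes, rank, and the
# shift target are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d_in=64, d_out=64, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)          # pre-trained layer, kept frozen
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))

    def residual(self, x):                          # LoRA branch only
        return x @ self.A.t() @ self.B.t()

    def forward(self, x):
        return self.base(x) + self.residual(x)

layer = LoRALinear()
opt = torch.optim.Adam([layer.A, layer.B], lr=1e-3)
shift = torch.ones(64)    # assumed fixed target that perturbs forgotten features

def unlearning_step(x_retain, x_forget):
    opt.zero_grad()
    loss = layer.residual(x_retain).pow(2).mean()                   # zero residuals on retained data
    loss = loss + (layer.residual(x_forget) - shift).pow(2).mean()  # shifted residuals on forget data
    loss.backward()
    opt.step()
    return loss.item()

loss = unlearning_step(torch.randn(16, 64), torch.randn(16, 64))
```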
arXiv Detail & Related papers (2024-11-13T08:56:35Z)
- Edge Unlearning is Not "on Edge"! An Adaptive Exact Unlearning System on Resource-Constrained Devices [26.939025828011196]
The right to be forgotten mandates that machine learning models enable the erasure of a data owner's data and information from a trained model.
We propose a Constraint-aware Adaptive Exact Unlearning System at the network Edge (CAUSE) to enable exact unlearning on resource-constrained devices.
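CAUSE's internal design is not described in the summary above, so the sketch below shows only the generic shard-and-retrain pattern (as in SISA) that exact-unlearning systems commonly build on: deleting a sample triggers retraining of just the shard that contained it.

```python
# Generic shard-and-retrain exact unlearning (SISA-style), offered as one
# plausible building block, not CAUSE's actual design.
from typing import Callable, List, Tuple

def train_shards(data: List[Tuple], n_shards: int, fit: Callable[[List[Tuple]], object]):
    shards = [data[i::n_shards] for i in range(n_shards)]
    models = [fit(s) for s in shards]                 # one constituent model per shard
    return shards, models

def exact_unlearn(shards, models, item, fit):
    """Remove `item` and retrain only the shard that contained it."""
    for i, shard in enumerate(shards):
        if item in shard:
            shard.remove(item)
            models[i] = fit(shard)                    # retraining one shard = exact erasure
            break
    return models

# Usage with a stub fit function:
shards, models = train_shards([(x, x % 2) for x in range(100)], n_shards=4, fit=lambda s: len(s))
models = exact_unlearn(shards, models, (3, 1), fit=lambda s: len(s))
```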
arXiv Detail & Related papers (2024-10-14T03:28:09Z)
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC)
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
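ARC's exact procedure is not spelled out above; the snippet below only illustrates the general flavor of correcting recency bias at inference time, damping the logits of the newest task's classes by a calibration factor that would, in practice, be estimated rather than hard-coded.

```python
# Illustrative post-hoc correction of recency bias, not ARC's actual algorithm:
# damp the logits of recently learned classes before taking the argmax.
import torch

def corrected_predict(logits: torch.Tensor, new_class_ids: list, damp: float = 0.8):
    """Scale down logits of the newest task's classes to counter head bias."""
    adjusted = logits.clone()
    adjusted[:, new_class_ids] *= damp
    return adjusted.argmax(dim=1)

preds = corrected_predict(torch.randn(4, 10), new_class_ids=[8, 9])
```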
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- Layer Attack Unlearning: Fast and Accurate Machine Unlearning via Layer Level Attack and Knowledge Distillation [21.587358050012032]
We propose a fast and novel machine unlearning paradigm at the layer level called layer attack unlearning.
In this work, we introduce the Partial-PGD algorithm to locate the samples to forget efficiently.
We also use Knowledge Distillation (KD) to reliably learn the decision boundaries from the teacher.
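Partial-PGD is specific to the paper and is not reproduced here; the sketch below shows only the standard knowledge-distillation loss the summary mentions, used to pull the student's decision boundaries toward the teacher's on retained data.

```python
# Standard soft-target knowledge distillation (Hinton et al.); temperature T
# is an illustrative choice.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T: float = 2.0):
    """Match the teacher's softened output distribution."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

loss = kd_loss(torch.randn(8, 10), torch.randn(8, 10))
```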
arXiv Detail & Related papers (2023-12-28T04:38:06Z)
- Small Dataset, Big Gains: Enhancing Reinforcement Learning by Offline Pre-Training with Model Based Augmentation [59.899714450049494]
Offline pre-training can produce sub-optimal policies and lead to degraded online reinforcement learning performance.
We propose a model-based data augmentation strategy to maximize the benefits of offline reinforcement learning pre-training and reduce the scale of data needed to be effective.
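A hedged sketch of the augmentation idea: a learned dynamics model branches one-step synthetic transitions off logged states to enlarge the offline dataset. The function signatures and the one-step rollout horizon are assumptions, not the paper's algorithm.

```python
# Model-based data augmentation for offline RL, sketched under assumptions:
# branch short synthetic rollouts from real logged states.
import random
from typing import Callable, List, Tuple

Transition = Tuple[list, int, float, list]  # (state, action, reward, next_state)

def augment(dataset: List[Transition],
            dynamics: Callable[[list, int], Tuple[list, float]],
            policy: Callable[[list], int],
            n_synthetic: int) -> List[Transition]:
    synthetic = []
    for _ in range(n_synthetic):
        s, *_ = random.choice(dataset)          # branch from a real logged state
        a = policy(s)
        s_next, r = dynamics(s, a)              # model-predicted outcome
        synthetic.append((s, a, r, s_next))
    return dataset + synthetic

# Usage with stub dynamics and policy:
data = [([0.0], 0, 1.0, [0.1])]
data = augment(data, dynamics=lambda s, a: ([s[0] + 0.1], 0.5), policy=lambda s: 1, n_synthetic=3)
```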
arXiv Detail & Related papers (2023-12-15T14:49:41Z)
- Developing a Resource-Constraint EdgeAI model for Surface Defect Detection [1.338174941551702]
We propose a lightweight EdgeAI architecture modified from Xception for on-device training in a resource-constrained edge environment.
We evaluate our model on a PCB defect detection task and compare its performance against existing lightweight models.
Our method can be applied to other resource-constrained applications while maintaining competitive performance.
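The paper's exact modifications are not given above; below is the depthwise separable convolution at the core of Xception, the main source of its parameter savings, written in PyTorch for illustration.

```python
# Depthwise separable convolution, the building block Xception is based on;
# channel counts and kernel size here are illustrative.
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in)  # per-channel spatial filter
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1)                  # 1x1 channel mixing

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

y = SeparableConv(32, 64)(torch.randn(1, 32, 56, 56))  # -> (1, 64, 56, 56)
```

Splitting a standard convolution into these two steps cuts parameters and multiply-accumulates roughly by a factor of the kernel area, which is what makes the architecture attractive for on-device training.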
arXiv Detail & Related papers (2023-12-04T15:28:31Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
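BAdam's specific update is not detailed above; the snippet shows the generic quadratic prior penalty that prior-based methods of this family (e.g. EWC) add to the task loss, pulling parameters toward their previously learned values in proportion to a per-parameter importance weight.

```python
# Generic prior-based regularizer (EWC-style), illustrative of the family
# BAdam belongs to rather than BAdam's exact update.
import torch

def prior_penalty(params, old_params, importance, strength=1.0):
    """Penalize drift from previously learned weights, scaled by importance."""
    penalty = torch.tensor(0.0)
    for p, p_old, w in zip(params, old_params, importance):
        penalty = penalty + (w * (p - p_old).pow(2)).sum()
    return strength * penalty

params = [torch.randn(5, requires_grad=True)]
reg = prior_penalty(params, [p.detach().clone() for p in params], [torch.ones(5)])
```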
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
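PILOT's actual API is not shown above, so rather than invent it, the sketch below illustrates the prompt-pool mechanism behind L2P, one of the algorithms it implements: prompts whose learned keys best match the input feature are selected and prepended to the token sequence.

```python
# L2P-style prompt pool, sketched for illustration; pool size, prompt length,
# and embedding dimension are assumptions.
import torch
import torch.nn as nn

class PromptPool(nn.Module):
    def __init__(self, pool_size=10, prompt_len=5, dim=768, top_k=3):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(pool_size, dim))
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.top_k = top_k

    def forward(self, query, tokens):
        """query: (B, dim) image feature; tokens: (B, N, dim) patch embeddings."""
        sim = torch.nn.functional.cosine_similarity(
            query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)   # (B, pool)
        idx = sim.topk(self.top_k, dim=1).indices                 # pick best-matching prompts
        selected = self.prompts[idx].flatten(1, 2)                # (B, top_k*prompt_len, dim)
        return torch.cat([selected, tokens], dim=1)

out = PromptPool()(torch.randn(2, 768), torch.randn(2, 196, 768))
```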
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- Augmented Bilinear Network for Incremental Multi-Stock Time-Series Classification [83.23129279407271]
We propose a method to efficiently retain the knowledge available in a neural network pre-trained on a set of securities.
In our method, the prior knowledge encoded in a pre-trained neural network is maintained by keeping existing connections fixed.
This knowledge is adjusted for the new securities by a set of augmented connections, which are optimized using the new data.
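A minimal sketch of the freeze-and-augment idea under stated assumptions: pre-trained weights stay fixed while a parallel, zero-initialized set of connections is optimized on data from the new securities. This is a plain linear layer with illustrative shapes, not the paper's bilinear network.

```python
# Frozen pre-trained connections plus trainable augmented connections,
# sketched with a plain linear layer for illustration.
import torch
import torch.nn as nn

class AugmentedLinear(nn.Module):
    def __init__(self, d_in=32, d_out=16):
        super().__init__()
        self.frozen = nn.Linear(d_in, d_out)        # prior knowledge, kept fixed
        for p in self.frozen.parameters():
            p.requires_grad = False
        self.augment = nn.Linear(d_in, d_out)       # new connections, trained on new data
        nn.init.zeros_(self.augment.weight)         # start by reproducing old behavior
        nn.init.zeros_(self.augment.bias)

    def forward(self, x):
        return self.frozen(x) + self.augment(x)

y = AugmentedLinear()(torch.randn(4, 32))
```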
arXiv Detail & Related papers (2022-07-23T18:54:10Z)
- Privacy Enhancing Machine Learning via Removal of Unwanted Dependencies [21.97951347784442]
This paper studies new variants of supervised and adversarial learning methods that remove sensitive information from data before it is sent out for a particular application.
The explored methods optimize privacy preserving feature mappings and predictive models simultaneously in an end-to-end fashion.
Experimental results on mobile sensing and face datasets demonstrate that our models can successfully maintain the utility performances of predictive models while causing sensitive predictions to perform poorly.
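The sketch below illustrates one common end-to-end formulation of this idea: an encoder and task head are trained for utility while an adversary that predicts the sensitive attribute from the encoded features is simultaneously trained and penalized. Layer sizes and the trade-off weight `lam` are assumptions.

```python
# Adversarial privacy-preserving feature mapping, sketched under assumptions.
import torch
import torch.nn as nn

enc = nn.Linear(20, 8)        # privacy-preserving feature mapping
task = nn.Linear(8, 2)        # utility predictor
adv = nn.Linear(8, 2)         # sensitive-attribute adversary
ce = nn.CrossEntropyLoss()
opt_main = torch.optim.Adam(list(enc.parameters()) + list(task.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)

def step(x, y_task, y_sensitive, lam=1.0):
    # 1) adversary learns to recover the sensitive attribute from features
    opt_adv.zero_grad()
    ce(adv(enc(x).detach()), y_sensitive).backward()
    opt_adv.step()
    # 2) encoder + task head: preserve utility while fooling the adversary
    opt_main.zero_grad()
    loss = ce(task(enc(x)), y_task) - lam * ce(adv(enc(x)), y_sensitive)
    loss.backward()   # opt_main updates only enc/task parameters
    opt_main.step()

step(torch.randn(16, 20), torch.randint(0, 2, (16,)), torch.randint(0, 2, (16,)))
```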
arXiv Detail & Related papers (2020-07-30T19:55:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.