Incremental Online Learning Algorithms Comparison for Gesture and Visual
Smart Sensors
- URL: http://arxiv.org/abs/2209.00591v1
- Date: Thu, 1 Sep 2022 17:05:20 GMT
- Title: Incremental Online Learning Algorithms Comparison for Gesture and Visual
Smart Sensors
- Authors: Alessandro Avi, Andrea Albanese, Davide Brunelli
- Abstract summary: This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tiny machine learning (TinyML) in IoT systems exploits MCUs as edge devices
for data processing. However, traditional TinyML methods can only perform
inference, limited to static environments or classes. Real-world scenarios,
however, usually involve dynamic environments, so the context drifts and the
original neural model is no longer suitable. As a result, pre-trained models
lose accuracy and reliability over their lifetime because the recorded data
slowly become obsolete or new patterns appear. Continual learning strategies
keep the model up to date through runtime fine-tuning of its parameters. This
paper compares four state-of-the-art algorithms in two real applications: i)
gesture recognition based on accelerometer data and ii) image classification.
Our results confirm these systems' reliability and the feasibility of deploying
them in tiny-memory MCUs, with an accuracy drop of only a few percentage
points with respect to the original models designed for unconstrained
computing platforms.
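The abstract's core idea (runtime fine-tuning of a deployed model so it tracks a drifting data distribution) can be illustrated with a small sketch. The code below is a hedged, generic example rather than any of the four algorithms the paper actually benchmarks: it assumes a frozen feature extractor on the MCU and incrementally updates only a lightweight softmax head with per-sample SGD, which keeps memory use compatible with tiny-memory devices. The class name `OnlineSoftmaxHead` and all hyperparameters are hypothetical.

```python
# Generic sketch of incremental online learning for a TinyML classifier:
# a frozen backbone produces features; only the small softmax head below is
# fine-tuned, one sample at a time, so updates fit an MCU's RAM budget.
import numpy as np

class OnlineSoftmaxHead:
    """Hypothetical streaming last-layer classifier (not the paper's code)."""

    def __init__(self, n_features: int, n_classes: int, lr: float = 0.01):
        self.W = np.zeros((n_classes, n_features))
        self.b = np.zeros(n_classes)
        self.lr = lr

    def predict_proba(self, x: np.ndarray) -> np.ndarray:
        logits = self.W @ x + self.b
        logits -= logits.max()          # numerical stability
        exp = np.exp(logits)
        return exp / exp.sum()

    def partial_fit(self, x: np.ndarray, y: int) -> None:
        # Single cross-entropy SGD step on one (feature vector, label) pair.
        grad_logits = self.predict_proba(x)
        grad_logits[y] -= 1.0           # d(loss)/d(logits) = p - one_hot(y)
        self.W -= self.lr * np.outer(grad_logits, x)
        self.b -= self.lr * grad_logits

# Usage with a simulated stream; on-device, x would come from the frozen
# backbone applied to accelerometer windows or camera frames.
head = OnlineSoftmaxHead(n_features=64, n_classes=4)
rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.standard_normal(64), int(rng.integers(4))
    head.partial_fit(x, y)
```

In a deployment like the one described here, such per-sample updates let the classifier absorb new gestures or image classes at runtime, at the cost of a small accuracy gap versus full retraining on an unconstrained platform.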
Related papers
- A Continual and Incremental Learning Approach for TinyML On-device Training Using Dataset Distillation and Model Size Adaption [0.4345992906143838]
A new algorithm for incremental learning in the context of Tiny Machine learning (TinyML) is presented.
It is optimized for low-performance and energy-efficient embedded devices.
Results show that the proposed algorithm offers a promising approach for TinyML incremental learning on embedded devices.
arXiv Detail & Related papers (2024-09-11T09:02:33Z) - Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC)
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets.
arXiv Detail & Related papers (2024-05-23T08:43:09Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - Explainable Lifelong Stream Learning Based on "Glocal" Pairwise Fusion [17.11983414681928]
Real-time on-device continual learning applications are used on mobile phones, consumer robots, and smart appliances.
This study presents the Explainable Lifelong Learning (ExLL) model, which incorporates several important traits.
ExLL outperforms all compared algorithms in accuracy in the majority of the tested scenarios.
arXiv Detail & Related papers (2023-06-23T09:54:48Z) - Benchmarking Learning Efficiency in Deep Reservoir Computing [23.753943709362794]
We introduce a benchmark of increasingly difficult tasks together with a data efficiency metric to measure how quickly machine learning models learn from training data.
We compare the learning speed of some established sequential supervised models, such as RNNs, LSTMs, or Transformers, with relatively less known alternative models based on reservoir computing.
arXiv Detail & Related papers (2022-09-29T08:16:52Z) - Value-Consistent Representation Learning for Data-Efficient
Reinforcement Learning [105.70602423944148]
We propose a novel method, called value-consistent representation learning (VCR), to learn representations that are directly related to decision-making.
Instead of aligning this imagined state with a real state returned by the environment, VCR applies a $Q$-value head on both states and obtains two distributions of action values.
It has been demonstrated that our methods achieve new state-of-the-art performance for search-free RL algorithms.
arXiv Detail & Related papers (2022-06-25T03:02:25Z) - Memory Efficient Continual Learning for Neural Text Classification [10.70710638820641]
We devise a method to perform text classification using pre-trained models on a sequence of classification tasks.
We empirically demonstrate that our method requires significantly less model parameters compared to other state of the art methods.
While our method suffers little forgetting, it retains predictive performance on par with state-of-the-art but less memory-efficient methods.
arXiv Detail & Related papers (2022-03-09T10:57:59Z) - Benchmarking Detection Transfer Learning with Vision Transformers [60.97703494764904]
The complexity of object detection methods can make benchmarking non-trivial when new architectures, such as Vision Transformer (ViT) models, arrive.
We present training techniques that overcome these challenges, enabling the use of standard ViT models as the backbone of Mask R-CNN.
Our results show that recent masking-based unsupervised learning methods may, for the first time, provide convincing transfer learning improvements on COCO.
arXiv Detail & Related papers (2021-11-22T18:59:15Z) - Learning to Continuously Optimize Wireless Resource In Episodically
Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures certain "fairness" across different data samples.
arXiv Detail & Related papers (2020-11-16T08:24:34Z) - Training Deep Neural Networks with Constrained Learning Parameters [4.917317902787792]
A significant portion of deep learning tasks would run on edge computing systems.
We propose the Combinatorial Neural Network Training Algorithm (CoNNTrA)
CoNNTrA trains deep learning models with ternary learning parameters on the MNIST, Iris and ImageNet data sets.
Our results indicate that CoNNTrA models use 32x less memory and have errors on par with models trained by standard backpropagation.
arXiv Detail & Related papers (2020-09-01T16:20:11Z)