Balancing Continual Learning and Fine-tuning for Human Activity
Recognition
- URL: http://arxiv.org/abs/2401.02255v1
- Date: Thu, 4 Jan 2024 13:11:43 GMT
- Title: Balancing Continual Learning and Fine-tuning for Human Activity
Recognition
- Authors: Chi Ian Tang, Lorena Qendro, Dimitris Spathis, Fahim Kawsar, Akhil
Mathur, Cecilia Mascolo
- Abstract summary: Wearable-based Human Activity Recognition (HAR) is a key task in human-centric machine learning.
This work explores the adoption and adaptation of CaSSLe, a continual self-supervised learning model.
We also investigated the importance of different loss terms and explored the trade-off between knowledge retention and learning from new tasks.
- Score: 21.361301806478643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wearable-based Human Activity Recognition (HAR) is a key task in
human-centric machine learning due to its fundamental role in understanding
human behaviours. Due to the dynamic nature of human behaviours, continual
learning promises HAR systems that are tailored to users' needs. However,
because of the difficulty of collecting labelled data with wearable sensors,
existing approaches that focus on supervised continual learning have limited
applicability, while unsupervised continual learning methods handle only
representation learning, delaying classifier training to a later stage.
This work explores the adoption and adaptation of CaSSLe, a continual
self-supervised learning model, and Kaizen, a semi-supervised continual
learning model that balances representation learning and downstream
classification, for the task of wearable-based HAR. These schemes repurpose
contrastive learning for knowledge retention, and Kaizen combines it with
self-training in a unified scheme that can leverage unlabelled and labelled
data for continual learning. In addition to comparing state-of-the-art
self-supervised continual learning schemes, we further investigated the
importance of different loss terms and explored the trade-off between knowledge
retention and learning from new tasks. In particular, our extensive evaluation
demonstrated that the use of a weighting factor that reflects the ratio between
learned and new classes achieves the best overall trade-off in continual
learning.
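The weighting-factor idea lends itself to a short illustration. The following is a minimal, hypothetical sketch (not the authors' released code): a knowledge-retention term and a new-task term are mixed by a factor derived from the ratio of learned to new classes, with simple placeholder losses standing in for the paper's contrastive distillation and self-training objectives.

```python
import torch.nn.functional as F

def continual_loss(new_logits, labels, feats, old_feats,
                   n_learned_classes, n_new_classes):
    """Hypothetical combined objective for continual learning."""
    # Knowledge retention: keep current features close to the frozen old
    # model's features (a stand-in for contrastive-style distillation).
    retention = F.mse_loss(feats, old_feats.detach())
    # Learning from the new task: standard cross-entropy.
    plasticity = F.cross_entropy(new_logits, labels)
    # Weighting factor reflecting the ratio between learned and new classes.
    alpha = n_learned_classes / (n_learned_classes + n_new_classes)
    return alpha * retention + (1 - alpha) * plasticity
```

With, say, 8 already-learned classes and 2 new ones, alpha = 0.8, so retention increasingly dominates as the class history grows.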
Related papers
- Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
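Read literally, this suggests a KL regularizer taken against several stored posterior snapshots rather than only the most recent one. A hedged sketch, assuming mean-field Gaussian posteriors and an illustrative geometric weighting (both are assumptions, not the paper's exact formulation):

```python
import torch

def kl_gaussian(mu_q, logvar_q, mu_p, logvar_p):
    # KL divergence between two diagonal Gaussians, KL(q || p).
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0)

def multi_posterior_regularizer(mu_q, logvar_q, snapshots, decay=0.5):
    """Regularize the current posterior against a list of previous
    posterior estimates (most recent first); weights decay geometrically."""
    reg, weight = 0.0, 1.0
    for mu_p, logvar_p in snapshots:
        reg = reg + weight * kl_gaussian(mu_q, logvar_q, mu_p, logvar_p)
        weight *= decay
    return reg
```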
arXiv Detail & Related papers (2024-10-10T10:58:41Z)
- Benchmarking Continual Learning from Cognitive Perspectives [14.867136605254975]
Continual learning addresses the problem of continuously acquiring and transferring knowledge without catastrophic forgetting of old concepts.
There is a mismatch between cognitive properties and evaluation methods of continual learning models.
We propose to integrate model cognitive capacities and evaluation metrics into a unified evaluation paradigm.
arXiv Detail & Related papers (2023-12-06T06:27:27Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
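BAdam's actual update is not reproduced here; the sketch below shows only the generic shape of a prior-based penalty that constrains parameter drift from earlier tasks, in the style of quadratic (EWC-like) regularizers:

```python
def prior_penalty(model, prior_means, importances, strength=1.0):
    """Generic prior-based regularizer (illustrative, not BAdam itself):
    penalize each parameter's drift from its value after earlier tasks,
    scaled by a per-parameter importance estimate."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importances[name]
                             * (p - prior_means[name]) ** 2).sum()
    return strength * penalty
```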
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- A Domain-Agnostic Approach for Characterization of Lifelong Learning Systems [128.63953314853327]
"Lifelong Learning" systems are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability.
We show that this suite of metrics can inform the development of varied and complex Lifelong Learning systems.
arXiv Detail & Related papers (2023-01-18T21:58:54Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the light-weight active learner which then provides feedback on the active sampling criterion.
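A structural sketch of that split; the class, the informativeness score, and the feedback signal are all hypothetical stand-ins for whatever PSL actually uses:

```python
import random

class ActiveLearner:
    """Hypothetical light-weight learner that scores unlabelled items."""
    def __init__(self):
        self.criterion = 0.5
    def informativeness(self, x):
        return random.random()        # stand-in uncertainty score
    def update_criterion(self, feedback):
        self.criterion = feedback     # adjust the sampling criterion

def peer_study_round(learner, unlabelled_pool, label_fn, k=4):
    # Learner side: rank the pool and pick the k most informative items.
    ranked = sorted(unlabelled_pool, key=learner.informativeness, reverse=True)
    batch = [(x, label_fn(x)) for x in ranked[:k]]
    # Teacher side (cloud): sees only the labelled batch, never the raw
    # pool, which is what isolates unlabelled data from the task learner.
    feedback = sum(y for _, y in batch) / len(batch)   # toy feedback
    learner.update_criterion(feedback)
    return batch
```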
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- Assessing the State of Self-Supervised Human Activity Recognition using Wearables [6.777825307593778]
This paper assesses the state of self-supervised learning in the field of wearables-based human activity recognition (HAR).
Self-supervised methods enable a host of new application domains, such as domain adaptation and transfer across sensor positions, activities, etc.
arXiv Detail & Related papers (2022-02-22T02:21:50Z)
- Continually Learning Self-Supervised Representations with Projected Functional Regularization [39.92600544186844]
Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised methods.
These methods are unable to acquire new knowledge incrementally -- they are, in fact, mostly used only as a pre-training phase with IID data.
To prevent forgetting of previous knowledge, we propose the usage of functional regularization.
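One plausible reading of that mechanism, sketched with an assumed two-layer projector and an illustrative cosine distance (the paper's exact projector and loss may differ):

```python
import torch.nn as nn
import torch.nn.functional as F

class ProjectedFunctionalReg(nn.Module):
    """Sketch: map current features through a small learned projector and
    pull them toward the frozen previous model's features, so the new
    representation stays predictive of the old one."""
    def __init__(self, dim=128):
        super().__init__()
        self.projector = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, feats_new, feats_old):
        # Negative cosine similarity between projected new features and
        # detached old features; gradients only shape the new model.
        z = self.projector(feats_new)
        return -F.cosine_similarity(z, feats_old.detach(), dim=-1).mean()
```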
arXiv Detail & Related papers (2021-12-30T11:59:23Z)
- Continual Learning with Neuron Activation Importance [1.7513645771137178]
Continual learning is a form of online learning over multiple sequential tasks.
One of the critical barriers in continual learning is that a network must learn a new task while keeping the knowledge of old tasks, without access to any data from those tasks.
We propose a neuron activation importance-based regularization method for stable continual learning regardless of the order of tasks.
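A hedged sketch of one way such a regularizer could look, taking mean absolute activation over a batch as the importance estimate (an assumption, not necessarily the paper's definition):

```python
import torch

@torch.no_grad()
def activation_importance(acts):
    # Per-neuron importance as mean absolute activation over the batch.
    return acts.abs().mean(dim=0)

def importance_regularizer(acts_new, acts_old, importance):
    # Penalize drift of each neuron's activation from its value under the
    # old model, weighted by how important that neuron was.
    return (importance * (acts_new - acts_old.detach()) ** 2).mean()
```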
arXiv Detail & Related papers (2021-07-27T08:09:32Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
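A minimal sketch of the rehearsal side only (the reservoir buffer is an assumed implementation detail; Co$^2$L's contrastive loss itself is not shown):

```python
import random

class RehearsalBuffer:
    """Tiny reservoir-style buffer: samples kept from past tasks are
    replayed alongside the current batch during contrastive training."""
    def __init__(self, capacity=200):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(x)
        else:                                  # reservoir sampling
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = x

    def mix(self, batch, k=16):
        # The contrastive loss would then run on current + replayed samples.
        return batch + random.sample(self.data, min(k, len(self.data)))
```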
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Understand and Improve Contrastive Learning Methods for Visual Representation: A Review [1.4650545418986058]
A promising alternative, self-supervised learning, has gained popularity because of its potential to learn effective data representations without manual labeling.
This literature review aims to provide an up-to-date analysis of the efforts of researchers to understand the key components and the limitations of self-supervised learning.
arXiv Detail & Related papers (2021-06-06T21:59:49Z)
- Bilevel Continual Learning [76.50127663309604]
We present a novel framework of continual learning named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)