Unsupervised Continual Learning via Self-Adaptive Deep Clustering Approach
- URL: http://arxiv.org/abs/2106.14563v1
- Date: Mon, 28 Jun 2021 10:37:14 GMT
- Title: Unsupervised Continual Learning via Self-Adaptive Deep Clustering Approach
- Authors: Mahardhika Pratama, Andri Ashfahani, Edwin Lughofer
- Abstract summary: Knowledge Retention in Self-Adaptive Deep Continual Learner (KIERA) is proposed in this paper.
KIERA is developed from the notion of a flexible deep clustering approach possessing an elastic network structure to cope with changing environments in a timely manner.
- Score: 20.628084936538055
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Unsupervised continual learning remains a relatively uncharted territory in the existing literature because the vast majority of existing works call for unlimited access to ground truth, incurring expensive labelling costs. Another issue lies in the problem of task boundaries and task IDs, which must be known for model updates or predictions, hindering feasibility for real-time deployment. Knowledge Retention in Self-Adaptive Deep Continual Learner (KIERA) is proposed in this paper. KIERA is developed from the notion of a flexible deep clustering approach possessing an elastic network structure to cope with changing environments in a timely manner. A centroid-based experience replay is put forward to overcome the catastrophic forgetting problem. KIERA does not exploit any labelled samples for model updates and remains task-agnostic. The advantage of KIERA has been numerically validated on popular continual learning problems, where it shows highly competitive performance compared to state-of-the-art approaches. Our implementation is available at https://github.com/ContinualAL/KIERA.
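To make the two mechanisms named in the abstract concrete, below is a minimal, self-contained sketch (NumPy only) of an elastic set of cluster centroids that grows when incoming data falls far from every existing centroid, together with a centroid-based replay step that rehearses stored centroids instead of raw past samples when the network is updated. The tiny linear autoencoder, the distance threshold, the learning rates, and all names and constants are illustrative assumptions; they are not taken from the KIERA paper or its repository, and the actual method operates on deep network features with its own growth and replay criteria.

```python
import numpy as np


class CentroidReplayAutoencoder:
    """Tiny linear autoencoder whose past experience is summarised by an
    elastic set of input-space centroids; the centroids double as the replay
    memory when the network is updated on new data."""

    def __init__(self, in_dim, latent_dim=4, grow_threshold=2.0, lr=1e-3):
        rng = np.random.default_rng(0)
        self.W = rng.normal(scale=0.1, size=(latent_dim, in_dim))  # encoder
        self.V = rng.normal(scale=0.1, size=(in_dim, latent_dim))  # decoder
        self.grow_threshold = grow_threshold
        self.lr = lr
        self.centroids = []  # elastic: grows when new data regions appear

    def _update_centroids(self, x):
        # Grow a new cluster if x is far from every centroid, otherwise nudge
        # the nearest centroid toward x (winner-take-all update).
        if not self.centroids:
            self.centroids.append(x.copy())
            return
        dists = [np.linalg.norm(x - c) for c in self.centroids]
        idx = int(np.argmin(dists))
        if dists[idx] > self.grow_threshold:
            self.centroids.append(x.copy())
        else:
            self.centroids[idx] += 0.05 * (x - self.centroids[idx])

    def _sgd_step(self, batch):
        # One reconstruction step: x -> z = x W^T -> recon = z V^T.
        n = len(batch)
        z = batch @ self.W.T
        err = z @ self.V.T - batch
        grad_V = err.T @ z / n
        grad_W = (err @ self.V).T @ batch / n
        self.V -= self.lr * grad_V
        self.W -= self.lr * grad_W

    def reconstruction_error(self, batch):
        recon = (batch @ self.W.T) @ self.V.T
        return float(np.mean((recon - batch) ** 2))

    def learn_batch(self, batch):
        """Label-free, task-ID-free update: track new data regions with the
        elastic centroids, then append the stored centroids to the batch as
        pseudo-samples (centroid-based replay) so that reconstruction of
        earlier regions is rehearsed rather than overwritten."""
        for x in batch:
            self._update_centroids(x)
        replayed = np.vstack([batch, np.stack(self.centroids)])
        self._sgd_step(replayed)


# Usage: an unlabeled stream whose distribution shifts between two "tasks".
rng = np.random.default_rng(1)
model = CentroidReplayAutoencoder(in_dim=8)
task_a = rng.normal(0.0, 0.5, size=(256, 8))
task_b = rng.normal(3.0, 0.5, size=(256, 8))   # shifted distribution
for task in (task_a, task_b):
    for i in range(0, len(task), 32):
        model.learn_batch(task[i:i + 32])
print(len(model.centroids), "centroids summarise the stream")
print("reconstruction error on the first task:",
      round(model.reconstruction_error(task_a), 4))
```

Growing a centroid only when no existing cluster explains an incoming sample keeps the summary of the stream compact, while mixing the stored centroids into every update rehearses earlier data regions without retaining raw samples, which is the intuition behind using centroids as the replay memory.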
Related papers
- Hierarchical Subspaces of Policies for Continual Offline Reinforcement Learning [19.463863037999054]
In dynamic domains such as autonomous robotics and video game simulations, agents must continuously adapt to new tasks while retaining previously acquired skills.
This ongoing process, known as Continual Reinforcement Learning, presents significant challenges, including the risk of forgetting past knowledge.
We introduce HIerarchical LOW-rank Subspaces of Policies (HILOW), a novel framework designed for continual learning in offline navigation settings.
arXiv Detail & Related papers (2024-12-19T14:00:03Z) - Continual Task Learning through Adaptive Policy Self-Composition [54.95680427960524]
CompoFormer is a structure-based continual transformer model that adaptively composes previous policies via a meta-policy network.
Our experiments reveal that CompoFormer outperforms conventional continual learning (CL) methods, particularly in longer task sequences.
arXiv Detail & Related papers (2024-11-18T08:20:21Z) - Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
arXiv Detail & Related papers (2024-10-10T10:58:41Z) - Forgetting, Ignorance or Myopia: Revisiting Key Challenges in Online Continual Learning [29.65600202138321]
In high-speed data stream environments, data do not pause to accommodate slow models.
Model's ignorance: the single-pass nature of OCL challenges models to learn effective features within constrained training time.
Model's myopia: the local learning nature of OCL leads the model to adopt overly simplified, task-specific features.
arXiv Detail & Related papers (2024-09-28T05:24:56Z) - Towards Continual Learning Desiderata via HSIC-Bottleneck
Orthogonalization and Equiangular Embedding [55.107555305760954]
We propose a conceptually simple yet effective method that attributes forgetting to layer-wise parameter overwriting and the resulting decision boundary distortion.
Our method achieves competitive accuracy while using zero exemplar buffer and only 1.02x the size of the base model.
arXiv Detail & Related papers (2024-01-17T09:01:29Z) - Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z) - Complementary Learning Subnetworks for Parameter-Efficient
Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order.
arXiv Detail & Related papers (2023-06-21T01:43:25Z) - Mitigating Catastrophic Forgetting in Task-Incremental Continual
Learning with Adaptive Classification Criterion [50.03041373044267]
We propose a Supervised Contrastive learning framework with adaptive classification criterion for Continual Learning.
Experiments show that CFL achieves state-of-the-art performance and has a stronger ability to overcome catastrophic forgetting than the classification baselines.
arXiv Detail & Related papers (2023-05-20T19:22:40Z) - Large-scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery [76.63807209414789]
We challenge the status quo in class-iNCD and propose a learning paradigm where class discovery occurs continuously and in a truly unsupervised manner.
We propose simple baselines, composed of a frozen PTM backbone and a learnable linear classifier, that are not only simple to implement but also resilient under longer learning scenarios; a rough sketch of this recipe is given after this list.
arXiv Detail & Related papers (2023-03-28T13:47:16Z) - Instance exploitation for learning temporary concepts from sparsely
labeled drifting data streams [15.49323098362628]
Continual learning from streaming data sources is becoming increasingly popular.
Dealing with dynamic and everlasting problems, however, poses new challenges.
One of the most crucial limitations is that we cannot assume access to a finite and complete data set.
arXiv Detail & Related papers (2020-09-20T08:11:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.