Continually Learning Self-Supervised Representations with Projected
Functional Regularization
- URL: http://arxiv.org/abs/2112.15022v1
- Date: Thu, 30 Dec 2021 11:59:23 GMT
- Title: Continually Learning Self-Supervised Representations with Projected
Functional Regularization
- Authors: Alex Gomez-Villa, Bartlomiej Twardowski, Lu Yu, Andrew D. Bagdanov,
Joost van de Weijer
- Abstract summary: Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised methods.
These methods are unable to acquire new knowledge incrementally -- they are, in fact, mostly used only as a pre-training phase with IID data.
To prevent forgetting of previous knowledge, we propose the use of functional regularization.
- Score: 39.92600544186844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent self-supervised learning methods are able to learn high-quality image
representations and are closing the gap with supervised methods. However, these
methods are unable to acquire new knowledge incrementally -- they are, in fact,
mostly used only as a pre-training phase with IID data. In this work we
investigate self-supervised methods in continual learning regimes without
additional memory or replay. To prevent forgetting of previous knowledge, we
propose the use of functional regularization. We show that naive
functional regularization, also known as feature distillation, leads to low
plasticity and therefore seriously limits continual learning performance. To
address this problem, we propose Projected Functional Regularization where a
separate projection network ensures that the newly learned feature space
preserves information of the previous feature space, while allowing for the
learning of new features. This allows us to prevent forgetting while
maintaining the plasticity of the learner. Evaluation against other incremental
learning approaches applied to self-supervision demonstrates that our method
obtains competitive performance in different scenarios and on multiple
datasets.
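As a rough illustration of the idea described above, the sketch below shows how a projected functional regularizer could be attached to a self-supervised learner in PyTorch. This is a minimal sketch under assumptions, not the authors' implementation: the frozen snapshot of the previous-task encoder, the two-layer projector, the negative-cosine distillation loss, and the weighting factor `lambda_pfr` are all illustrative choices.

```python
# Minimal, illustrative sketch of projected functional regularization.
# Assumes a generic self-supervised encoder; all names (PFRRegularizer,
# feat_dim, hidden_dim, lambda_pfr) are hypothetical, not the paper's API.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def cosine_distillation_loss(projected: torch.Tensor, old: torch.Tensor) -> torch.Tensor:
    """Negative cosine similarity between projected new features and
    (detached) old features, averaged over the batch."""
    return -F.cosine_similarity(projected, old.detach(), dim=-1).mean()


class PFRRegularizer(nn.Module):
    """Regularizes a continually trained encoder against a frozen copy of
    itself taken at the previous task boundary. A small trainable projector
    maps the new feature space onto the old one, so new features only need
    to preserve the old information rather than stay identical to it."""

    def __init__(self, encoder: nn.Module, feat_dim: int, hidden_dim: int = 512):
        super().__init__()
        # Frozen snapshot of the encoder from the previous task.
        self.old_encoder = copy.deepcopy(encoder)
        for p in self.old_encoder.parameters():
            p.requires_grad = False
        # Learnable projection from the new feature space to the old one.
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, feat_dim),
        )

    def forward(self, new_features: torch.Tensor, images: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            old_features = self.old_encoder(images)
        projected = self.projector(new_features)
        return cosine_distillation_loss(projected, old_features)


# Usage inside a task's training loop (lambda_pfr is a hypothetical weight):
#   ssl_loss = ssl_criterion(encoder, images)     # any self-supervised objective
#   reg_loss = pfr(encoder(images), images)       # projected regularization term
#   loss = ssl_loss + lambda_pfr * reg_loss
```

Because the projector is trainable, the regularizer only requires that the old features remain recoverable from the new ones, which is what distinguishes this kind of projected regularization from plain feature distillation and keeps the learner plastic.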
Related papers
- Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
arXiv Detail & Related papers (2024-10-10T10:58:41Z) - Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z) - VERSE: Virtual-Gradient Aware Streaming Lifelong Learning with Anytime
Inference [36.61783715563126]
Streaming lifelong learning is a challenging setting of lifelong learning with the goal of continuous learning without forgetting.
We introduce a novel approach to lifelong learning that is streaming: each training example is observed only once.
We propose a novel virtual-gradient-based approach for continual representation learning that adapts to each new example while also generalizing well on past data to prevent catastrophic forgetting.
arXiv Detail & Related papers (2023-09-15T07:54:49Z) - Domain-Aware Augmentations for Unsupervised Online General Continual
Learning [7.145581090959242]
This paper proposes a novel approach that enhances memory usage for contrastive learning in Unsupervised Online General Continual Learning (UOGCL).
Our proposed method is simple yet effective and achieves state-of-the-art results compared to other unsupervised approaches in all considered setups.
Our domain-aware augmentation procedure can be adapted to other replay-based methods, making it a promising strategy for continual learning.
arXiv Detail & Related papers (2023-09-13T11:45:21Z) - Online Continual Learning via the Knowledge Invariant and Spread-out
Properties [4.109784267309124]
A key challenge in continual learning is catastrophic forgetting.
We propose a new method, named Online Continual Learning via the Knowledge Invariant and Spread-out Properties (OCLKISP).
We empirically evaluate our proposed method on four popular benchmarks for continual learning: Split CIFAR 100, Split SVHN, Split CUB200 and Split Tiny-Image-Net.
arXiv Detail & Related papers (2023-02-02T04:03:38Z) - Relational Experience Replay: Continual Learning by Adaptively Tuning
Task-wise Relationship [54.73817402934303]
We propose Relational Experience Replay (RER), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity trade-off.
RER consistently improves the performance of all baselines and surpasses current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z) - Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z) - Learning Invariant Representation for Continual Learning [5.979373021392084]
A key challenge in continual learning is catastrophically forgetting previously learned tasks when the agent faces a new one.
We propose a new pseudo-rehearsal-based method, named Learning Invariant Representation for Continual Learning (IRCL).
Disentangling the shared invariant representation helps to continually learn a sequence of tasks while being more robust to forgetting and enabling better knowledge transfer.
arXiv Detail & Related papers (2021-01-15T15:12:51Z) - Continual Deep Learning by Functional Regularisation of Memorable Past [95.97578574330934]
Continually learning new skills is important for intelligent systems, yet standard deep learning methods suffer from catastrophic forgetting of the past.
We propose a new functional-regularisation approach that utilises a few memorable past examples crucial to avoid forgetting.
Our method achieves state-of-the-art performance on standard benchmarks and opens a new direction for life-long learning where regularisation and memory-based methods are naturally combined.
arXiv Detail & Related papers (2020-04-29T10:47:54Z)