Domain-Aware Augmentations for Unsupervised Online General Continual
Learning
- URL: http://arxiv.org/abs/2309.06896v1
- Date: Wed, 13 Sep 2023 11:45:21 GMT
- Title: Domain-Aware Augmentations for Unsupervised Online General Continual
Learning
- Authors: Nicolas Michel, Romain Negrel, Giovanni Chierchia, Jean-François Bercher
- Abstract summary: This paper proposes a novel approach that enhances memory usage for contrastive learning in Unsupervised Online General Continual Learning (UOGCL).
Our proposed method is simple yet effective and achieves state-of-the-art results compared to other unsupervised approaches in all considered setups.
Our domain-aware augmentation procedure can be adapted to other replay-based methods, making it a promising strategy for continual learning.
- Score: 7.145581090959242
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual Learning has been challenging, especially when dealing with
unsupervised scenarios such as Unsupervised Online General Continual Learning
(UOGCL), where the learning agent has no prior knowledge of class boundaries or
task change information. While previous research has focused on reducing
forgetting in supervised setups, recent studies have shown that self-supervised
learners are more resilient to forgetting. This paper proposes a novel approach
that enhances memory usage for contrastive learning in UOGCL by defining and
using stream-dependent data augmentations together with some implementation
tricks. Our proposed method is simple yet effective, achieves state-of-the-art
results compared to other unsupervised approaches in all considered setups, and
reduces the gap between supervised and unsupervised continual learning. Our
domain-aware augmentation procedure can be adapted to other replay-based
methods, making it a promising strategy for continual learning.
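The abstract does not spell out the exact stream-dependent augmentations or implementation tricks, so the sketch below is only one illustrative reading, not the authors' method: a replay-based contrastive update in which images sampled from memory are restyled toward the statistics of the current stream batch before entering a standard contrastive loss. The class names, toy encoder, and hyper-parameters are all assumptions.

```python
# Illustrative sketch only (assumed names and hyper-parameters, not the paper's code):
# a replay-based contrastive step where images drawn from memory are blended toward
# the channel statistics of the current stream batch ("stream-dependent" augmentation).
import torch
import torch.nn.functional as F


class ReservoirBuffer:
    """Reservoir-sampled replay memory storing raw, unlabeled images."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, batch):
        for x in batch:
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append(x.clone())
            else:
                j = torch.randint(0, self.seen, (1,)).item()
                if j < self.capacity:
                    self.data[j] = x.clone()

    def sample(self, n):
        idx = torch.randperm(len(self.data))[:n]
        return torch.stack([self.data[i] for i in idx])


def stream_dependent_augment(memory_x, stream_x, alpha=0.2):
    """Blend replayed images toward the current stream's per-channel statistics
    (one possible reading of a domain-aware augmentation)."""
    m_mu = memory_x.mean(dim=(0, 2, 3), keepdim=True)
    m_sd = memory_x.std(dim=(0, 2, 3), keepdim=True) + 1e-5
    s_mu = stream_x.mean(dim=(0, 2, 3), keepdim=True)
    s_sd = stream_x.std(dim=(0, 2, 3), keepdim=True) + 1e-5
    restyled = (memory_x - m_mu) / m_sd * s_sd + s_mu
    return (1 - alpha) * memory_x + alpha * restyled


def nt_xent(z1, z2, temperature=0.1):
    """Standard normalized-temperature cross-entropy over two augmented views."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), -1e9)  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)


# One unsupervised training step on a label-free stream batch (toy encoder and data).
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
opt = torch.optim.SGD(encoder.parameters(), lr=0.1)
buffer = ReservoirBuffer(capacity=200)

stream_batch = torch.rand(16, 3, 32, 32)                # placeholder for the incoming stream
buffer.add(stream_batch)
replayed = buffer.sample(16)
view1 = stream_dependent_augment(replayed, stream_batch)
view2 = replayed + 0.05 * torch.randn_like(replayed)    # generic second view
loss = nt_xent(encoder(view1), encoder(view2))
opt.zero_grad()
loss.backward()
opt.step()
```

In a full method the encoder would be a convolutional backbone and the second view would come from standard image augmentations; the point of the sketch is only to show where stream-dependent statistics could enter a replay-based contrastive pipeline.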
Related papers
- PIVOT: Prompting for Video Continual Learning [50.80141083993668]
We introduce PIVOT, a novel method that leverages extensive knowledge in pre-trained models from the image domain.
Our experiments show that PIVOT improves state-of-the-art methods by a significant 27% on the 20-task ActivityNet setup.
arXiv Detail & Related papers (2022-12-09T13:22:27Z) - Mitigating Forgetting in Online Continual Learning via Contrasting
Semantically Distinct Augmentations [22.289830907729705]
Online continual learning (OCL) aims to enable model learning from a non-stationary data stream to continuously acquire new knowledge as well as retain the learnt one.
The main challenge comes from the "catastrophic forgetting" issue: the inability to retain previously learnt knowledge while acquiring new knowledge.
arXiv Detail & Related papers (2022-11-10T05:29:43Z) - Continually Learning Self-Supervised Representations with Projected
Functional Regularization [39.92600544186844]
Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised methods.
These methods are unable to acquire new knowledge incrementally -- they are, in fact, mostly used only as a pre-training phase with IID data.
To prevent forgetting of previous knowledge, we propose the usage of functional regularization.
arXiv Detail & Related papers (2021-12-30T11:59:23Z) - Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z) - Progressive Stage-wise Learning for Unsupervised Feature Representation
Enhancement [83.49553735348577]
We propose the Progressive Stage-wise Learning (PSL) framework for unsupervised learning.
Our experiments show that PSL consistently improves results for the leading unsupervised learning methods.
arXiv Detail & Related papers (2021-06-10T07:33:19Z) - Continual Learning From Unlabeled Data Via Deep Clustering [7.704949298975352]
Continual learning aims to learn new tasks incrementally using fewer computation and memory resources, instead of retraining the model from scratch whenever a new task arrives.
We introduce a new framework that makes continual learning feasible in unsupervised mode by using pseudo-labels obtained from cluster assignments to update the model.
arXiv Detail & Related papers (2021-04-14T23:46:17Z) - A Survey on Contrastive Self-supervised Learning [0.0]
Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets.
Contrastive learning has recently become a dominant component in self-supervised learning methods for computer vision, natural language processing (NLP), and other domains.
This paper provides an extensive review of self-supervised methods that follow the contrastive approach.
arXiv Detail & Related papers (2020-10-31T21:05:04Z) - Bilevel Continual Learning [76.50127663309604]
We present a novel continual learning framework named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z) - Self-supervised Video Object Segmentation [76.83567326586162]
The objective of this paper is self-supervised representation learning, with the goal of solving semi-supervised video object segmentation (a.k.a. dense tracking).
We make the following contributions: (i) we propose to improve the existing self-supervised approach with a simple yet more effective memory mechanism for long-term correspondence matching; (ii) by augmenting the self-supervised approach with an online adaptation module, our method successfully alleviates tracker drift caused by spatio-temporal discontinuities; and (iii) we demonstrate state-of-the-art results among self-supervised approaches on DAVIS-2017 and YouTube-VOS.
arXiv Detail & Related papers (2020-06-22T17:55:59Z) - Learning by Analogy: Reliable Supervision from Transformations for
Unsupervised Optical Flow Estimation [83.23707895728995]
Unsupervised learning of optical flow has emerged as a promising alternative to supervised methods.
We present a framework to use more reliable supervision from transformations.
Our method consistently achieves a significant performance gain on several benchmarks, with the best accuracy among deep unsupervised methods.
arXiv Detail & Related papers (2020-03-29T14:55:24Z)