Reducing Representation Drift in Online Continual Learning
- URL: http://arxiv.org/abs/2104.05025v1
- Date: Sun, 11 Apr 2021 15:19:30 GMT
- Title: Reducing Representation Drift in Online Continual Learning
- Authors: Lucas Caccia, Rahaf Aljundi, Tinne Tuytelaars, Joelle Pineau, Eugene
Belilovsky
- Abstract summary: We study the online continual learning paradigm, where agents must learn from a changing distribution with constrained memory and compute.
In this work we instead focus on the change in representations of previously observed data due to the introduction of previously unobserved class samples in the incoming data stream.
- Score: 87.71558506591937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the online continual learning paradigm, where agents must learn from
a changing distribution with constrained memory and compute. Previous work
often tackles catastrophic forgetting by counteracting changes in the space of
model parameters. In this work we instead focus on the change in
representations of previously observed data due to the introduction of
previously unobserved class samples in the incoming data stream. We highlight
the issues that arise in the practical setting where new classes must be
distinguished from all previous classes. Starting from a popular approach,
experience replay, we consider a metric learning based loss function, the
triplet loss, which allows us to more explicitly constrain the behavior of
representations. We hypothesize and empirically confirm that the selection of
negatives used in the triplet loss plays a major role in the representation
change, or drift, of previously observed data, and that this drift can be
greatly reduced by appropriate negative selection. Motivated by this, we
further introduce a simple
adjustment to the standard cross entropy loss used in prior experience replay
that achieves a similar effect. Our approach greatly improves the performance
of experience replay and obtains state-of-the-art results on several existing
benchmarks
in online continual learning, while remaining efficient in both memory and
compute.
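To make the two loss terms above concrete, the following PyTorch-style sketch is offered as a rough illustration only, not the authors' released implementation. It shows (i) a triplet loss computed solely on the incoming batch, so that replayed old-class embeddings are never used as negatives and receive no gradient from the new-class loss (one plausible instantiation of "appropriate negative selection"), and (ii) a cross-entropy variant in which incoming samples compete only among the classes present in the incoming batch, in the spirit of the "simple adjustment" mentioned in the abstract. The `model` interface (returning features and logits) and all other names are assumptions.

```python
# Illustrative sketch only; assumes `model(x)` returns (features, logits)
# and that incoming (x_in, y_in) and replayed (x_buf, y_buf) batches are given.
import torch
import torch.nn.functional as F


def incoming_triplet_loss(model, x_in, y_in, margin=0.2):
    """Triplet loss restricted to the incoming batch: anchors, positives and
    negatives all come from the new data, so no gradient flows through
    replayed old-class embeddings (one way to limit their drift)."""
    feat, _ = model(x_in)
    feat = F.normalize(feat, dim=1)
    losses = []
    for i in range(feat.size(0)):
        pos_mask = y_in == y_in[i]
        pos_mask[i] = False
        neg_mask = y_in != y_in[i]
        if not pos_mask.any() or not neg_mask.any():
            continue
        # Closest positive and closest (hardest) negative within the batch.
        d_pos = (feat[i] - feat[pos_mask]).pow(2).sum(dim=1).min()
        d_neg = (feat[i] - feat[neg_mask]).pow(2).sum(dim=1).min()
        losses.append(F.relu(d_pos - d_neg + margin))
    return torch.stack(losses).mean() if losses else feat.sum() * 0.0


def masked_replay_ce(model, x_in, y_in, x_buf, y_buf):
    """Adjusted cross entropy: incoming samples compete only among the
    classes present in the incoming batch, while replayed samples use the
    full classification head over all seen classes."""
    _, logits_in = model(x_in)
    _, logits_buf = model(x_buf)
    present = torch.unique(y_in)
    mask = torch.full_like(logits_in, float("-inf"))
    mask[:, present] = 0.0  # keep only logits of currently present classes
    loss_incoming = F.cross_entropy(logits_in + mask, y_in)
    loss_replay = F.cross_entropy(logits_buf, y_buf)
    return loss_incoming + loss_replay
```

In a full training loop one would interleave these losses with reservoir-style buffer updates; the sketch only isolates the loss computations discussed in the abstract.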
Related papers
- Random Representations Outperform Online Continually Learned Representations [68.42776779425978]
We show that existing online continually trained deep networks produce inferior representations compared to a simple pre-defined random transform.
Our method, called RanDumb, significantly outperforms state-of-the-art continually learned representations across all online continual learning benchmarks.
Our study reveals the significant limitations of representation learning, particularly in low-exemplar and online continual learning scenarios.
arXiv Detail & Related papers (2024-02-13T22:07:29Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from previously experienced tasks when learning new tasks.
However, storing such data is often impractical due to memory constraints or data privacy concerns.
As a replacement, data-free replay methods have been proposed that synthesize samples by inverting the classification model (a generic sketch of this inversion idea appears after this list).
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- A Study of Forward-Forward Algorithm for Self-Supervised Learning [65.268245109828]
We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, its transfer performance lags significantly behind in all the studied settings.
arXiv Detail & Related papers (2023-09-21T10:14:53Z)
- Adaptive Cross Batch Normalization for Metric Learning [75.91093210956116]
Metric learning is a fundamental problem in computer vision.
We show that, beyond accumulating embeddings across batches, it is equally important to ensure that the accumulated embeddings are kept up to date.
In particular, it is necessary to circumvent the representational drift between the accumulated embeddings and the feature embeddings at the current training iteration.
arXiv Detail & Related papers (2023-03-30T03:22:52Z)
- Probing Representation Forgetting in Supervised and Unsupervised Continual Learning [14.462797749666992]
Catastrophic forgetting is associated with an abrupt loss of knowledge previously learned by a model.
We show that studying representation forgetting can lead to new insights into the effect of model capacity and the choice of loss function in continual learning.
arXiv Detail & Related papers (2022-03-24T23:06:08Z)
- New Insights on Reducing Abrupt Representation Change in Online Continual Learning [69.05515249097208]
We focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream.
We show that applying Experience Replay causes the newly added classes' representations to overlap significantly with those of the previous classes.
We propose a new method which mitigates this issue by shielding the learned representations from drastic adaptation to accommodate new classes.
arXiv Detail & Related papers (2022-03-08T01:37:00Z)
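The data-free replay entry above mentions inverting samples from the classification model. As a generic illustration of that inversion idea (not the cited paper's exact procedure), the sketch below optimizes random inputs so that a frozen, previously trained classifier assigns them to a chosen past class; `model` is assumed to map images directly to logits, and all shapes and hyperparameters are placeholders.

```python
# Generic model-inversion sketch; assumes `model` maps image tensors to logits.
# This is not the cited paper's exact procedure, only the basic idea of
# synthesizing replay inputs by optimizing them against a frozen classifier.
import torch
import torch.nn.functional as F


def invert_class_samples(model, target_class, num_samples=16,
                         image_shape=(3, 32, 32), steps=200, lr=0.1):
    model.eval()
    for p in model.parameters():  # keep the classifier frozen
        p.requires_grad_(False)
    # Treat the pixels themselves as the parameters being optimized.
    x = torch.randn(num_samples, *image_shape, requires_grad=True)
    labels = torch.full((num_samples,), target_class, dtype=torch.long)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        # Push the frozen classifier towards the target class, with a small
        # L2 prior keeping the synthesized images in a reasonable range.
        loss = F.cross_entropy(logits, labels) + 1e-3 * x.pow(2).mean()
        loss.backward()
        opt.step()
    return x.detach()
```

In practice, additional regularizers beyond the simple L2 prior (for example, matching batch-normalization statistics) are commonly used to make the synthesized samples more realistic.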
This list is automatically generated from the titles and abstracts of the papers on this site.