Feature Forgetting in Continual Representation Learning
- URL: http://arxiv.org/abs/2205.13359v1
- Date: Thu, 26 May 2022 13:38:56 GMT
- Title: Feature Forgetting in Continual Representation Learning
- Authors: Xiao Zhang, Dejing Dou, Ji Wu
- Abstract summary: representations do not suffer from "catastrophic forgetting" even in plain continual learning, but little more is known about their characteristics.
We devise a protocol for evaluating representations in continual learning, and then use it to present an overview of the basic trends of continual representation learning.
To study the feature forgetting problem, we create a synthetic dataset to identify and visualize the prevalence of feature forgetting in neural networks.
- Score: 48.89340526235304
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In continual and lifelong learning, good representation learning can help
increase performance and reduce sample complexity when learning new tasks.
There is evidence that representations do not suffer from "catastrophic
forgetting" even in plain continual learning, but little more is known about
their characteristics. In this paper, we aim to gain a deeper understanding
of representation learning in continual learning, especially of the feature
forgetting problem. We devise a protocol for evaluating representations in
continual learning, and then use it to present an overview of the basic trends
of continual representation learning, showing its consistent deficiency and
potential issues. To study the feature forgetting problem, we create a
synthetic dataset to identify and visualize the prevalence of feature
forgetting in neural networks. Finally, we propose a simple technique using
gating adapters to mitigate feature forgetting. We conclude by discussing that
improving representation learning benefits both old and new tasks in continual
learning.
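The abstract names gating adapters as the mitigation but gives no implementation details, so the following is only a hypothetical minimal sketch of one plausible formulation: a small per-task residual adapter whose contribution is scaled elementwise by a learned gate, so that new-task adaptation can be added on top of a shared representation without overwriting it. All names and the exact gating form here are assumptions, not the paper's method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class GatingAdapter:
    """Hypothetical per-task adapter: a ReLU bottleneck whose residual
    output is scaled elementwise by a learned gate in (0, 1)."""

    def __init__(self, dim, hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_down = rng.normal(0.0, 0.02, (dim, hidden))
        self.W_up = rng.normal(0.0, 0.02, (hidden, dim))
        # Gate logits start strongly negative, so the gate is nearly
        # closed and the adapter is close to an identity map at init.
        self.gate_logits = np.full(dim, -4.0)

    def __call__(self, x):
        h = np.maximum(x @ self.W_down, 0.0)  # ReLU bottleneck
        delta = h @ self.W_up                 # adapter residual
        g = sigmoid(self.gate_logits)         # elementwise gate
        return x + g * delta                  # gated residual connection

# With the gate nearly closed, features pass through almost unchanged,
# which is the property that would protect earlier tasks' features.
x = np.ones((2, 8))
adapter = GatingAdapter(dim=8, hidden=4)
y = adapter(x)
```

During training on a new task, only the adapter and gate parameters would be updated, letting the gate open where task-specific modulation is needed.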
Related papers
- Look-Ahead Selective Plasticity for Continual Learning of Visual Tasks [9.82510084910641]
We propose a new mechanism that operates at task boundaries, i.e., when one task finishes and another begins.
We evaluate the proposed methods on benchmark computer vision datasets including CIFAR10 and TinyImagenet.
arXiv Detail & Related papers (2023-11-02T22:00:23Z) - Advancing continual lifelong learning in neural information retrieval: definition, dataset, framework, and empirical evaluation [3.2340528215722553]
A systematic task formulation of continual neural information retrieval is presented.
A comprehensive continual neural information retrieval framework is proposed.
Empirical evaluations illustrate that the proposed framework can successfully prevent catastrophic forgetting in neural information retrieval.
arXiv Detail & Related papers (2023-08-16T14:01:25Z) - Knowledge Accumulation in Continually Learned Representations and the Issue of Feature Forgetting [34.402943976801424]
We study the coexistence of two phenomena that affect the quality of continually learned representations: knowledge accumulation and feature forgetting.
We show that even though forgetting in the representation can be small in absolute terms, it tends to be just as catastrophic as forgetting at the output level.
arXiv Detail & Related papers (2023-04-03T12:45:52Z) - Understanding Self-Predictive Learning for Reinforcement Learning [61.62067048348786]
We study the learning dynamics of self-predictive learning for reinforcement learning.
We propose a novel self-predictive algorithm that learns two representations simultaneously.
arXiv Detail & Related papers (2022-12-06T20:43:37Z) - An Empirical Investigation of Representation Learning for Imitation [76.48784376425911]
Recent work in vision, reinforcement learning, and NLP has shown that auxiliary representation learning objectives can reduce the need for large amounts of expensive, task-specific data.
We propose a modular framework for constructing representation learning algorithms, then use our framework to evaluate the utility of representation learning for imitation.
arXiv Detail & Related papers (2022-05-16T11:23:42Z) - Probing Representation Forgetting in Supervised and Unsupervised
Continual Learning [14.462797749666992]
Catastrophic forgetting is associated with an abrupt loss of knowledge previously learned by a model.
We show that representation forgetting can lead to new insights on the effect of model capacity and loss function used in continual learning.
arXiv Detail & Related papers (2022-03-24T23:06:08Z) - Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment for learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z) - Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z) - Toward Understanding the Feature Learning Process of Self-supervised
Contrastive Learning [43.504548777955854]
We study how contrastive learning learns the feature representations for neural networks by analyzing its feature learning process.
We prove that contrastive learning using ReLU networks provably learns the desired sparse features if proper augmentations are adopted.
arXiv Detail & Related papers (2021-05-31T16:42:09Z) - Reward Propagation Using Graph Convolutional Networks [61.32891095232801]
We propose a new framework for learning potential functions by leveraging ideas from graph representation learning.
Our approach relies on Graph Convolutional Networks which we use as a key ingredient in combination with the probabilistic inference view of reinforcement learning.
arXiv Detail & Related papers (2020-10-06T04:38:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.