On Learning the Geodesic Path for Incremental Learning
- URL: http://arxiv.org/abs/2104.08572v1
- Date: Sat, 17 Apr 2021 15:26:34 GMT
- Title: On Learning the Geodesic Path for Incremental Learning
- Authors: Christian Simon, Piotr Koniusz, Mehrtash Harandi
- Abstract summary: Neural networks notoriously suffer from the problem of catastrophic forgetting, the phenomenon of forgetting the past knowledge when acquiring new knowledge.
Overcoming catastrophic forgetting is of significant importance to emulate the process of "incremental learning".
State-of-the-art techniques for incremental learning use knowledge distillation to prevent catastrophic forgetting.
- Score: 38.222736913855115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks notoriously suffer from the problem of catastrophic
forgetting, the phenomenon of forgetting the past knowledge when acquiring new
knowledge. Overcoming catastrophic forgetting is of significant importance to
emulate the process of "incremental learning", where the model is capable of
learning from sequential experience in an efficient and robust way.
State-of-the-art techniques for incremental learning use knowledge
distillation to prevent catastrophic forgetting. Therein, one updates
the network while ensuring that the network's responses to previously seen
concepts remain stable throughout updates. This in practice is done by
minimizing the dissimilarity between current and previous responses of the
network one way or another. Our work contributes a novel method to the arsenal
of distillation techniques. In contrast to the previous state of the art, we
propose to first construct low-dimensional manifolds for previous and current
responses and then minimize the dissimilarity between the responses along the
geodesic connecting the manifolds. This induces a more effective knowledge
distillation with smoothness properties that preserves past knowledge more
efficiently, as observed in our comprehensive empirical study.
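As a rough sketch of the core idea (not the paper's exact objective: the subspace dimension `k`, the response matrices, and the loss form below are illustrative assumptions), one can extract low-dimensional response subspaces via SVD and distill along the Grassmannian geodesic connecting them:

```python
import numpy as np

def subspace_basis(responses, k):
    """Orthonormal basis (d, k) for the top-k subspace of a (d, n) response matrix."""
    U, _, _ = np.linalg.svd(responses, full_matrices=False)
    return U[:, :k]

def grassmann_geodesic(P0, P1, t):
    """Orthonormal basis for the point at parameter t in [0, 1] on the
    Grassmannian geodesic from span(P0) to span(P1), via principal angles."""
    U, s, Vt = np.linalg.svd(P0.T @ P1)
    s = np.clip(s, -1.0, 1.0)
    theta = np.arccos(s)                     # principal angles between the subspaces
    sin_theta = np.sin(theta)
    safe = np.where(sin_theta > 1e-12, sin_theta, 1.0)
    # Orthonormal directions pointing from span(P0) toward span(P1)
    Q = (P1 @ Vt.T - P0 @ U * s) / safe
    return P0 @ U * np.cos(t * theta) + Q * np.sin(t * theta)

def geodesic_distillation_loss(F_old, F_new, k=8, steps=5):
    """Mean squared mismatch of old/new responses projected onto subspaces
    sampled along the connecting geodesic (illustrative loss, not the paper's)."""
    P0 = subspace_basis(F_old, k)
    P1 = subspace_basis(F_new, k)
    loss = 0.0
    for t in np.linspace(0.0, 1.0, steps):
        Pt = grassmann_geodesic(P0, P1, t)
        loss += np.sum((Pt.T @ F_old - Pt.T @ F_new) ** 2)
    return loss / steps
```

Here the geodesic is the standard one on the Grassmann manifold; in training one would compute such a loss on network responses and backpropagate through it, whereas this sketch uses fixed matrices.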
Related papers
- Continual Learning via Manifold Expansion Replay [36.27348867557826]
Catastrophic forgetting is a major challenge to continual learning.
We propose a novel replay strategy called Manifold Expansion Replay (MaER).
We show that the proposed method significantly improves accuracy in the continual learning setup, outperforming the state of the art.
arXiv Detail & Related papers (2023-10-12T05:09:27Z)
- Subspace Distillation for Continual Learning [27.22147868163214]
We propose a knowledge distillation technique that takes into account the manifold structure of a neural network in learning novel tasks.
We demonstrate that the modeling with subspaces provides several intriguing properties, including robustness to noise.
Empirically, we observe that our proposed method outperforms various continual learning methods on several challenging datasets.
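As a loose illustration of what distilling through subspaces can mean (a sketch under assumed notation, not this paper's exact formulation), a common dissimilarity between two response subspaces is the projection metric over their principal angles:

```python
import numpy as np

def projection_metric(P, Q):
    """Projection (chordal) distance between span(P) and span(Q),
    for (d, k) orthonormal bases P and Q: sqrt(sum_i sin^2(theta_i))."""
    cosines = np.clip(np.linalg.svd(P.T @ Q, compute_uv=False), -1.0, 1.0)
    return np.sqrt(max(P.shape[1] - np.sum(cosines ** 2), 0.0))
```

Because the distance depends only on the subspaces and not on the individual basis vectors, small perturbations of the responses that leave the span intact do not change it, which is one way to interpret the robustness-to-noise claim.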
arXiv Detail & Related papers (2023-07-31T05:59:09Z)
- Online Continual Learning via the Knowledge Invariant and Spread-out Properties [4.109784267309124]
A key challenge in continual learning is catastrophic forgetting.
We propose a new method, named Online Continual Learning via the Knowledge Invariant and Spread-out Properties (OCLKISP).
We empirically evaluate our proposed method on four popular continual learning benchmarks: Split CIFAR-100, Split SVHN, Split CUB200, and Split Tiny-ImageNet.
arXiv Detail & Related papers (2023-02-02T04:03:38Z)
- Adaptively Integrated Knowledge Distillation and Prediction Uncertainty for Continual Learning [71.43841235954453]
Current deep learning models often suffer from catastrophic forgetting of old knowledge when continually learning new knowledge.
Existing strategies to alleviate this issue often fix the trade-off between keeping old knowledge (stability) and learning new knowledge (plasticity).
arXiv Detail & Related papers (2023-01-18T05:36:06Z)
- Uncertainty-aware Contrastive Distillation for Incremental Semantic Segmentation [46.14545656625703]
Catastrophic forgetting is the tendency of neural networks to fail to preserve the knowledge acquired from old tasks when learning new tasks.
We propose a novel distillation framework, Uncertainty-aware Contrastive Distillation (method).
Our results demonstrate the advantage of the proposed distillation technique, which can be used in synergy with previous IL approaches.
arXiv Detail & Related papers (2022-03-26T15:32:12Z)
- Relational Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship [54.73817402934303]
We propose Experience Continual Replay (ERR), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity trade-off.
ERR can consistently improve the performance of all baselines and surpass current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z)
- Preserving Earlier Knowledge in Continual Learning with the Help of All Previous Feature Extractors [63.21036904487014]
Continual learning of new knowledge over time is one desirable capability for intelligent systems to recognize more and more classes of objects.
We propose a simple yet effective fusion mechanism by including all the previously learned feature extractors into the intelligent model.
Experiments on multiple classification tasks show that the proposed approach can effectively reduce the forgetting of old knowledge, achieving state-of-the-art continual learning performance.
arXiv Detail & Related papers (2021-04-28T07:49:24Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve the performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Understanding the Role of Training Regimes in Continual Learning [51.32945003239048]
Catastrophic forgetting affects the training of neural networks, limiting their ability to learn multiple tasks sequentially.
We study the effect of dropout, learning rate decay, and batch size, on forming training regimes that widen the tasks' local minima.
arXiv Detail & Related papers (2020-06-12T06:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.