Split-and-Bridge: Adaptable Class Incremental Learning within a Single
Neural Network
- URL: http://arxiv.org/abs/2107.01349v1
- Date: Sat, 3 Jul 2021 05:51:53 GMT
- Title: Split-and-Bridge: Adaptable Class Incremental Learning within a Single
Neural Network
- Authors: Jong-Yeong Kim and Dong-Wan Choi
- Abstract summary: Continual learning is a major problem in the deep learning community.
In this paper, we propose a novel continual learning method, called Split-and-Bridge.
- Score: 0.20305676256390928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual learning has been a major problem in the deep learning community,
where the main challenge is how to effectively learn a series of newly arriving
tasks without forgetting the knowledge of previous tasks. Initiated by Learning
without Forgetting (LwF), many of the existing works report that knowledge
distillation is effective to preserve the previous knowledge, and hence they
commonly use a soft label for the old task, namely a knowledge distillation
(KD) loss, together with a class label for the new task, namely a cross entropy
(CE) loss, to form a composite loss for a single neural network. However, this
approach hinders learning through the CE loss, as the KD loss often dominates
the objective function when the two compete within a single network. This can
be a critical problem particularly in the class incremental scenario, where the
knowledge across tasks as well as within the new task, both of which can only
be acquired through the CE loss, is essential to learn due to the existence of
a unified classifier. In
this paper, we propose a novel continual learning method, called
Split-and-Bridge, which can successfully address the above problem by partially
splitting a neural network into two partitions for training the new task
separated from the old task and re-connecting them for learning the knowledge
across tasks. In our thorough experimental analysis, our Split-and-Bridge
method outperforms the state-of-the-art competitors in KD-based continual
learning.
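The composite loss the abstract refers to is the LwF-style combination of a cross-entropy (CE) term for the new task and a knowledge distillation (KD) term that matches the frozen old model's soft outputs on the old classes. Below is a minimal PyTorch-style sketch of that baseline objective; the weighting lambda_kd, the temperature T, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def composite_loss(new_logits, old_logits, targets, n_old_classes,
                   lambda_kd=1.0, T=2.0):
    """CE on the new task's labels + KD against the frozen old model's logits.

    new_logits: outputs of the current model over all (old + new) classes.
    old_logits: outputs of the frozen old model over the old classes.
    Hyperparameters lambda_kd and T are illustrative, not from the paper.
    """
    # Cross-entropy loss over all classes for the current (new-task) labels.
    ce = F.cross_entropy(new_logits, targets)

    # KD loss: soften both models' logits for the old classes with
    # temperature T and match them (standard Hinton-style distillation).
    p_old = F.log_softmax(new_logits[:, :n_old_classes] / T, dim=1)
    q_old = F.softmax(old_logits[:, :n_old_classes] / T, dim=1)
    kd = F.kl_div(p_old, q_old, reduction="batchmean") * (T * T)

    # The abstract's point: when ce and kd compete inside one network, kd
    # often dominates, suppressing the CE-driven learning of new-task and
    # cross-task knowledge.
    return ce + lambda_kd * kd
```

Split-and-Bridge's remedy, as described in the abstract, is to avoid this direct competition: the network is partially split into two partitions so that the new task is first trained apart from the KD-constrained old partition, and the partitions are then re-connected (bridged) so that knowledge across tasks can be learned.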
Related papers
- Class-Incremental Few-Shot Event Detection [68.66116956283575]
This paper proposes a new task, called class-incremental few-shot event detection.
This task faces two problems, i.e., old knowledge forgetting and new class overfitting.
To solve these problems, this paper presents a novel knowledge distillation and prompt learning based method, called Prompt-KD.
arXiv Detail & Related papers (2024-04-02T09:31:14Z)
- Negotiated Representations to Prevent Forgetting in Machine Learning Applications [0.0]
Catastrophic forgetting is a significant challenge in the field of machine learning.
We propose a novel method for preventing catastrophic forgetting in machine learning applications.
arXiv Detail & Related papers (2023-11-30T22:43:50Z)
- Dense Network Expansion for Class Incremental Learning [61.00081795200547]
State-of-the-art approaches use a dynamic architecture based on network expansion (NE), in which a task expert is added per task.
A new NE method, dense network expansion (DNE), is proposed to achieve a better trade-off between accuracy and model complexity.
It outperforms the previous SOTA methods by a margin of 4% in terms of accuracy, with similar or even smaller model scale.
arXiv Detail & Related papers (2023-03-22T16:42:26Z)
- Online Continual Learning via the Knowledge Invariant and Spread-out Properties [4.109784267309124]
A key challenge in continual learning is catastrophic forgetting.
We propose a new method, named Online Continual Learning via the Knowledge Invariant and Spread-out Properties (OCLKISP).
We empirically evaluate our proposed method on four popular benchmarks for continual learning: Split CIFAR 100, Split SVHN, Split CUB200 and Split Tiny-Image-Net.
arXiv Detail & Related papers (2023-02-02T04:03:38Z)
- Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer [39.99577526417276]
In continual learning (CL), an agent can improve the learning performance of both a new task and old tasks.
Most existing CL methods focus on addressing catastrophic forgetting in neural networks by minimizing the modification of the learnt model for old tasks.
We propose a new CL method with Backward knowlEdge tRansfer (CUBER) for a fixed capacity neural network without data replay.
arXiv Detail & Related papers (2022-11-01T23:55:51Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
It is theoretically analyzed that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Learning with Recoverable Forgetting [77.56338597012927]
Learning wIth Recoverable Forgetting explicitly handles the task- or sample-specific knowledge removal and recovery.
Specifically, LIRF brings in two innovative schemes, namely knowledge deposit and withdrawal.
We conduct experiments on several datasets, and demonstrate that the proposed LIRF strategy yields encouraging results with gratifying generalization capability.
arXiv Detail & Related papers (2022-07-17T16:42:31Z)
- Relational Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship [54.73817402934303]
We propose Experience Continual Replay (ERR), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity trade-off.
ERR can consistently improve the performance of all baselines and surpass current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z)
- Center Loss Regularization for Continual Learning [0.0]
In general, neural networks lack the ability to learn different tasks sequentially.
Our approach remembers old tasks by projecting the representations of new tasks close to those of old tasks.
We demonstrate that our approach is scalable, effective, and gives competitive performance compared to state-of-the-art continual learning methods.
arXiv Detail & Related papers (2021-10-21T17:46:44Z)
- Contrast R-CNN for Continual Learning in Object Detection [13.79299067527118]
We propose a new scheme for continual learning of object detection, namely Contrast R-CNN.
arXiv Detail & Related papers (2021-07-11T14:09:10Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.