Rethinking Task-Incremental Learning Baselines
- URL: http://arxiv.org/abs/2205.11367v1
- Date: Mon, 23 May 2022 14:52:38 GMT
- Title: Rethinking Task-Incremental Learning Baselines
- Authors: Md Sazzad Hossain, Pritom Saha, Townim Faisal Chowdhury, Shafin
Rahman, Fuad Rahman, Nabeel Mohammed
- Abstract summary: We present a simple yet effective adjustment network (SAN) for task incremental learning that achieves near state-of-the-art performance.
We investigate this approach on both 3D point cloud object (ModelNet40) and 2D image (CIFAR10, CIFAR100, MiniImageNet, MNIST, PermutedMNIST, notMNIST, SVHN, and FashionMNIST) recognition tasks.
- Score: 5.771817160915079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real-world applications, it is common to have continuous streams
of new data that need to be introduced into the system. The model needs to
learn newly added capabilities (future tasks) while retaining the old knowledge
(past tasks). Incremental learning has recently become increasingly appealing
for this problem. Task-incremental learning is a kind of incremental learning
in which the task identity of a newly included task (a set of classes) remains
known during inference. A common goal of task-incremental methods is to design
a network of minimal size that still maintains decent performance. To manage
the stability-plasticity dilemma, different methods rely on a replay memory of
past tasks, specialized hardware, regularization monitoring, etc. However,
these methods remain memory inefficient in terms of architecture growth or
input data costs. In this study, we present a simple yet effective adjustment
network (SAN) for task-incremental learning that achieves near state-of-the-art
performance while using a minimal architectural size and storing no memory
instances, compared to previous state-of-the-art approaches. We investigate
this approach on both 3D point cloud object (ModelNet40) and 2D image (CIFAR10,
CIFAR100, MiniImageNet, MNIST, PermutedMNIST, notMNIST, SVHN, and FashionMNIST)
recognition tasks and establish a strong baseline result for a fair comparison
with existing methods. In both 2D and 3D domains, we also observe that SAN is
largely unaffected by different task orders in the task-incremental setting.
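As an illustration of the task-incremental setting described in the abstract, the following minimal PyTorch sketch assumes a shared backbone with one small head per task; it is not the authors' SAN implementation, only a hedged example of how a known task identity selects the matching task-specific head at inference.

```python
# Minimal task-incremental sketch (illustrative only, not the paper's SAN):
# a shared backbone plus one small head per task; the known task id picks the head.
import torch
import torch.nn as nn


class TaskIncrementalNet(nn.Module):
    def __init__(self, feature_dim: int = 128):
        super().__init__()
        self.feature_dim = feature_dim
        # Shared feature extractor reused by every task (kept small on purpose).
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(feature_dim), nn.ReLU()
        )
        self.heads = nn.ModuleList()  # one lightweight classifier per task

    def add_task(self, num_classes: int) -> int:
        """Register a new task head and return its task id."""
        self.heads.append(nn.Linear(self.feature_dim, num_classes))
        return len(self.heads) - 1

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # Task identity is available at inference in task-incremental learning,
        # so features are simply routed through the corresponding head.
        return self.heads[task_id](self.backbone(x))


model = TaskIncrementalNet()
t0 = model.add_task(num_classes=10)   # e.g. a CIFAR10-sized first task
t1 = model.add_task(num_classes=5)    # a later task with 5 classes
logits = model(torch.randn(4, 3, 32, 32), task_id=t1)
print(logits.shape)  # torch.Size([4, 5])
```

Because only the small heads grow with the number of tasks, this kind of design keeps the architectural footprint close to a single-task model, which is the design goal the abstract describes.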
Related papers
- Dense Network Expansion for Class Incremental Learning [61.00081795200547]
State-of-the-art approaches use a dynamic architecture based on network expansion (NE), in which a task expert is added per task.
A new NE method, dense network expansion (DNE), is proposed to achieve a better trade-off between accuracy and model complexity.
It outperforms the previous SOTA methods by a margin of 4% in terms of accuracy, with similar or even smaller model scale.
arXiv Detail & Related papers (2023-03-22T16:42:26Z) - Task-Adaptive Saliency Guidance for Exemplar-free Class Incremental Learning [60.501201259732625]
We introduce task-adaptive saliency for EFCIL and propose a new framework, which we call Task-Adaptive Saliency Supervision (TASS).
Our experiments demonstrate that our method can better preserve saliency maps across tasks and achieve state-of-the-art results on the CIFAR-100, Tiny-ImageNet, and ImageNet-Subset EFCIL benchmarks.
arXiv Detail & Related papers (2022-12-16T02:43:52Z) - Neural Weight Search for Scalable Task Incremental Learning [6.413209417643468]
Task incremental learning aims to enable a system to maintain its performance on previously learned tasks while learning new tasks, solving the problem of catastrophic forgetting.
One promising approach is to build an individual network or sub-network for future tasks.
This leads to ever-growing memory due to saving extra weights for new tasks, and how to address this issue remains an open problem in task-incremental learning.
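To make the memory concern concrete, here is a toy parameter-count comparison; the numbers are assumptions chosen for illustration, not figures from the paper.

```python
# Toy parameter-count comparison (illustrative numbers, not from the paper):
# saving a full network per task vs. a shared backbone with small per-task heads.
BACKBONE_PARAMS = 11_000_000   # e.g. a ResNet-18-sized feature extractor
HEAD_PARAMS = 5_000            # a small task-specific classifier
NUM_TASKS = 20

full_copy_per_task = NUM_TASKS * (BACKBONE_PARAMS + HEAD_PARAMS)
shared_backbone = BACKBONE_PARAMS + NUM_TASKS * HEAD_PARAMS

print(f"full copy per task : {full_copy_per_task:,} parameters")
print(f"shared backbone    : {shared_backbone:,} parameters")
# The first strategy grows by a backbone-sized step for every new task;
# the second grows only by the (much smaller) head size per task.
```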
arXiv Detail & Related papers (2022-11-24T23:30:23Z) - Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z) - Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
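As a rough sketch of the general idea of adapting only a small, task-specific subset of layers: the model, layer choice, and head size below are assumptions for illustration, and TAPS itself learns which layers to modify rather than fixing them by hand as done here.

```python
# Sketch of per-task partial fine-tuning: freeze a base model and train only a
# hand-picked subset of layers for the new task. (Illustrative only: TAPS
# learns the subset adaptively; here it is fixed for simplicity.)
import torch.nn as nn
from torchvision.models import resnet18

# Base model shared across tasks (pretrained weights could be loaded instead).
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 5)  # fresh 5-class head for the new task

# Freeze everything, then unfreeze a small, task-specific subset of layers.
for param in model.parameters():
    param.requires_grad = False
for module in (model.layer4, model.fc):  # hypothetical hand-picked subset
    for param in module.parameters():
        param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} parameters")
```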
arXiv Detail & Related papers (2022-03-30T23:16:07Z) - Relational Experience Replay: Continual Learning by Adaptively Tuning
Task-wise Relationship [54.73817402934303]
We propose Relational Experience Replay (RER), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity trade-off.
RER can consistently improve the performance of all baselines and surpass current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z) - Center Loss Regularization for Continual Learning [0.0]
In general, neural networks lack the ability to learn different tasks sequentially.
Our approach remembers old tasks by projecting the representations of new tasks close to those of old tasks.
We demonstrate that our approach is scalable, effective, and gives competitive performance compared to state-of-the-art continual learning methods.
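As a hedged sketch of the "pull new representations toward old ones" idea, the snippet below uses the standard center-loss form as an assumption; the exact regularizer in the paper may differ.

```python
# Center-loss-style regularizer sketch: penalize the distance between each
# sample's embedding and the stored center of its class. (Standard center-loss
# form used as an assumption; details may differ from the paper.)
import torch


def center_loss(embeddings: torch.Tensor,
                labels: torch.Tensor,
                centers: torch.Tensor) -> torch.Tensor:
    """embeddings: (N, D); labels: (N,); centers: (C, D) kept from old tasks."""
    return ((embeddings - centers[labels]) ** 2).sum(dim=1).mean()


# Usage: total loss = cross-entropy on the new task + lambda * center term,
# which keeps new-task representations close to the stored old-task centers.
emb = torch.randn(8, 64)
lab = torch.randint(0, 10, (8,))
ctr = torch.randn(10, 64)
print(center_loss(emb, lab, ctr))
```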
arXiv Detail & Related papers (2021-10-21T17:46:44Z) - GROWN: GRow Only When Necessary for Continual Learning [39.56829374809613]
Catastrophic forgetting is a notorious issue in deep learning, referring to the fact that Deep Neural Networks (DNN) could forget the knowledge about earlier tasks when learning new tasks.
To address this issue, continual learning has been developed to learn new tasks sequentially and perform knowledge transfer from the old tasks to the new ones without forgetting.
GROWN is a novel end-to-end continual learning framework to dynamically grow the model only when necessary.
arXiv Detail & Related papers (2021-10-03T02:31:04Z) - Reparameterizing Convolutions for Incremental Multi-Task Learning
without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which have been shown to significantly degrade single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z) - SpaceNet: Make Free Space For Continual Learning [15.914199054779438]
We propose a novel architecture-based method, referred to as SpaceNet, for the class-incremental learning scenario.
SpaceNet trains sparse deep neural networks from scratch in an adaptive way that compresses the sparse connections of each task in a compact number of neurons.
Experimental results show the robustness of our proposed method against catastrophic forgetting of old tasks and the efficiency of SpaceNet in utilizing the available capacity of the model.
arXiv Detail & Related papers (2020-07-15T11:21:31Z) - Continual Learning Using Multi-view Task Conditional Neural Networks [6.27221711890162]
Conventional deep learning models have limited capacity in learning multiple tasks sequentially.
We propose Multi-view Task Conditional Neural Networks (Mv-TCNN), which do not require the reoccurring tasks to be known in advance.
arXiv Detail & Related papers (2020-05-08T01:03:30Z)