iTAML: An Incremental Task-Agnostic Meta-learning Approach
- URL: http://arxiv.org/abs/2003.11652v1
- Date: Wed, 25 Mar 2020 21:42:48 GMT
- Title: iTAML: An Incremental Task-Agnostic Meta-learning Approach
- Authors: Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan,
Mubarak Shah
- Abstract summary: Humans can continuously learn new knowledge as their experience grows.
In contrast, previously learned knowledge in deep neural networks can quickly fade out when the networks are trained on a new task.
We introduce a novel meta-learning approach that seeks to maintain an equilibrium between all encountered tasks.
- Score: 123.10294801296926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans can continuously learn new knowledge as their experience grows. In
contrast, previous learning in deep neural networks can quickly fade out when
they are trained on a new task. In this paper, we hypothesize that this problem can
be avoided by learning a set of generalized parameters that are specific neither
to the old nor to the new tasks. In this pursuit, we introduce a novel
meta-learning approach that seeks to maintain an equilibrium between all the
encountered tasks. This is ensured by a new meta-update rule which avoids
catastrophic forgetting. In comparison to previous meta-learning techniques,
our approach is task-agnostic. When presented with a continuum of data, our
model automatically identifies the task and quickly adapts to it with just a
single update. We perform extensive experiments on five datasets in a
class-incremental setting, leading to significant improvements over
state-of-the-art methods (e.g., a 21.3% boost on CIFAR100 with 10 incremental tasks).
Specifically, on large-scale datasets that generally prove difficult cases for
incremental learning, our approach delivers absolute gains as high as 19.1% and
7.4% on ImageNet and MS-Celeb datasets, respectively.
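The core mechanism described above is a meta-update rule that keeps one set of shared parameters in equilibrium across all encountered tasks, so that a single inner-loop update is enough to adapt to whichever task the incoming data belongs to. The snippet below is a minimal, hypothetical sketch of that idea in the spirit of Reptile-style averaging of per-task adaptations; the toy quadratic losses and all function names are illustrative assumptions, not the authors' actual iTAML update rule.

```python
# Minimal sketch (NOT the exact iTAML rule): a Reptile-style meta-update that
# moves the shared parameters toward the average of per-task inner-loop
# adaptations, giving every encountered task equal weight. Toy quadratic
# losses stand in for the real per-task training objectives.
import numpy as np

def inner_adapt(theta, task_target, lr=0.1, steps=5):
    """Inner loop: a few gradient steps on one task's toy objective ||phi - target||^2."""
    phi = theta.copy()
    for _ in range(steps):
        grad = 2.0 * (phi - task_target)
        phi -= lr * grad
    return phi

def meta_update(theta, task_targets, meta_lr=0.5):
    """Outer loop: shift theta toward the mean of the task-adapted parameters."""
    adapted = [inner_adapt(theta, t) for t in task_targets]
    avg_direction = np.mean([phi - theta for phi in adapted], axis=0)
    return theta + meta_lr * avg_direction

rng = np.random.default_rng(0)
theta = np.zeros(4)
task_targets = [rng.normal(size=4) for _ in range(3)]  # three toy "tasks"
for _ in range(50):
    theta = meta_update(theta, task_targets)
print(theta, np.mean(task_targets, axis=0))  # theta settles near the task mean
```

Because every task contributes equally to the averaged direction, no single (e.g., most recent) task can pull the shared parameters entirely toward itself, which is the intuition behind the equilibrium the paper describes.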
Related papers
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and Imagenet-R datasets.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by one-hot labels hampers effective knowledge transfer across tasks.
Specifically, we use PLMs to generate semantic targets for each class, which are frozen and serve as supervision signals.
arXiv Detail & Related papers (2024-03-24T12:41:58Z)
- Incremental Meta-Learning via Episodic Replay Distillation for Few-Shot Image Recognition [43.44508415430047]
We consider the problem of Incremental Meta-Learning (IML) in which classes are presented incrementally in discrete tasks.
We propose an approach to IML, which we call Episodic Replay Distillation (ERD).
ERD mixes classes from the current task with class exemplars from previous tasks when sampling episodes for meta-learning (a toy sketch of this sampling idea appears after this list).
arXiv Detail & Related papers (2021-11-09T08:32:05Z)
- DIODE: Dilatable Incremental Object Detection [15.59425584971872]
Conventional deep learning models lack the capability of preserving previously learned knowledge.
We propose a dilatable incremental object detector (DIODE) for multi-step incremental detection tasks.
Our method achieves up to 6.4% performance improvement by increasing the number of parameters by just 1.2% for each newly learned task.
arXiv Detail & Related papers (2021-08-12T09:45:57Z)
- Continual Learning via Bit-Level Information Preserving [88.32450740325005]
We study the continual learning process through the lens of information theory.
We propose Bit-Level Information Preserving (BLIP) that preserves the information gain on model parameters.
BLIP achieves close to zero forgetting while only requiring constant memory overheads throughout continual learning.
arXiv Detail & Related papers (2021-05-10T15:09:01Z)
- Rectification-based Knowledge Retention for Continual Learning [49.1447478254131]
Deep learning models suffer from catastrophic forgetting when trained in an incremental learning setting.
We propose a novel approach to address the task incremental learning problem, which involves training a model on new tasks that arrive in an incremental manner.
Our approach can be used in both the zero-shot and non-zero-shot task incremental learning settings.
arXiv Detail & Related papers (2021-03-30T18:11:30Z)
- Meta-learning the Learning Trends Shared Across Tasks [123.10294801296926]
Gradient-based meta-learning algorithms excel at quick adaptation to new tasks with limited data.
Existing meta-learning approaches only depend on the current task information during the adaptation.
We propose a 'Path-aware' model-agnostic meta-learning approach.
arXiv Detail & Related papers (2020-10-19T08:06:47Z)
- Few Is Enough: Task-Augmented Active Meta-Learning for Brain Cell Classification [8.998976678920236]
We propose a tAsk-auGmented actIve meta-LEarning (AGILE) method to efficiently adapt Deep Neural Networks to new tasks.
AGILE combines a meta-learning algorithm with a novel task augmentation technique which we use to generate an initial adaptive model.
We show that the proposed task-augmented meta-learning framework can learn to classify new cell types after a single gradient step.
arXiv Detail & Related papers (2020-07-09T18:03:12Z)
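As noted in the Episodic Replay Distillation entry above, episodes can be sampled by mixing classes from the current task with class exemplars stored from previous tasks. The sketch below illustrates that sampling idea only; the function name, episode format, and n-way/k-shot parameters are hypothetical and not taken from the ERD paper.

```python
# Hypothetical sketch of exemplar-mixed episode sampling for meta-learning:
# each episode draws its classes from both the current task's data and a small
# exemplar memory of earlier tasks.
import random

def sample_episode(current_task_data, exemplar_memory, n_way=5, k_shot=2, seed=None):
    """current_task_data / exemplar_memory: dict mapping class_id -> list of samples."""
    rng = random.Random(seed)
    pool = list(current_task_data) + list(exemplar_memory)   # candidate classes
    classes = rng.sample(pool, k=min(n_way, len(pool)))
    episode = {}
    for c in classes:
        source = current_task_data.get(c) or exemplar_memory[c]
        episode[c] = rng.sample(source, k=min(k_shot, len(source)))
    return episode

# Toy usage: two current-task classes plus two exemplar classes from old tasks.
current = {c: [f"img_{c}_{i}" for i in range(20)] for c in ["cat", "dog"]}
memory  = {c: [f"img_{c}_{i}" for i in range(5)]  for c in ["car", "plane"]}
print(sample_episode(current, memory, n_way=3, k_shot=2, seed=0))
```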
This list is automatically generated from the titles and abstracts of the papers on this site.