XtarNet: Learning to Extract Task-Adaptive Representation for
Incremental Few-Shot Learning
- URL: http://arxiv.org/abs/2003.08561v2
- Date: Wed, 1 Jul 2020 07:08:02 GMT
- Title: XtarNet: Learning to Extract Task-Adaptive Representation for
Incremental Few-Shot Learning
- Authors: Sung Whan Yoon, Do-Yeon Kim, Jun Seo, Jaekyun Moon
- Abstract summary: We propose XtarNet, which learns to extract task-adaptive representation (TAR) for facilitating incremental few-shot learning.
The TAR contains effective information for classifying both novel and base categories.
XtarNet achieves state-of-the-art incremental few-shot learning performance.
- Score: 24.144499302568565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning novel concepts while preserving prior knowledge is a long-standing
challenge in machine learning. The challenge gets greater when a novel task is
given with only a few labeled examples, a problem known as incremental few-shot
learning. We propose XtarNet, which learns to extract task-adaptive
representation (TAR) for facilitating incremental few-shot learning. The method
utilizes a backbone network pretrained on a set of base categories while also
employing additional modules that are meta-trained across episodes. Given a new
task, the novel feature extracted from the meta-trained modules is mixed with
the base feature obtained from the pretrained model. The process of combining
two different features provides TAR and is also controlled by meta-trained
modules. The TAR contains effective information for classifying both novel and
base categories. The base and novel classifiers quickly adapt to a given task
by utilizing the TAR. Experiments on standard image datasets indicate that
XtarNet achieves state-of-the-art incremental few-shot learning performance.
The concept of TAR can also be used in conjunction with existing incremental
few-shot learning methods; extensive simulation results show that applying TAR
significantly enhances these existing methods.
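As a rough illustration of the mechanism described in the abstract, the sketch below (PyTorch) combines a frozen base feature from the pretrained backbone with a novel feature from a meta-trained module, using a meta-trained gating network to produce the task-adaptive representation, and then classifies queries against class prototypes built from support TARs. The module names, the sigmoid gating form, and the prototype head are assumptions made for this illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class TARExtractor(nn.Module):
    """Sketch of task-adaptive representation (TAR) extraction.

    Assumes both feature extractors map inputs to vectors of size `feat_dim`.
    The sigmoid gate and the module names are illustrative assumptions.
    """

    def __init__(self, backbone: nn.Module, meta_extractor: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone              # pretrained on base categories, kept frozen
        self.meta_extractor = meta_extractor  # meta-trained across episodes
        self.mixer = nn.Sequential(           # meta-trained module controlling the combination
            nn.Linear(2 * feat_dim, feat_dim),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():                 # the pretrained backbone stays fixed
            base_feat = self.backbone(x)
        novel_feat = self.meta_extractor(x)
        gate = self.mixer(torch.cat([base_feat, novel_feat], dim=-1))
        # TAR: element-wise mixture of the base and novel features
        return gate * base_feat + (1.0 - gate) * novel_feat


def prototype_logits(tar_query: torch.Tensor,
                     tar_support: torch.Tensor,
                     support_labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    """Metric-based head: class prototypes from support TARs, negative distance as logits.
    Assumes every class index in [0, num_classes) appears in the support set."""
    protos = torch.stack([tar_support[support_labels == c].mean(dim=0)
                          for c in range(num_classes)])
    return -torch.cdist(tar_query, protos)
```

In a joint base-plus-novel episode, one would compute TARs for the support set, build prototypes for both base and novel classes, and score query TARs with the head above; the paper's classifier adaptation is more elaborate than this plain prototype head.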
Related papers
- Complementary Learning Subnetworks for Parameter-Efficient
Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- Improving Feature Generalizability with Multitask Learning in Class Incremental Learning [12.632121107536843]
Many deep learning applications, like keyword spotting, require the incorporation of new concepts (classes) over time, referred to as Class Incremental Learning (CIL).
The major challenge in CIL is catastrophic forgetting: preserving as much of the old knowledge as possible while learning new tasks.
We propose multitask learning during base model training to improve the feature generalizability.
Our approach enhances the average incremental learning accuracy by up to 5.5%, which enables more reliable and accurate keyword spotting over time.
arXiv Detail & Related papers (2022-04-26T07:47:54Z)
- Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z)
- Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation [24.488427641442694]
We propose a novel conditional neural process-based approach for few-shot text classification.
Our key idea is to represent each task using gradient information from a base model; a hedged sketch of this idea appears after this list.
Our approach outperforms traditional fine-tuning, sequential transfer learning, and state-of-the-art meta-learning approaches.
arXiv Detail & Related papers (2022-01-27T15:29:30Z)
- MetaKernel: Learning Variational Random Features with Limited Labels [120.90737681252594]
Few-shot learning deals with the fundamental and challenging problem of learning from a few annotated samples, while being able to generalize well on new tasks.
We propose meta-learning kernels with random Fourier features for few-shot learning, which we call MetaKernel; a minimal random Fourier feature sketch appears after this list.
arXiv Detail & Related papers (2021-05-08T21:24:09Z)
- Meta-Regularization by Enforcing Mutual-Exclusiveness [0.8057006406834467]
We propose a regularization technique for meta-learning models that gives the model designer more control over the information flow during meta-training.
Our proposed regularization function shows an accuracy boost of approximately 36% on the Omniglot dataset.
arXiv Detail & Related papers (2021-01-24T22:57:19Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline, learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
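For the Grad2Task entry above, the following is a minimal sketch of representing a task by gradient information from a base model: it computes the base model's loss on the support set and flattens the gradient of a chosen parameter subset into a fixed-length task embedding. The restriction to a `classifier` attribute and the plain flattening are assumptions for illustration; the paper conditions a neural-process model on such gradient information rather than using it this directly.

```python
import torch
import torch.nn.functional as F


def gradient_task_embedding(base_model: torch.nn.Module,
                            support_x: torch.Tensor,
                            support_y: torch.Tensor) -> torch.Tensor:
    """Summarize a task by the gradient of the base model's support-set loss,
    taken w.r.t. a chosen parameter subset (here the final classifier layer,
    assumed to be exposed as `base_model.classifier`)."""
    logits = base_model(support_x)                     # forward pass on the support set
    loss = F.cross_entropy(logits, support_y)          # standard classification loss
    params = [p for p in base_model.classifier.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)          # gradients act as the task signature
    return torch.cat([g.flatten() for g in grads])     # fixed-length task embedding
```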
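For the MetaKernel entry above, the sketch below shows plain random Fourier features approximating an RBF kernel. MetaKernel meta-learns variational random features across episodes; this fixed-feature version only illustrates the underlying random Fourier feature mapping, and the lengthscale and feature count are arbitrary choices.

```python
import math
import torch


def random_fourier_features(x: torch.Tensor, num_features: int = 256,
                            lengthscale: float = 1.0) -> torch.Tensor:
    """Map inputs x of shape (N, d) to features phi(x) of shape (N, num_features)
    such that phi(x) @ phi(y).T approximates the RBF kernel
    exp(-||x - y||^2 / (2 * lengthscale^2))."""
    n, d = x.shape
    w = torch.randn(d, num_features) / lengthscale   # samples from the kernel's spectral density
    b = 2 * math.pi * torch.rand(num_features)       # random phases
    return math.sqrt(2.0 / num_features) * torch.cos(x @ w + b)
```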
This list is automatically generated from the titles and abstracts of the papers in this site.