Cognitively-Inspired Model for Incremental Learning Using a Few Examples
- URL: http://arxiv.org/abs/2002.12411v3
- Date: Thu, 30 Jul 2020 06:55:06 GMT
- Title: Cognitively-Inspired Model for Incremental Learning Using a Few Examples
- Authors: Ali Ayub and Alan Wagner
- Abstract summary: Incremental learning attempts to develop a classifier which learns continuously from a stream of data segregated into different classes.
Deep learning approaches suffer from catastrophic forgetting when learning classes incrementally, while most incremental learning approaches require a large amount of training data per class.
We propose a novel approach, inspired by the concept-learning model of the hippocampus and the neocortex, that represents each image class as a set of centroids and does not suffer from catastrophic forgetting.
- Score: 11.193504036335503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incremental learning attempts to develop a classifier which learns
continuously from a stream of data segregated into different classes. Deep
learning approaches suffer from catastrophic forgetting when learning classes
incrementally, while most incremental learning approaches require a large
amount of training data per class. We examine the problem of incremental
learning using only a few training examples, referred to as Few-Shot
Incremental Learning (FSIL). To solve this problem, we propose a novel approach
inspired by the concept-learning model of the hippocampus and the neocortex
that represents each image class as a set of centroids and does not suffer from
catastrophic forgetting. We evaluate our approach on three class-incremental
learning benchmarks, Caltech-101, CUB-200-2011, and CIFAR-100, under both
incremental and few-shot incremental learning, and show that it achieves
state-of-the-art classification accuracy over all learned classes.
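To make the centroid representation concrete, the sketch below implements plain nearest-centroid classification over a stream of classes, assuming image features come from a fixed pre-trained extractor. The class names, feature dimension, and one-centroid-per-class simplification are illustrative assumptions; the paper's full method can represent a class with multiple centroids and is more involved than this.

```python
# Minimal sketch of centroid-based incremental classification,
# assuming features from a frozen pre-trained extractor.
import numpy as np

class CentroidClassifier:
    def __init__(self):
        self.centroids = {}  # class label -> mean feature vector

    def learn_class(self, label, features):
        # Summarize a new class from a few examples; earlier
        # centroids are never touched, so nothing is forgotten.
        self.centroids[label] = np.asarray(features).mean(axis=0)

    def predict(self, feature):
        # Nearest centroid in Euclidean distance wins.
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(feature - self.centroids[c]))

# Hypothetical usage: three classes arrive incrementally, 5 shots each.
rng = np.random.default_rng(0)
clf = CentroidClassifier()
for i, label in enumerate(["class_a", "class_b", "class_c"]):
    clf.learn_class(label, rng.normal(loc=2.0 * i, size=(5, 128)))
print(clf.predict(rng.normal(loc=2.0, size=128)))  # expected: "class_b"
```

Because each class is summarized independently, learning a new class touches only its own statistics, which is the intuition behind the no-forgetting claim; discrimination quality then rests entirely on the feature extractor.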
Related papers
- Cross-Class Feature Augmentation for Class Incremental Learning [45.91253737682168]
We propose a novel class incremental learning approach by incorporating a feature augmentation technique motivated by adversarial attacks.
The proposed approach offers a unique perspective on utilizing previous knowledge in class-incremental learning, since it augments features of arbitrary target classes.
Our method consistently outperforms existing class incremental learning methods by significant margins in various scenarios.
arXiv Detail & Related papers (2023-04-04T15:48:09Z)
- SLCA: Slow Learner with Classifier Alignment for Continual Learning on a Pre-trained Model [73.80068155830708]
We present an extensive analysis of continual learning on a pre-trained model (CLPM).
We propose a simple but extremely effective approach named Slow Learner with Classifier Alignment (SLCA).
Across a variety of scenarios, our proposal provides substantial improvements for CLPM.
arXiv Detail & Related papers (2023-03-09T08:57:01Z)
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
However, CIL tends to catastrophically forget the characteristics of former classes, and its performance degrades drastically.
We provide a rigorous and unified evaluation of 17 methods on benchmark image classification tasks to identify the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- Improving Feature Generalizability with Multitask Learning in Class Incremental Learning [12.632121107536843]
Many deep learning applications, like keyword spotting, require the incorporation of new concepts (classes) over time, referred to as Class Incremental Learning (CIL).
The major challenge in CIL is catastrophic forgetting, i.e., the difficulty of preserving as much of the old knowledge as possible while learning new tasks.
We propose multitask learning during base model training to improve the feature generalizability.
Our approach enhances the average incremental learning accuracy by up to 5.5%, which enables more reliable and accurate keyword spotting over time.
arXiv Detail & Related papers (2022-04-26T07:47:54Z)
- Few-Shot Incremental Learning with Continually Evolved Classifiers [46.278573301326276]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points (the evaluation protocol is sketched after this list).
The difficulty is that limited data from new classes not only lead to significant overfitting but also exacerbate the notorious catastrophic forgetting problem.
We propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
arXiv Detail & Related papers (2021-04-07T10:54:51Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve the performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Class-incremental learning: survey and performance evaluation on image classification [38.27344435075399]
Incremental learning allows for efficient resource usage by eliminating the need to retrain from scratch at the arrival of new data.
The main challenge for incremental learning is catastrophic forgetting, which refers to the precipitous drop in performance on previously learned tasks after learning a new one.
Recently, we have seen a shift towards class-incremental learning where the learner must discriminate at inference time between all classes seen in previous tasks without recourse to a task-ID.
arXiv Detail & Related papers (2020-10-28T23:28:15Z)
- Few-Shot Class-Incremental Learning [68.75462849428196]
We focus on a challenging but practical few-shot class-incremental learning (FSCIL) problem.
FSCIL requires CNN models to incrementally learn new classes from very few labelled samples, without forgetting the previously learned ones.
We represent the knowledge using a neural gas (NG) network, which can learn and preserve the topology of the feature manifold formed by different classes.
arXiv Detail & Related papers (2020-04-23T03:38:33Z)
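Several entries above evaluate under the same few-shot class-incremental protocol: a base session trained on many classes with ample data, followed by a series of N-way K-shot sessions, scoring joint accuracy over all classes seen so far. The following self-contained sketch illustrates that protocol with synthetic Gaussian features and a simple nearest-mean learner; the data, session sizes, and learner are placeholders rather than any paper's actual setup.

```python
# Hedged sketch of the FSCIL evaluation protocol: base session,
# then N-way K-shot sessions, with joint accuracy over all seen classes.
import numpy as np

rng = np.random.default_rng(0)
DIM = 32

def sample(cls, n):
    # Synthetic stand-in for extracted features: one blob per class.
    return rng.normal(loc=3.0 * cls, scale=1.0, size=(n, DIM))

means = {}  # class id -> stored class mean (never overwritten)

def fit(classes, shots):
    for c in classes:
        means[c] = sample(c, shots).mean(axis=0)

def joint_accuracy(per_class_test=20):
    # Evaluate over ALL classes seen so far, as CIL benchmarks do.
    hits = total = 0
    for c in means:
        for x in sample(c, per_class_test):
            pred = min(means, key=lambda k: np.linalg.norm(x - means[k]))
            hits += int(pred == c)
            total += 1
    return hits / total

fit(range(10), shots=100)            # base session: 10 classes, many shots
print("base:", joint_accuracy())
for s in range(3):                   # three 5-way 5-shot sessions
    fit(range(10 + 5 * s, 15 + 5 * s), shots=5)
    print(f"session {s + 1}:", joint_accuracy())
```

The reported "average incremental accuracy" in this literature is typically the mean of these per-session joint accuracies.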
This list is automatically generated from the titles and abstracts of the papers on this site.