Memory-Efficient Incremental Learning Through Feature Adaptation
- URL: http://arxiv.org/abs/2004.00713v2
- Date: Mon, 24 Aug 2020 21:44:38 GMT
- Title: Memory-Efficient Incremental Learning Through Feature Adaptation
- Authors: Ahmet Iscen, Jeffrey Zhang, Svetlana Lazebnik, Cordelia Schmid
- Abstract summary: We introduce an approach for incremental learning that preserves feature descriptors of training images from previously learned classes.
Keeping the much lower-dimensional feature embeddings of images reduces the memory footprint significantly.
Experimental results show that our method achieves state-of-the-art classification accuracy in incremental learning benchmarks.
- Score: 71.1449769528535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce an approach for incremental learning that preserves feature
descriptors of training images from previously learned classes, instead of the
images themselves, unlike most existing work. Keeping the much
lower-dimensional feature embeddings of images reduces the memory footprint
significantly. We assume that the model is updated incrementally for new
classes as new data becomes available sequentially. This requires adapting the
previously stored feature vectors to the updated feature space without having
access to the corresponding original training images. Feature adaptation is
learned with a multi-layer perceptron, which is trained on feature pairs
corresponding to the outputs of the original and updated network on a training
image. We validate experimentally that such a transformation generalizes well
to the features of the previous set of classes, and maps features to a
discriminative subspace in the feature space. As a result, the classifier is
optimized jointly over new and old classes without requiring old class images.
Experimental results show that our method achieves state-of-the-art
classification accuracy in incremental learning benchmarks, while having at
least an order of magnitude lower memory footprint compared to image-preserving
strategies.
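The mechanism described in the abstract can be sketched roughly as follows: a small MLP is fit on pairs of features produced by the original and the updated network on the new task's images, and is then used to map the stored old-class feature vectors into the updated feature space without touching the original images. This is a minimal illustration under assumed names (`old_backbone`, `new_backbone`, `FeatureAdapter`), dimensions, and an MSE objective; it is not the authors' released implementation.

```python
# Minimal sketch of feature adaptation for memory-efficient incremental learning.
# Assumptions: `old_backbone` is the frozen network before the incremental update,
# `new_backbone` is the network after training on the new classes, and the memory
# holds feature vectors extracted with the old backbone. Names, dimensions, and the
# MSE objective are illustrative, not the paper's exact configuration.
import torch
import torch.nn as nn

class FeatureAdapter(nn.Module):
    """MLP that maps features of the old network into the new feature space."""
    def __init__(self, dim=512, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        return self.net(x)

def train_adapter(adapter, old_backbone, new_backbone, new_data_loader, epochs=10):
    """Fit the adapter on feature pairs (old(x), new(x)) computed on new-task images."""
    opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)
    old_backbone.eval(); new_backbone.eval()
    for _ in range(epochs):
        for images, _ in new_data_loader:
            with torch.no_grad():
                f_old = old_backbone(images)   # features in the previous space
                f_new = new_backbone(images)   # features in the updated space
            loss = nn.functional.mse_loss(adapter(f_old), f_new)
            opt.zero_grad(); loss.backward(); opt.step()
    return adapter

def adapt_memory(adapter, stored_features):
    """Map stored old-class features into the updated space (no images needed)."""
    with torch.no_grad():
        return adapter(stored_features)
```

Once the stored features are adapted, a classifier can be trained jointly on the adapted old-class features and on features of the new classes, which is the joint optimization over new and old classes mentioned in the abstract.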
Related papers
- Efficient Non-Exemplar Class-Incremental Learning with Retrospective Feature Synthesis [21.348252135252412]
Current Non-Exemplar Class-Incremental Learning (NECIL) methods mitigate forgetting by storing a single prototype per class.
We propose a more efficient NECIL method that replaces prototypes with synthesized retrospective features for old classes.
Our method significantly improves the efficiency of non-exemplar class-incremental learning and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-11-03T07:19:11Z)
- Feature Expansion and enhanced Compression for Class Incremental Learning [3.3425792454347616]
We propose a new algorithm that enhances the compression of previous class knowledge by cutting and mixing patches of previous class samples with the new images during compression.
We show that this new data augmentation reduces catastrophic forgetting by specifically targeting past class information and improving its compression.
arXiv Detail & Related papers (2024-05-13T06:57:18Z)
- Intra-task Mutual Attention based Vision Transformer for Few-Shot Learning [12.5354658533836]
Humans possess remarkable ability to accurately classify new, unseen images after being exposed to only a few examples.
For artificial neural network models, determining the most relevant features for distinguishing between two images with limited samples presents a challenge.
We propose an intra-task mutual attention method for few-shot learning that involves splitting the support and query samples into patches.
arXiv Detail & Related papers (2024-05-06T02:02:57Z)
- Class Incremental Learning with Pre-trained Vision-Language Models [59.15538370859431]
We propose an approach to exploiting pre-trained vision-language models (e.g. CLIP) that enables further adaptation.
Experiments on several conventional benchmarks consistently show a significant margin of improvement over the current state-of-the-art.
arXiv Detail & Related papers (2023-10-31T10:45:03Z)
- DiffusePast: Diffusion-based Generative Replay for Class Incremental Semantic Segmentation [73.54038780856554]
Class Incremental Semantic Segmentation (CISS) extends the traditional segmentation task by incrementally learning newly added classes.
Previous work has introduced generative replay, which involves replaying old class samples generated from a pre-trained GAN.
We propose DiffusePast, a novel framework featuring a diffusion-based generative replay module that generates semantically accurate images with more reliable masks guided by different instructions.
arXiv Detail & Related papers (2023-08-02T13:13:18Z)
- Adaptive Cross Batch Normalization for Metric Learning [75.91093210956116]
Metric learning is a fundamental problem in computer vision.
We show that it is equally important to ensure that the accumulated embeddings are up to date.
In particular, it is necessary to circumvent the representational drift between the accumulated embeddings and the feature embeddings at the current training iteration.
arXiv Detail & Related papers (2023-03-30T03:22:52Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods has been the use of an exemplar memory, which overcomes catastrophic forgetting by saving a subset of past data into a memory bank and replaying it when training on future tasks (a generic rehearsal sketch of this idea appears after the list).
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
- CoReS: Compatible Representations via Stationarity [20.607894099896214]
In visual search systems, compatible features enable direct comparison of old and newly learned features, allowing them to be used interchangeably over time.
We propose CoReS, a new training procedure to learn representations that are compatible with those previously learned.
We demonstrate that our training procedure largely outperforms the current state of the art and is particularly effective in the case of multiple upgrades of the training-set.
arXiv Detail & Related papers (2021-11-15T09:35:54Z)
- Instance Localization for Self-supervised Detection Pretraining [68.24102560821623]
We propose a new self-supervised pretext task, called instance localization.
We show that integration of bounding boxes into pretraining promotes better task alignment and architecture alignment for transfer learning.
Experimental results demonstrate that our approach yields state-of-the-art transfer learning results for object detection.
arXiv Detail & Related papers (2021-02-16T17:58:57Z)
- On the Exploration of Incremental Learning for Fine-grained Image Retrieval [45.48333682748607]
We consider the problem of fine-grained image retrieval in an incremental setting, when new categories are added over time.
We propose an incremental learning method to mitigate retrieval performance degradation caused by the forgetting issue.
Our method effectively mitigates the catastrophic forgetting on the original classes while achieving high performance on the new classes.
arXiv Detail & Related papers (2020-10-15T21:07:44Z)
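Several entries above (notably the Memory Transformer Network summary) rest on exemplar rehearsal: keeping a small buffer of past images and replaying them alongside new data to counter catastrophic forgetting. The sketch below illustrates only that generic idea; the buffer size, sampling policy, and class names are assumptions and do not reproduce any specific paper's method.

```python
# Generic exemplar-rehearsal sketch (not any specific paper's implementation).
# An episodic memory stores a few images per old class; each training batch of
# new-class data is mixed with a batch sampled from the memory.
import random
import torch

class ExemplarMemory:
    def __init__(self, per_class=20):
        self.per_class = per_class
        self.buffer = {}  # class id -> list of (image_tensor, label)

    def add(self, images, labels):
        for img, y in zip(images, labels):
            slot = self.buffer.setdefault(int(y), [])
            if len(slot) < self.per_class:
                slot.append((img, int(y)))

    def sample(self, batch_size):
        pool = [item for items in self.buffer.values() for item in items]
        if not pool:
            return None
        batch = random.sample(pool, min(batch_size, len(pool)))
        xs = torch.stack([x for x, _ in batch])
        ys = torch.tensor([y for _, y in batch])
        return xs, ys

def training_step(model, optimizer, new_images, new_labels, memory, criterion):
    """One rehearsal step: loss on new data plus loss on replayed exemplars."""
    optimizer.zero_grad()
    loss = criterion(model(new_images), new_labels)
    replay = memory.sample(new_images.size(0))
    if replay is not None:
        old_images, old_labels = replay
        loss = loss + criterion(model(old_images), old_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that the main paper above stores low-dimensional feature vectors rather than images, which is precisely why its memory footprint is at least an order of magnitude smaller than such image-preserving strategies.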
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.