Move-to-Data: A new Continual Learning approach with Deep CNNs,
Application for image-class recognition
- URL: http://arxiv.org/abs/2006.07152v1
- Date: Fri, 12 Jun 2020 13:04:58 GMT
- Authors: Miltiadis Poursanidis (LaBRI), Jenny Benois-Pineau (LaBRI), Akka
Zemmari (LaBRI), Boris Mansencal (LaBRI), Aymar de Rugy (INCIA)
- Abstract summary: The model must be pre-trained during a "training recording phase" and then adjusted to newly arriving data.
We propose a fast continual learning layer at the end of the neural network.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many real-life applications of supervised learning, not all of
the training data are available at the same time. Examples include lifelong
image classification, recognition of environmental objects during the
interaction of instrumented persons with their environment, and the enrichment
of an online database with additional images. The model must be pre-trained
during a "training recording phase" and then adjusted to newly arriving data;
this is the task of incremental/continual learning approaches. Among the
different problems these approaches have to solve (introducing new categories
into the model, refining existing categories into sub-categories and extending
trained classifiers over them, ...), we focus on the problem of adjusting a
pre-trained model with new additional training data for existing categories. We
propose a fast continual learning layer at the end of the neural network. The
results are illustrated on the open-source CIFAR benchmark dataset. The
proposed scheme yields performance similar to retraining, but at a drastically
lower computational cost.
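The abstract presents the continual-learning layer as a lightweight alternative to full retraining for existing categories. A minimal sketch of one plausible reading of this idea, in which the final layer keeps one representative vector per class and each new sample of an already-known class nudges its nearest representative toward that sample's embedding. The update rule, the step size `alpha`, and all function names below are assumptions for illustration, not the paper's exact method:

```python
import numpy as np

def move_to_data(rep, x, alpha=0.1):
    """Nudge the class representative `rep` toward the new (unit-norm)
    feature vector `x` by step size `alpha`, then renormalize.
    (Hypothetical update rule; `alpha` is an assumed hyperparameter.)"""
    rep = rep + alpha * (x - rep)
    return rep / np.linalg.norm(rep)

def classify(reps, x):
    """Predict the class whose representative has the highest
    cosine similarity with the feature vector `x`."""
    x = x / np.linalg.norm(x)
    return int(np.argmax(reps @ x))

# Toy usage: two classes represented in a 4-D feature space.
reps = np.eye(2, 4)                        # one unit representative per class
x_new = np.array([0.9, 0.1, 0.4, 0.0])     # embedding of a new sample
label = classify(reps, x_new)              # nearest class before the update
reps[label] = move_to_data(reps[label], x_new / np.linalg.norm(x_new))
```

The appeal of such a scheme is that each incoming sample costs one vector update in the last layer, with no backpropagation through the network, which is consistent with the claimed "drastically lower computational cost" relative to retraining.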
Related papers
- CLOFAI: A Dataset of Real And Fake Image Classification Tasks for Continual Learning [1.7256001727746018]
We introduce a new dataset called CLOFAI (Continual Learning On Fake and Authentic Images).
It takes the form of a domain-incremental image classification problem.
We set a baseline on this novel dataset using three foundational continual learning methods.
arXiv Detail & Related papers (2025-01-19T18:53:30Z) - Transferable Post-training via Inverse Value Learning [83.75002867411263]
We propose modeling changes at the logits level during post-training using a separate neural network (i.e., the value network)
After training this network on a small base model using demonstrations, this network can be seamlessly integrated with other pre-trained models during inference.
We demonstrate that the resulting value network has broad transferability across pre-trained models of different parameter sizes.
arXiv Detail & Related papers (2024-10-28T13:48:43Z) - Learning from Neighbors: Category Extrapolation for Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z) - Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - Premonition: Using Generative Models to Preempt Future Data Changes in Continual Learning [63.850451635362425]
Continual learning requires a model to adapt to ongoing changes in the data distribution.
We show that the combination of a large language model and an image generation model can similarly provide useful premonitions.
We find that the backbone of our pre-trained networks can learn representations useful for the downstream continual learning problem.
arXiv Detail & Related papers (2024-03-12T06:29:54Z) - Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order.
arXiv Detail & Related papers (2023-06-21T01:43:25Z) - ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP)
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z) - Prototypical quadruplet for few-shot class incremental learning [24.814045065163135]
We propose a novel method that improves classification robustness by identifying a better embedding space using an improved contrasting loss.
Our approach retains previously acquired knowledge in the embedding space, even when trained with new classes.
We demonstrate the effectiveness of our method by showing that the embedding space remains intact after training the model with new classes and outperforms existing state-of-the-art algorithms in terms of accuracy across different sessions.
arXiv Detail & Related papers (2022-11-05T17:19:14Z) - A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods has been the use of a memory of exemplars, which overcomes the issue of catastrophic forgetting by saving a subset of past data into a memory bank and utilizing it to prevent forgetting when training future tasks.
arXiv Detail & Related papers (2022-10-10T08:27:28Z) - Reinforcement Learning Approach to Active Learning for Image Classification [0.0]
This thesis studies active learning as one possible solution for reducing the amount of data that must be processed by hand.
A newly proposed framework that casts the active learning workflow as a reinforcement learning problem is adapted to image classification.
arXiv Detail & Related papers (2021-08-12T08:34:02Z) - A Survey on Self-supervised Pre-training for Sequential Transfer Learning in Neural Networks [1.1802674324027231]
Self-supervised pre-training for transfer learning is becoming an increasingly popular technique to improve state-of-the-art results using unlabeled data.
We provide an overview of the taxonomy for self-supervised learning and transfer learning, and highlight some prominent methods for designing pre-training tasks across different domains.
arXiv Detail & Related papers (2020-07-01T22:55:48Z)
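Several of the entries above, most explicitly the Memory Transformer Network, counter catastrophic forgetting with an exemplar memory: a subset of past data is stored and replayed alongside new-task batches. A minimal sketch of that general idea, using reservoir sampling to keep a fixed-capacity, approximately uniform sample of the stream (class and parameter names are hypothetical, not taken from any of the papers):

```python
import random

class ExemplarMemory:
    """Fixed-capacity store of past (sample, label) pairs.
    Reservoir sampling keeps the memory an approximately
    uniform subset of everything seen so far."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, sample, label):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((sample, label))
        else:
            # Replace a stored exemplar with probability capacity/seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (sample, label)

    def rehearsal_batch(self, new_batch, k):
        """Mix up to `k` stored exemplars into a batch of new-task
        samples so that gradients also cover old classes."""
        replay = random.sample(self.data, min(k, len(self.data)))
        return list(new_batch) + replay

# Toy usage: stream 100 samples into a 5-slot memory, then
# assemble a rehearsal batch for training on a new task.
mem = ExemplarMemory(capacity=5)
for i in range(100):
    mem.add(f"x{i}", i % 10)
batch = mem.rehearsal_batch([("y0", 0)], k=3)  # 1 new sample + 3 replayed
```

The memory bank trades storage for stability: the replayed gradients anchor the decision boundaries of old classes while the network fits the new ones.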
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.