Class-Incremental Learning with Repetition
- URL: http://arxiv.org/abs/2301.11396v2
- Date: Mon, 19 Jun 2023 06:48:43 GMT
- Title: Class-Incremental Learning with Repetition
- Authors: Hamed Hemati, Andrea Cossu, Antonio Carta, Julio Hurtado, Lorenzo
Pellegrini, Davide Bacciu, Vincenzo Lomonaco, Damian Borth
- Abstract summary: We focus on the family of Class-Incremental with Repetition (CIR) scenarios, where repetition is embedded in the definition of the stream.
We propose two stream generators that produce a wide range of CIR streams starting from a single dataset and a few interpretable parameters.
We then present a novel replay strategy that exploits repetition and counteracts the natural imbalance present in the stream.
- Score: 17.89286445250716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-world data streams naturally include the repetition of previous
concepts. From a Continual Learning (CL) perspective, repetition is a property
of the environment and, unlike replay, cannot be controlled by the agent.
Nowadays, the Class-Incremental (CI) scenario represents the leading test-bed
for assessing and comparing CL strategies. This scenario type is very easy to
use, but it never allows revisiting previously seen classes, thus completely
neglecting the role of repetition. We focus on the family of Class-Incremental
with Repetition (CIR) scenarios, where repetition is embedded in the definition
of the stream. We propose two stochastic stream generators that produce a wide
range of CIR streams starting from a single dataset and a few interpretable
control parameters. We conduct the first comprehensive evaluation of repetition
in CL by studying the behavior of existing CL strategies under different CIR
streams. We then present a novel replay strategy that exploits repetition and
counteracts the natural imbalance present in the stream. On both CIFAR100 and
TinyImageNet, our strategy outperforms other replay approaches, which are not
designed for environments with repetition.
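As a rough illustration of what a sampling-based CIR stream generator could look like, the sketch below builds a stream in which already-seen classes may reappear in later experiences. All names and parameters here (`generate_cir_stream`, `p_first`, `p_rep`, `samples_per_class`) are illustrative assumptions, not the paper's actual generators or control parameters.

```python
import random

def generate_cir_stream(dataset_by_class, n_experiences,
                        p_first=0.3, p_rep=0.4, samples_per_class=50, seed=0):
    """Toy sampling-based CIR stream generator (illustrative sketch).

    dataset_by_class: dict mapping a class label to its list of samples.
    p_first: probability that a not-yet-seen class first appears in an experience.
    p_rep:   probability that an already-seen class repeats in an experience.
    Returns a list of experiences, each a list of (sample, label) pairs.
    """
    rng = random.Random(seed)
    seen = set()
    stream = []
    for _ in range(n_experiences):
        experience = []
        for label, samples in dataset_by_class.items():
            p = p_rep if label in seen else p_first
            if rng.random() < p:
                seen.add(label)
                picked = rng.sample(samples, min(samples_per_class, len(samples)))
                experience.extend((x, label) for x in picked)
        stream.append(experience)
    return stream
```

Tuning `p_first` against `p_rep` interpolates between a Class-Incremental-like stream, where classes never return, and a heavily repetitive one; this is the kind of interpretable control the abstract refers to.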
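The paper's replay strategy is described in the full text; the sketch below only illustrates the general idea of counteracting stream imbalance with a buffer that reserves more slots for classes the stream repeats rarely, since frequently repeated classes are refreshed by the stream itself. The class name and the quota rule are assumptions for illustration.

```python
import random
from collections import defaultdict

class FrequencyAwareBuffer:
    """Illustrative replay buffer biased toward infrequently repeated classes."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.rng = random.Random(seed)
        self.occurrences = defaultdict(int)  # experiences in which each class appeared
        self.slots = {}                      # class label -> stored samples

    def update(self, experience):
        for label in {y for _, y in experience}:
            self.occurrences[label] += 1
        # Inverse-frequency quotas: rarely repeated classes keep more exemplars.
        inv = {c: 1.0 / n for c, n in self.occurrences.items()}
        total = sum(inv.values())
        for c in self.occurrences:
            quota = max(1, int(self.capacity * inv[c] / total))
            pool = self.slots.get(c, []) + [x for x, y in experience if y == c]
            self.rng.shuffle(pool)
            self.slots[c] = pool[:quota]

    def sample(self, k):
        flat = [(x, y) for y, xs in self.slots.items() for x in xs]
        return self.rng.sample(flat, min(k, len(flat)))
```

Because quotas shrink for classes the stream keeps repeating, the buffer spends its capacity on the classes that replay actually has to protect.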
Related papers
- Incremental Learning with Repetition via Pseudo-Feature Projection [3.4734633097581815]
We investigate how exemplar-free incremental learning strategies are affected by data repetition.
Our proposed exemplar-free method achieves competitive results in the classic scenario without repetition, and state-of-the-art performance in the one with repetition.
arXiv Detail & Related papers (2025-02-27T09:43:35Z)
- Continual Learning in the Presence of Repetition [29.03044158045849]
Continual learning (CL) provides a framework for training models in ever-evolving environments.
The concept of repetition in the data stream is not often considered in standard benchmarks for CL.
This report provides a summary of the CLVision challenge at CVPR 2023, which focused on the topic of repetition in class-incremental learning.
arXiv Detail & Related papers (2024-05-07T08:15:48Z)
- Continual Referring Expression Comprehension via Dual Modular Memorization [133.46886428655426]
Referring Expression Comprehension (REC) aims to localize an image region of a given object described by a natural-language expression.
Existing REC algorithms make the strong assumption that training data are given upfront, which degrades their practicality in real-world scenarios.
In this paper, we propose Continual Referring Expression Comprehension (CREC), a new setting for REC in which a model learns on a stream of incoming tasks.
In order to continuously improve the model on sequential tasks without forgetting previously learned knowledge and without repeatedly re-training from scratch, we propose an effective baseline method named Dual Modular Memorization.
arXiv Detail & Related papers (2023-11-25T02:58:51Z)
- Efficient Curriculum based Continual Learning with Informative Subset Selection for Remote Sensing Scene Classification [27.456319725214474]
We tackle the problem of class incremental learning (CIL) in the realm of landcover classification from optical remote sensing (RS) images.
We propose a novel CIL framework inspired by the recent success of replay-memory based approaches.
arXiv Detail & Related papers (2023-09-03T01:25:40Z)
- RanPAC: Random Projections and Pre-trained Models for Continual Learning [59.07316955610658]
Continual learning (CL) aims to learn different tasks (such as classification) in a non-stationary data stream without forgetting old ones.
We propose a concise and effective approach for CL with pre-trained models.
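As a hedged sketch of the recipe the title points at (random projections on top of frozen pre-trained features, followed by class prototypes), the snippet below is a minimal NumPy illustration; RanPAC's actual method, including how it treats the prototypes, is specified in the paper, and every name here is an assumption.

```python
import numpy as np

def fit_random_projection_prototypes(features, labels, out_dim=2000, seed=0):
    """Expand frozen features with a fixed random matrix and a nonlinearity,
    then keep one prototype (mean projected feature) per class."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((features.shape[1], out_dim))
    H = np.maximum(features @ W, 0.0)  # fixed random expansion + ReLU
    prototypes = {c: H[labels == c].mean(axis=0) for c in np.unique(labels)}
    return W, prototypes

def predict_nearest_prototype(features, W, prototypes):
    """Classify by cosine similarity to the class prototypes."""
    H = np.maximum(features @ W, 0.0)
    classes = list(prototypes)
    P = np.stack([prototypes[c] for c in classes])
    H = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
    P = P / (np.linalg.norm(P, axis=1, keepdims=True) + 1e-12)
    return np.array(classes)[(H @ P.T).argmax(axis=1)]
```

Since `W` stays fixed and prototypes are per-class averages, new classes can be added without revisiting old data, which is what makes this family of approaches attractive for CL.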
arXiv Detail & Related papers (2023-07-05T12:49:02Z)
- Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion [50.03041373044267]
We propose a supervised contrastive learning framework with an adaptive classification criterion for continual learning.
Experiments show that the proposed framework achieves state-of-the-art performance and is better at overcoming catastrophic forgetting than the classification baselines.
arXiv Detail & Related papers (2023-05-20T19:22:40Z)
- PCR: Proxy-based Contrastive Replay for Online Class-Incremental Continual Learning [16.67238259139417]
Existing replay-based methods effectively alleviate catastrophic forgetting by saving and replaying part of the old data in a proxy-based or contrastive-based manner.
We propose a novel replay-based method called proxy-based contrastive replay (PCR).
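To make the "proxy-based" part concrete, here is a minimal sketch of a proxy-based contrastive objective, in which each sample is contrasted against learnable class proxies (for example, the rows of the classifier weight matrix) rather than against other samples. This illustrates the general idea only; PCR's exact loss and replay mechanics are in the paper.

```python
import torch
import torch.nn.functional as F

def proxy_contrastive_loss(features, labels, proxies, temperature=0.1):
    """Pull each sample toward its class proxy, push it from the others.

    features: (batch, d) embeddings; labels: (batch,) class indices;
    proxies: (n_classes, d) learnable matrix. Illustrative, not PCR's loss.
    """
    f = F.normalize(features, dim=1)
    p = F.normalize(proxies, dim=1)
    logits = f @ p.t() / temperature  # sample-to-proxy similarities
    return F.cross_entropy(logits, labels)
```

Contrasting against proxies instead of replayed samples keeps every class represented in each update, even when the replay batch happens to miss some old classes.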
arXiv Detail & Related papers (2023-04-10T06:35:19Z)
- Real-Time Evaluation in Online Continual Learning: A New Hope [104.53052316526546]
We evaluate current Continual Learning (CL) methods with respect to their computational costs.
A simple baseline outperforms state-of-the-art CL methods under this evaluation.
This surprisingly suggests that the majority of existing CL literature is tailored to a specific class of streams that is not practical.
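A toy sketch of such a compute-aware protocol: the stream advances while the model trains, so a method whose update step is slow never trains on the intermediate batches it missed. The cost model and the `model.predict`/`model.train_on` interface are assumptions for illustration, not the paper's protocol.

```python
def real_time_eval(stream, model, train_cost=3):
    """Evaluate a learner on a stream that does not wait for it.

    stream: iterable of (x, y) pairs; train_cost: number of stream steps
    one training update occupies. Illustrative sketch only.
    """
    correct = total = 0
    busy_until = 0  # stream index until which the model is still training
    for t, (x, y) in enumerate(stream):
        correct += int(model.predict(x) == y)  # predict before training on x
        total += 1
        if t >= busy_until:          # model is idle: start a training update
            model.train_on(x, y)
            busy_until = t + train_cost
    return correct / max(total, 1)
```

Under this accounting, an expensive strategy can lose to a cheap baseline simply because the baseline gets to train on far more of the stream.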
arXiv Detail & Related papers (2023-02-02T12:21:10Z)
- On Continual Model Refinement in Out-of-Distribution Data Streams [64.62569873799096]
Real-world natural language processing (NLP) models need to be continually updated to fix the prediction errors in out-of-distribution (OOD) data streams.
Existing continual learning (CL) problem setups cannot cover such a realistic and complex scenario.
We propose a new CL problem formulation dubbed continual model refinement (CMR).
arXiv Detail & Related papers (2022-05-04T11:54:44Z)
- The CLEAR Benchmark: Continual LEArning on Real-World Imagery [77.98377088698984]
Continual learning (CL) is widely regarded as a crucial challenge for lifelong AI.
We introduce CLEAR, the first continual image classification benchmark dataset with a natural temporal evolution of visual concepts.
We find that a simple unsupervised pre-training step can already boost state-of-the-art CL algorithms.
arXiv Detail & Related papers (2022-01-17T09:09:09Z)
- Supervised Contrastive Replay: Revisiting the Nearest Class Mean Classifier in Online Class-Incremental Continual Learning [17.310385256678654]
Class-incremental continual learning (CL) studies the problem of learning new classes continually from an online non-stationary data stream.
While memory replay has shown promising results, the recency bias in online learning caused by the commonly used Softmax classifier remains an unsolved challenge.
Although the Nearest-Class-Mean (NCM) classifier is significantly undervalued in the CL community, we demonstrate that it is a simple yet effective substitute for the Softmax classifier.
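Because the entry hinges on the NCM classifier, a minimal NumPy sketch of it follows; SCR's actual pipeline pairs NCM with a contrastively trained encoder, and the incremental-update interface here is an illustrative choice.

```python
import numpy as np

class NearestClassMean:
    """Nearest-Class-Mean classifier over feature embeddings.

    Class means are updated incrementally, so no jointly trained Softmax
    layer exists to develop a recency bias toward the newest classes.
    """

    def __init__(self):
        self.sums, self.counts = {}, {}

    def partial_fit(self, features, labels):
        for c in np.unique(labels):
            fc = features[labels == c]
            self.sums[c] = self.sums.get(c, 0.0) + fc.sum(axis=0)
            self.counts[c] = self.counts.get(c, 0) + len(fc)

    def predict(self, features):
        classes = list(self.sums)
        means = np.stack([self.sums[c] / self.counts[c] for c in classes])
        dists = ((features[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
        return np.array(classes)[dists.argmin(axis=1)]
```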
arXiv Detail & Related papers (2021-03-22T20:27:34Z)
- Continual Learning for Recurrent Neural Networks: a Review and Empirical Evaluation [12.27992745065497]
Continual Learning with recurrent neural networks could pave the way to a large number of applications where incoming data is non-stationary.
We organize the literature on CL for sequential data processing by providing a categorization of the contributions and a review of the benchmarks.
We propose two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications.
arXiv Detail & Related papers (2021-03-12T19:25:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.