A Comprehensive Empirical Evaluation on Online Continual Learning
- URL: http://arxiv.org/abs/2308.10328v3
- Date: Sat, 23 Sep 2023 21:09:12 GMT
- Title: A Comprehensive Empirical Evaluation on Online Continual Learning
- Authors: Albin Soutif--Cormerais, Antonio Carta, Andrea Cossu, Julio Hurtado,
Hamed Hemati, Vincenzo Lomonaco, Joost Van de Weijer
- Abstract summary: We evaluate methods from the literature that tackle online continual learning.
We focus on the class-incremental setting in the context of image classification.
We compare these methods on the Split-CIFAR100 and Split-TinyImagenet benchmarks.
- Score: 20.39495058720296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online continual learning aims to get closer to a live learning
experience by learning directly on a stream of data with a temporally shifting
distribution, while storing only a minimal amount of data from that stream. In
this empirical evaluation, we compare various methods from the literature that
tackle online continual learning. More specifically, we focus on the
class-incremental setting in the context of image classification, where the
learner must incrementally learn new classes from a stream of data. We compare
these methods on the Split-CIFAR100 and Split-TinyImagenet benchmarks,
measuring their average accuracy, forgetting, stability, and the quality of the
learned representations, so as to evaluate these aspects of each algorithm both
at the end of training and throughout the whole training period. We find that
most methods suffer from stability and underfitting issues. However, the
representations they learn are comparable to those obtained from i.i.d.
training under the same computational budget. No clear winner emerges from the
results, and basic experience replay, when properly tuned and implemented, is a
very strong baseline. We release our modular and extensible codebase at
https://github.com/AlbinSou/ocl_survey, based on the Avalanche framework, to
reproduce our results and encourage future research.
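Since the abstract singles out basic experience replay as a very strong baseline, the following is a minimal sketch of what such a baseline can look like, assuming a plain PyTorch training loop. The reservoir-sampling buffer, the batch composition, and all names below are illustrative assumptions, not the implementation in the released ocl_survey codebase (which builds on the Avalanche framework).

```python
# Illustrative sketch of a basic experience-replay baseline (assumptions only,
# not the authors' implementation).
import random
import torch
import torch.nn.functional as F


class ReservoirBuffer:
    """Fixed-size memory filled with reservoir sampling over the stream."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []      # list of (x, y) example tensors
        self.n_seen = 0

    def add(self, x, y):
        # Iterate over the mini-batch and reservoir-sample each example.
        for xi, yi in zip(x, y):
            self.n_seen += 1
            if len(self.data) < self.capacity:
                self.data.append((xi, yi))
            else:
                j = random.randrange(self.n_seen)
                if j < self.capacity:
                    self.data[j] = (xi, yi)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


def train_on_stream(model, optimizer, stream, buffer, device="cpu"):
    """One pass over an online stream of (x, y) mini-batches with replay."""
    model.train()
    for x, y in stream:
        x, y = x.to(device), y.to(device)
        loss = F.cross_entropy(model(x), y)
        if len(buffer.data) > 0:
            # Replay: add the loss on a batch of stored past examples.
            mx, my = buffer.sample(x.size(0))
            loss = loss + F.cross_entropy(model(mx.to(device)), my.to(device))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        buffer.add(x.cpu(), y.cpu())
```

Per the abstract, it is this kind of simple, properly tuned replay baseline that remains very hard to beat in the class-incremental online setting.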
Related papers
- Random Representations Outperform Online Continually Learned Representations [68.42776779425978]
We show that existing online continually trained deep networks produce inferior representations compared to a simple, pre-defined random transform.
Our method, called RanDumb, significantly outperforms state-of-the-art continually learned representations across all online continual learning benchmarks.
Our study reveals the significant limitations of representation learning, particularly in low-exemplar and online continual learning scenarios.
arXiv Detail & Related papers (2024-02-13T22:07:29Z)
- Revisiting Long-tailed Image Classification: Survey and Benchmarks with New Evaluation Metrics [88.39382177059747]
A corpus of metrics is designed for measuring the accuracy, robustness, and bounds of algorithms for learning with long-tailed distributions.
Based on our benchmarks, we re-evaluate the performance of existing methods on CIFAR10 and CIFAR100 datasets.
arXiv Detail & Related papers (2023-02-03T02:40:54Z)
- Real-Time Evaluation in Online Continual Learning: A New Hope [104.53052316526546]
We evaluate current Continual Learning (CL) methods with respect to their computational costs.
A simple baseline outperforms state-of-the-art CL methods under this evaluation.
This surprisingly suggests that the majority of existing CL literature is tailored to a specific class of streams that is not practical.
arXiv Detail & Related papers (2023-02-02T12:21:10Z)
- Standardized Evaluation of Machine Learning Methods for Evolving Data Streams [11.17545155325116]
We propose a comprehensive set of properties for high-quality machine learning in evolving data streams.
We discuss sensible performance measures and evaluation strategies for online predictive modelling, online feature selection and concept drift detection.
The proposed evaluation standards are provided in a new Python framework called float.
arXiv Detail & Related papers (2022-04-28T16:40:33Z)
- vCLIMB: A Novel Video Class Incremental Learning Benchmark [53.90485760679411]
We introduce vCLIMB, a novel video continual learning benchmark.
vCLIMB is a standardized test-bed to analyze catastrophic forgetting of deep models in video continual learning.
We propose a temporal consistency regularization that can be applied on top of memory-based continual learning methods.
arXiv Detail & Related papers (2022-01-23T22:14:17Z)
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online (a minimal sketch of this test-then-train loop appears after this list).
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
- Continual Contrastive Self-supervised Learning for Image Classification [10.070132585425938]
Self-supervised learning methods show tremendous potential for learning visual representations at scale without any labeled data.
To improve the visual representation of self-supervised learning, larger and more varied data is needed.
In this paper, we make a first attempt at continual contrastive self-supervised learning by proposing a rehearsal method.
arXiv Detail & Related papers (2021-07-05T03:53:42Z)
- Low-Regret Active learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
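For the test-then-train protocol described in the "Online Continual Learning with Natural Distribution Shifts" entry above, here is a minimal sketch of how such an online evaluation loop can be written, assuming plain PyTorch. The function name and the running-accuracy bookkeeping are assumptions made for illustration, not the API of any of the cited benchmarks.

```python
# Illustrative test-then-train ("prequential") evaluation loop: every incoming
# batch is first used for evaluation, then for a training update.
import torch
import torch.nn.functional as F


def online_test_then_train(model, optimizer, stream, device="cpu"):
    """Returns the running online accuracy over the stream."""
    correct, total = 0, 0
    for x, y in stream:
        x, y = x.to(device), y.to(device)

        # 1) Test on the incoming batch before the model has seen it.
        model.eval()
        with torch.no_grad():
            preds = model(x).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.numel()

        # 2) Then train on the same batch (optionally with replay as well).
        model.train()
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return correct / max(total, 1)
```

Because each batch is evaluated before the model trains on it, the resulting accuracy reflects online learning efficacy as well as information retention, which is the point made in that entry.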