Real-Time Evaluation in Online Continual Learning: A New Hope
- URL: http://arxiv.org/abs/2302.01047v3
- Date: Fri, 24 Mar 2023 07:23:36 GMT
- Title: Real-Time Evaluation in Online Continual Learning: A New Hope
- Authors: Yasir Ghunaim, Adel Bibi, Kumail Alhamoud, Motasem Alfarra, Hasan Abed
Al Kader Hammoud, Ameya Prabhu, Philip H. S. Torr, Bernard Ghanem
- Abstract summary: We evaluate current Continual Learning (CL) methods with respect to their computational costs.
A simple baseline outperforms state-of-the-art CL methods under this evaluation.
This surprising result suggests that the majority of existing CL literature is tailored to a specific class of streams that is not practical.
- Score: 104.53052316526546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current evaluations of Continual Learning (CL) methods typically assume that
there is no constraint on training time and computation. This is an unrealistic
assumption for any real-world setting, which motivates us to propose a
practical real-time evaluation of continual learning, in which the stream does
not wait for the model to finish training before revealing the next data for
prediction. To do this, we evaluate current CL methods with respect to their
computational costs.
computational costs. We conduct extensive experiments on CLOC, a large-scale
dataset containing 39 million time-stamped images with geolocation labels. We
show that a simple baseline outperforms state-of-the-art CL methods under this
evaluation, questioning the applicability of existing methods in realistic
settings. In addition, we explore various CL components commonly used in the
literature, including memory sampling strategies and regularization approaches.
We find that all considered methods fail to be competitive against our simple
baseline. This surprising result suggests that the majority of existing CL
literature is tailored to a specific class of streams that is not practical. We
hope that the evaluation we provide will be the first step towards a paradigm
shift to consider the computational cost in the development of online continual
learning methods.
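The real-time protocol described above can be illustrated with a short simulation. This is a toy sketch, not the authors' code: the `realtime_eval` function, the running-mean "model," and the drifting label stream are all hypothetical stand-ins. The key mechanic it captures is that the stream advances at a fixed rate, so a method whose update costs k stream-steps must predict the next k samples with stale parameters.

```python
def realtime_eval(stream, predict, update, update_cost):
    """Online accuracy when the stream does not wait for training.

    stream: iterable of (x, y) pairs, revealed one per time step.
    predict: fn(params, x) -> prediction with current (possibly stale) params.
    update: fn(params, sample) -> new params.
    update_cost: number of stream steps one update takes (relative compute).
    """
    params = 0.0       # toy parameter: a running mean of recent labels
    pending = None     # update result that becomes visible once training ends
    busy_until = 0     # time step at which the in-flight update finishes
    correct = 0
    for t, (x, y) in enumerate(stream):
        if pending is not None and t >= busy_until:
            params = pending   # delayed update finally lands
            pending = None
        correct += predict(params, x) == y
        if pending is None:    # model is free: start training on this sample
            pending = update(params, (x, y))
            busy_until = t + update_cost
    return correct / (t + 1)

# Hypothetical drifting stream: labels flip from 0 to 1 halfway through.
stream = [(i, 0) for i in range(50)] + [(i, 1) for i in range(50)]
pred = lambda p, x: int(p > 0.5)
upd = lambda p, sample: 0.7 * p + 0.3 * sample[1]

acc_fast = realtime_eval(stream, pred, upd, update_cost=1)   # cheap method
acc_slow = realtime_eval(stream, pred, upd, update_cost=20)  # expensive method
# The cheap method adapts within a few steps of the drift; the expensive one
# predicts with stale parameters for most of the second half.
```

Under this accounting, the expensive method pays for its compute by falling behind the stream, which is the mechanism by which the paper's simple baseline overtakes more elaborate CL methods.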
Related papers
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP.
arXiv Detail & Related papers (2024-02-06T15:45:27Z)
- Continual Learning with Pre-Trained Models: A Survey [61.97613090666247]
Continual Learning aims to overcome catastrophic forgetting of previously acquired knowledge when learning new tasks.
This paper presents a comprehensive survey of the latest advancements in PTM-based CL.
arXiv Detail & Related papers (2024-01-29T18:27:52Z)
- Density Distribution-based Learning Framework for Addressing Online Continual Learning Challenges [4.715630709185073]
We introduce a density distribution-based learning framework for online Continual Learning.
Our framework achieves superior average accuracy and time-space efficiency.
Our method outperforms popular CL approaches by a significant margin.
arXiv Detail & Related papers (2023-11-22T09:21:28Z)
- A Comprehensive Empirical Evaluation on Online Continual Learning [20.39495058720296]
We evaluate methods from the literature that tackle online continual learning.
We focus on the class-incremental setting in the context of image classification.
We compare these methods on the Split-CIFAR100 and Split-TinyImagenet benchmarks.
arXiv Detail & Related papers (2023-08-20T17:52:02Z)
- Computationally Budgeted Continual Learning: What Does Matter? [128.0827987414154]
Continual Learning (CL) aims to sequentially train models on streams of incoming data that vary in distribution by preserving previous knowledge while adapting to new data.
Current CL literature focuses on restricted access to previously seen data, while imposing no constraints on the computational budget for training.
We revisit this problem with a large-scale benchmark and analyze the performance of traditional CL approaches in a compute-constrained setting.
arXiv Detail & Related papers (2023-03-20T14:50:27Z)
- Do Pre-trained Models Benefit Equally in Continual Learning? [25.959813589169176]
Existing work on continual learning (CL) is primarily devoted to developing algorithms for models trained from scratch.
Despite their encouraging performance on contrived benchmarks, these algorithms show dramatic performance drops in real-world scenarios.
This paper advocates the systematic introduction of pre-training to CL.
arXiv Detail & Related papers (2022-10-27T18:03:37Z)
- Schedule-Robust Online Continual Learning [45.325658404913945]
A continual learning algorithm learns from a non-stationary data stream.
A key challenge in CL is to design methods robust against arbitrary schedules over the same underlying data.
We present a new perspective on CL, as the process of learning a schedule-robust predictor, followed by adapting the predictor using only replay data.
arXiv Detail & Related papers (2022-10-11T15:55:06Z)
- A Study of Continual Learning Methods for Q-Learning [78.6363825307044]
We present an empirical study on the use of continual learning (CL) methods in a reinforcement learning (RL) scenario.
Our results show that dedicated CL methods can significantly improve learning compared to the baseline technique of "experience replay".
arXiv Detail & Related papers (2022-06-08T14:51:52Z)
- The CLEAR Benchmark: Continual LEArning on Real-World Imagery [77.98377088698984]
Continual learning (CL) is widely regarded as a crucial challenge for lifelong AI.
We introduce CLEAR, the first continual image classification benchmark dataset with a natural temporal evolution of visual concepts.
We find that a simple unsupervised pre-training step can already boost state-of-the-art CL algorithms.
arXiv Detail & Related papers (2022-01-17T09:09:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.