Exemplar-free Online Continual Learning
- URL: http://arxiv.org/abs/2202.05491v1
- Date: Fri, 11 Feb 2022 08:03:22 GMT
- Title: Exemplar-free Online Continual Learning
- Authors: Jiangpeng He and Fengqing Zhu
- Abstract summary: Continual learning aims to learn new tasks from sequentially available data under the condition that each data sample is observed only once by the learner.
Recent works have achieved remarkable results by storing part of the learned task data as exemplars for knowledge replay.
We propose a novel exemplar-free method that leverages a nearest-class-mean (NCM) classifier.
- Score: 7.800379384628357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Targeting real-world scenarios, online continual learning aims to learn
new tasks from sequentially available data under the condition that each data
sample is observed only once by the learner. Although recent works have achieved
remarkable results by storing part of the learned task data as exemplars for
knowledge replay, performance relies heavily on the number of stored exemplars,
while storage consumption is a significant constraint in continual learning. In
addition, storing exemplars may not always be feasible for certain applications
due to privacy concerns. In this work, we propose a novel exemplar-free method
that leverages a nearest-class-mean (NCM) classifier, where each class mean is
estimated during the training phase over all data seen so far through an online
mean-update criterion. We focus on the image classification task and conduct
extensive experiments on benchmark datasets including CIFAR-100 and Food-1k. The
results demonstrate that our method, without using any exemplars, outperforms
state-of-the-art exemplar-based approaches by large margins under the standard
protocol (20 exemplars per class) and achieves competitive performance even when
the exemplar-based methods use a larger exemplar size (100 exemplars per class).
Related papers
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and Imagenet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- From Categories to Classifiers: Name-Only Continual Learning by Exploring the Web [118.67589717634281]
Continual learning often relies on the availability of extensive annotated datasets, an assumption that is unrealistically time-consuming and costly to satisfy in practice.
We explore a novel paradigm, termed name-only continual learning, in which time and cost constraints prohibit manual annotation.
Our proposed solution leverages the expansive and ever-evolving internet to query and download uncurated webly-supervised data for image classification.
arXiv Detail & Related papers (2023-11-19T10:43:43Z)
- Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning [14.462797749666992]
We propose a holistic approach to jointly learn the representation and class prototypes.
We propose a novel distillation loss that constrains class prototypes to maintain their relative similarities with respect to new task data.
This method yields state-of-the-art performance in the task-incremental setting.
arXiv Detail & Related papers (2023-03-26T16:35:45Z)
- Real-Time Evaluation in Online Continual Learning: A New Hope [104.53052316526546]
We evaluate current Continual Learning (CL) methods with respect to their computational costs.
A simple baseline outperforms state-of-the-art CL methods under this evaluation.
Surprisingly, this suggests that the majority of the existing CL literature is tailored to a specific class of streams that is not practical.
arXiv Detail & Related papers (2023-02-02T12:21:10Z)
- A soft nearest-neighbor framework for continual semi-supervised learning [35.957577587090604]
We propose an approach for continual semi-supervised learning where not all the data samples are labeled.
We leverage the power of nearest neighbors to nonlinearly partition the feature space and flexibly model the underlying data distribution.
Our method works well on both low- and high-resolution images and scales seamlessly to more complex datasets.
arXiv Detail & Related papers (2022-12-09T20:03:59Z)
- NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research [96.53307645791179]
We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks, from OCR to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
arXiv Detail & Related papers (2022-11-15T18:57:46Z)
- Class-Aware Contrastive Semi-Supervised Learning [51.205844705156046]
We propose a general method named Class-aware Contrastive Semi-Supervised Learning (CCSSL) to improve pseudo-label quality and enhance the model's robustness in the real-world setting.
Our proposed CCSSL yields significant performance improvements over state-of-the-art SSL methods on the standard datasets CIFAR-100 and STL10.
arXiv Detail & Related papers (2022-03-04T12:18:23Z)
- Online Continual Learning Via Candidates Voting [7.704949298975352]
We introduce an effective and memory-efficient method for online continual learning under the class-incremental setting.
Our proposed method achieves the best results on different benchmark datasets for online continual learning, including CIFAR-10, CIFAR-100, and CORE-50.
arXiv Detail & Related papers (2021-10-17T15:45:32Z)
- ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning [52.831894583501395]
Continual learning typically assumes the incoming data are fully labeled, which might not hold in real applications.
We propose deep Online Replay with Discriminator Consistency (ORDisCo) to interdependently learn a classifier with a conditional generative adversarial network (GAN).
We show ORDisCo achieves significant performance improvements on various semi-supervised learning benchmark datasets for SSCL.
arXiv Detail & Related papers (2021-01-02T09:04:14Z)
- OvA-INN: Continual Learning with Invertible Neural Networks [0.0]
OvA-INN is able to learn one class at a time without storing any of the previous data.
We show that we can take advantage of pretrained models by stacking an Invertible Network on top of a feature extractor.
arXiv Detail & Related papers (2020-06-24T14:40:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.